By Cameron Coller

Security Engineer, Komainu

Artificial intelligence has become a defining feature of modern security operations. Yet most discussions still treat it as a single concept rather than two distinct realities: how it is used within security teams, and how it is used against them.

That distinction matters. AI is neither inherently defensive nor inherently dangerous. Its impact depends entirely on how it is applied, governed, and constrained.

Part I: AI inside security – augmentation, not replacement

AI is poorly suited to deterministic problems. Those remain the domain of structured controls such as policy enforcement, identity assurance, access boundaries, and preventative safeguards. Its strength lies instead in non-deterministic work: interpreting ambiguity, identifying context, prioritising response, and supporting decisions under uncertainty.

That is where AI delivers real value. Modern environments generate vast amounts of telemetry, including authentication events, behavioural data, endpoint activity and infrastructure signals. The challenge is no longer collecting data but interpreting it quickly enough to act.

AI serves as a context engine. It links weak signals across systems, filters noise, and delivers relevant insight at the moment decisions must be made. Instead of reviewing raw alerts, security teams receive distilled intelligence: what changed, why it matters, and where attention is needed now. This is not about autonomy – analysts remain accountable for outcomes, while AI provides the scale and speed they need to keep pace. Humans still exercise the judgement; machines extend their reach.
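
As a concrete illustration, the sketch below shows that correlation step in miniature. The Signal structure, telemetry sources, scoring weights and threshold are all assumptions made for the example, not a description of any particular product.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # telemetry stream, e.g. "auth" or "endpoint" (illustrative)
    entity: str      # the user or host the signal concerns
    severity: float  # 0.0-1.0, assumed normalised upstream
    detail: str

def correlate(signals: list[Signal], threshold: float = 1.5) -> list[str]:
    """Group weak signals by entity and surface only those combinations
    whose aggregate score crosses the review threshold."""
    by_entity: dict[str, list[Signal]] = defaultdict(list)
    for s in signals:
        by_entity[s.entity].append(s)

    findings: list[tuple[float, str]] = []
    for entity, group in by_entity.items():
        # Weight cross-source corroboration: the same entity surfacing in
        # several telemetry streams matters more than one loud alert.
        score = sum(s.severity for s in group) * len({s.source for s in group})
        if score >= threshold:
            details = "; ".join(s.detail for s in group)
            findings.append((score, f"{entity}: {details}"))

    findings.sort(reverse=True)
    return [f"[{score:.1f}] {text}" for score, text in findings]

# Three individually unremarkable signals about one account combine
# into a single prioritised finding.
alerts = [
    Signal("auth", "j.doe", 0.3, "login from new country"),
    Signal("endpoint", "j.doe", 0.4, "unsigned binary executed"),
    Signal("network", "j.doe", 0.3, "burst of outbound DNS"),
]
for finding in correlate(alerts):
    print(finding)
```

The design point is that analysts review one scored finding instead of three disconnected alerts: judgement stays with the human, aggregation with the machine.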

Orchestration, MCP, and agentic workflows

The longer-term potential of AI lies not in content generation, but in orchestration. The Model Context Protocol (MCP) creates a structured interface between AI systems and operational environments, effectively a controlled API that allows models or agents to access specific tools, data and live context.
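
A minimal sketch helps make the pattern concrete. The ToolRegistry class below is not the official MCP SDK; it is an assumed, simplified stand-in for the core idea of a controlled interface where only deliberately exposed tools are reachable.

```python
from typing import Callable

class ToolRegistry:
    """A simplified stand-in for an MCP-style server: tools are exposed
    by name, and nothing outside the registry is reachable by the model.
    An illustration of the pattern, not the official MCP SDK."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., str]] = {}

    def register(self, name: str):
        def wrap(fn: Callable[..., str]) -> Callable[..., str]:
            self._tools[name] = fn
            return fn
        return wrap

    def call(self, name: str, **kwargs) -> str:
        if name not in self._tools:
            raise PermissionError(f"tool '{name}' is not exposed to the agent")
        return self._tools[name](**kwargs)

registry = ToolRegistry()

@registry.register("lookup_asset")
def lookup_asset(hostname: str) -> str:
    # Read-only context the model may retrieve; data is hypothetical.
    return f"{hostname}: owner=platform-team, criticality=high"

print(registry.call("lookup_asset", hostname="db-01"))
try:
    registry.call("delete_host", hostname="db-01")  # never registered
except PermissionError as exc:
    print(exc)
```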

This enables bounded agentic workflows: gathering evidence, enriching incidents, testing hypotheses and preparing response options in real time. The potential is significant, but the capability is still developing. Without strict scoping and governance, agentic systems can fail in new ways, from cascading hallucinations to prompt injection and context manipulation.
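
To show what "bounded" can mean in practice, here is a minimal sketch of an agent executor with hard constraints. The step budget, allowlist and tool names are hypothetical; the point is that scope is enforced by the harness, not promised by the model.

```python
MAX_STEPS = 5                                  # hard step budget
READ_ONLY = {"lookup_asset", "fetch_logs"}     # illustrative allowlist

def run_bounded_agent(plan: list[tuple[str, dict]]) -> list[str]:
    """Execute an agent's proposed steps under hard constraints: a step
    budget, a read-only allowlist, and human sign-off for anything that
    would change state."""
    transcript: list[str] = []
    for step, (tool, args) in enumerate(plan):
        if step >= MAX_STEPS:
            transcript.append("halted: step budget exhausted")
            break
        if tool not in READ_ONLY:
            # State-changing actions are never executed automatically.
            transcript.append(f"queued for human approval: {tool} {args}")
            continue
        transcript.append(f"executed: {tool} {args}")
    return transcript

proposed = [
    ("fetch_logs", {"host": "db-01"}),
    ("lookup_asset", {"hostname": "db-01"}),
    ("isolate_host", {"host": "db-01"}),  # requires a human decision
]
for line in run_bounded_agent(proposed):
    print(line)
```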

Transparency, validation and human oversight remain essential. When used deliberately, AI accelerates insight; when used without discipline, it amplifies error.

Part II: AI outside security – offence, but not inevitability

From a defensive perspective, AI does not introduce a fundamentally new threat class. Security teams already contend with sustained phishing, impersonation, and targeted social engineering, including activity associated with advanced persistent threat groups and nation-state actors. In reality, much of what is now described as “AI-powered” attack activity is simply the automation of long-established techniques.

AI lowers the barrier to entry by increasing volume rather than sophistication. Large language models typically reproduce existing phishing formats instead of inventing new ones. For organisations with mature controls, such as strong identity verification, behavioural baselines and contextual detection, these attempts are largely visible, containable and often routine.

Synthetic authority without leverage

Voice and video impersonation is often cited as one of AI’s most worrying developments. Although technically sophisticated, its real-world influence is frequently overstated.

Recognition is not authentication, and authority is not proof. Every individual within an organisation should feel empowered to disengage from any interaction that seems untrustworthy, even if it appears to come from senior leadership. Requests issued through unfamiliar or unverified channels should not be negotiated or clarified in place. The correct approach is to disengage and reconnect via a trusted medium.

This speaks to a broader principle: the most effective defence does not depend on perfect detection but on eliminating entire attack paths. Clear policy frameworks, explicit “we will never” statements, technical verification measures, and leadership-backed empowerment remove the leverage that social engineering relies on.
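
One way to make a “we will never” statement operational is to encode it as an absolute rule rather than guidance, checked before any question of channel or authority arises. The request types, channel names and rule wording below are assumptions made for the sketch.

```python
# Illustrative "we will never" rules, encoded so that no judgement call
# is required in the moment.
NEVER_RULES = {
    "payment_change": "payment details are never changed from an inbound request alone",
    "credential_request": "credentials and MFA codes are never shared, on any channel",
}
VERIFIED_CHANNELS = {"ticketing", "signed_email"}

def evaluate_request(kind: str, channel: str) -> str:
    # Absolute rules come first: seniority and urgency never enter the decision.
    if kind in NEVER_RULES:
        return f"refuse: {NEVER_RULES[kind]}"
    if channel not in VERIFIED_CHANNELS:
        return "disengage: re-initiate contact via a trusted channel"
    return "proceed"

print(evaluate_request("payment_change", "phone"))      # absolute rule applies
print(evaluate_request("access_review", "phone"))       # unverified channel
print(evaluate_request("access_review", "ticketing"))   # verified, no rule hit
```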

In environments where authority can override controls, attackers thrive. Where controls are absolute and leadership takes a clear stance, social engineering quickly loses power. AI does not fundamentally alter this dynamic; it merely accelerates whichever outcomes the existing structures support.

Intelligence with accountability

AI is neither a unique threat nor a universal solution. Internally, it is a powerful augmentation layer for analysis and decision-making under uncertainty. Externally, it is an accelerant of known attack patterns rather than a step-change in adversary capability.

For organisations built on informal authority, exception-driven culture, or unclear policy, AI can magnify risk. For those grounded in proof, process, and explicit empowerment, it becomes a force multiplier.

The difference is not the presence of AI, but the strength of the foundations it operates on. When intelligence is paired with governance, and automation with accountability, AI becomes an asset rather than a liability, and entire classes of attack disappear instead of having to be managed.