Last Updated: 12 May 2026

Social Engineering

Social engineering is the most reliable attack vector against digital asset institutions. Not because the technology is weak, but because human trust is exploitable at scale. Nation-state actors, organised criminal groups, and opportunistic fraudsters have all converged on the same truth: it is cheaper to compromise a person than a system.

For organisations whose security posture rests on individual vigilance alone, the threat landscape is genuinely alarming. For those built on structural controls, it is manageable.

This paper examines real-world attack chains, explains why generic awareness training falls short, and sets out the architectural defences that shift the balance.

Why Digital Assets Are a Prime Target

The characteristics that make digital assets powerful also make them attractive to attackers. Transactions are irreversible. Operations are global and often conducted remotely. Communication norms favour speed and informality. These conditions create an environment where impersonation is unusually easy and verification is unusually hard.

This is not a theoretical concern. The most sophisticated social engineering campaigns in the world are now directed at this industry, and the attackers are nation-state operators with effectively unlimited patience and resources.


Advanced Tactics: What Is Actually Happening

Two patterns dominate current campaigns. In the first, attackers stage what looks like a legitimate recruitment process, building rapport over weeks before asking the candidate to clone and execute code from a repository they control. In the second, attackers join video calls as deepfaked versions of an organisation's own executives, as in the BlueNoroff campaign discussed later in this paper, and issue instructions with apparent authority. Both exploit trust in familiar processes and familiar faces rather than any technical vulnerability.


Why Generic Advice Is Not Enough

Most social engineering guidance amounts to “be vigilant,” “verify requests,” and “think before you click.” This advice is not wrong, but it is insufficient against the tradecraft described above. Telling an engineer to “be sceptical of urgency” does not help when the attack looks exactly like a normal job interview. Telling a finance team to “verify identities” does not help when the face on the video call is a pixel-perfect deepfake of their own CFO.



The problem is structural, not individual. The solution must be too. When someone refuses to act on an unverified request and instead reconnects through a trusted medium, they are not simply “being cautious.” They are forcing the interaction onto ground where the threat is contained, and where identity is verified through controls the organisation owns rather than cues the attacker can fabricate. The deception does not need to be detected; it needs to be made irrelevant.


What Actually Works: Eliminating Attack Paths

The most effective defence does not depend on perfect detection. It depends on eliminating entire attack paths so that even a successful deception cannot reach its objective.


Recognition Is Not Authentication

A familiar face on a video call is not proof of identity. A message from a known contact on a messaging platform is not a verified instruction. A professional profile with mutual connections is not a background check.

Every individual within an organisation should feel empowered to disengage from any interaction that seems untrustworthy, even if it appears to come from senior leadership – especially if it appears to come from senior leadership. Requests issued through unfamiliar or unverified channels should not be negotiated or clarified in place. The correct approach is to disengage and reconnect via a trusted, pre-established medium.


“We Will Never” Statements

Organisations should publish and enforce explicit boundaries:

“We will never ask you to install software or extensions during a video call.”

“We will never send signing instructions via messaging apps or unverified channels.”

“We will never ask candidates to execute commands on repositories not hosted on our verified organisational infrastructure.”

These statements do two things. They give people a bright line to enforce, and they collapse the attacker’s options. A deepfaked executive asking someone to download software is no longer a judgement call. It is a policy violation, full stop.
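To illustrate how bright lines remove judgement from the loop, the statements above can even be encoded as machine-checkable rules, so that a violation is flagged mechanically rather than weighed in the moment. The sketch below is purely illustrative: the rule names, request fields, and thresholds are assumptions for the example, not a description of any real policy engine.

```python
# Hypothetical sketch: "we will never" statements as machine-checkable
# bright-line rules. Field names and the rule set are illustrative only.

POLICIES = {
    "install_on_call": "We will never ask you to install software or extensions during a video call.",
    "signing_via_messaging": "We will never send signing instructions via messaging apps or unverified channels.",
    "unverified_repo": "We will never ask candidates to execute commands on unverified repositories.",
}

def check_request(request: dict) -> list[str]:
    """Return the policy statements a request violates; an empty list
    means no bright line was crossed (not that the request is safe)."""
    violations = []
    if request.get("channel") == "video_call" and request.get("asks_install"):
        violations.append(POLICIES["install_on_call"])
    if request.get("type") == "signing" and request.get("channel") == "messaging_app":
        violations.append(POLICIES["signing_via_messaging"])
    if request.get("type") == "code_execution" and not request.get("repo_verified", False):
        violations.append(POLICIES["unverified_repo"])
    return violations
```

The key property is that the apparent sender never appears in the rules: a signing instruction arriving over a messaging app is a violation whether it comes from a stranger or from a pixel-perfect deepfake of the CFO.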


Leadership-Backed Empowerment

In environments where authority can override controls, attackers thrive. The BlueNoroff deepfake attack worked precisely because the victim believed they were speaking to their own leadership and felt unable to refuse.

Security policy must be backed by explicit, visible leadership commitment. No one, regardless of seniority, is exempt from verification procedures. If a CEO’s instruction cannot survive a callback on a verified channel, it should not be followed. This is not bureaucracy. It is the single most effective countermeasure against authority-based social engineering.


Technical Controls That Remove Human Judgement

Where possible, remove the human from the critical path entirely:

Device attestation: communications from unmanaged devices are treated as untrusted, regardless of the apparent sender

Endpoint controls: execution of unsigned code is blocked, blunting malware delivered through social engineering

Out-of-band transaction verification: signing requests are confirmed through a separate, pre-authenticated channel

Hardware-enforced MFA (FIDO2/WebAuthn): phishing-resistant by design, not by vigilance

Mandatory cooling-off periods: high-value or unusual operations are delayed before execution
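To make the out-of-band verification and cooling-off ideas concrete, here is a minimal sketch of how the two controls compose. The function names, channels, and thresholds are assumptions for the example, not a description of any production custody system.

```python
import time

# Illustrative values: any real deployment would set these per policy.
COOLING_OFF_SECONDS = 3600       # assumed delay for high-value operations
HIGH_VALUE_THRESHOLD = 100_000   # assumed threshold, in account currency units

def authorise_signing(request: dict, confirm_out_of_band) -> str:
    """Approve a signing request only if an independent channel confirms it,
    and enforce a cooling-off delay for high-value operations.

    `confirm_out_of_band` re-verifies the request over a separate,
    pre-authenticated channel the requester did not choose (for example,
    a hardware-token-protected approval app).
    """
    # 1. The originating channel is never trusted on its own: the request
    #    must be re-confirmed through an independent, pre-authenticated path.
    if not confirm_out_of_band(request):
        return "rejected: no out-of-band confirmation"

    # 2. High-value operations wait out a mandatory cooling-off period,
    #    so urgency manufactured by an attacker buys them nothing.
    if request["amount"] >= HIGH_VALUE_THRESHOLD:
        elapsed = time.time() - request["submitted_at"]
        if elapsed < COOLING_OFF_SECONDS:
            return "pending: cooling-off period in effect"

    return "approved"
```

Note that a perfectly convincing deepfake on the originating call changes nothing here: the request still fails without independent confirmation, and even a confirmed high-value request cannot be rushed past the delay.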



Komainu’s Perspective

At Komainu, social engineering defence is not a training module. It is an architectural principle. Our custody operations are designed so that no single human interaction, regardless of how convincing, can authorise a sensitive action without independent technical verification.

The threat is real, it is sophisticated, and it is specifically targeting this industry. But it is not insurmountable. Organisations that are structurally built for this, where controls are absolute and authority never overrides verification, deny social engineering the leverage it depends on. Those relying on awareness alone are relying on luck.


Key Takeaways

We integrate these principles across our operations:

  • Layered custody controls that require multi-party authorisation through technically verified channels.
  • Explicit “we will never” policies communicated to all staff and partners.
  • Regular adversarial simulations that test not just whether people click links, but whether the organisation’s structure would survive a sophisticated, multi-stage campaign.
  • A security culture where challenging authority is expected, not merely permitted.