Last Updated: 12 May 2026
Social Engineering
Social engineering is the most reliable attack vector against digital asset institutions. Not because the technology is weak, but because human trust is exploitable at scale. Nation-state actors, organised criminal groups, and opportunistic fraudsters have all converged on the same truth: it is cheaper to compromise a person than a system.
For organisations whose security posture rests on individual vigilance alone, the threat landscape is genuinely alarming. For those built on structural controls, it is manageable.
This paper examines real-world attack chains, explains why generic awareness training falls short, and sets out the architectural defences that shift the balance.
Why Digital Assets Are a Prime Target
The characteristics that make digital assets powerful also make them attractive to attackers. Transactions are irreversible. Operations are global and often conducted remotely. Communication norms favour speed and informality. These conditions create an environment where impersonation is unusually easy and verification is unusually hard.
This is not a theoretical concern. The most sophisticated social engineering campaigns in the world are now directed at this industry, and the attackers are nation-state operators with effectively unlimited patience and resources.
Advanced Tactics: What Is Actually Happening
Tactic 01: The IT Worker Infiltration Programme
Beyond targeting existing employees, operatives of the Democratic People’s Republic of Korea (DPRK, North Korea) are placing themselves as employees inside Western companies. Using fabricated identities, AI face-swap software during video interviews, and domestic “laptop farms” to simulate local employment, North Korean IT workers have infiltrated over a hundred companies, including Fortune 500 firms.
In 2025 and 2026, multiple US nationals were sentenced to lengthy federal prison terms for facilitating the scheme, which leveraged stolen identities of at least eighty Americans. The broader programme is estimated by US authorities to generate hundreds of millions of dollars annually for the North Korean government.
The risk to digital asset firms is acute. These are not contractors doing inconsequential work. When one security firm unknowingly hired a DPRK operative, the individual installed malware on their first day. In a custody environment, that kind of access could reach source code, internal communications, infrastructure credentials, and signing systems.
Tactic 02: Industrialised Fake Recruitment — DPRK / Lazarus Group
North Korean state-backed actors have turned fake recruitment into a repeatable, scalable attack vector. In 2025, the FBI seized the domain of a front company registered in New Mexico by DPRK-affiliated actors tracked as the “Contagious Interview” subgroup of Lazarus. At least two additional front companies formed part of the same campaign, each maintaining professional websites and AI-generated employee personas using tools like Remaker to produce convincing photographs.
The attack is effective because it never asks the victim to do anything unusual. Targets are approached through normal job platforms by profiles that pass casual inspection. Interviews are conducted professionally. Malware is delivered through routine developer workflows: cloning a code repository and running a standard setup command that any engineer has executed thousands of times before.
In January 2026, Fireblocks CEO Michael Shaulov described the mechanics to CNBC. Attackers impersonated Fireblocks recruiters, conducted professional video interviews, and assigned coding tasks through legitimate-looking repositories. Running the setup commands triggered hidden malware. Shaulov noted that attackers specifically targeted engineers with access to custody infrastructure, signing systems, or deployment pipelines.
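The delivery mechanism described above exploits a real feature of package managers: lifecycle hooks that execute arbitrary commands the moment a developer runs a routine install. A structural mitigation is to treat those hooks as untrusted by default. The sketch below is illustrative (the function name and workflow are assumptions, though the npm hook names are real): it flags any manifest that would run code on `npm install`, so a reviewer can install with `--ignore-scripts` instead of relying on vigilance.

```python
import json
from pathlib import Path

# npm lifecycle hooks that execute automatically during `npm install`.
RISKY_HOOKS = {"preinstall", "install", "postinstall", "prepare"}

def flag_install_hooks(repo: str) -> list[str]:
    """Return any lifecycle scripts in package.json that would execute
    arbitrary code as a side effect of a routine `npm install`."""
    manifest = Path(repo) / "package.json"
    if not manifest.exists():
        return []
    scripts = json.loads(manifest.read_text()).get("scripts", {})
    return [f"{hook}: {cmd}" for hook, cmd in scripts.items() if hook in RISKY_HOOKS]
```

A check like this belongs in tooling, not in an engineer's head: if anything is flagged, the safe default is `npm install --ignore-scripts` inside a disposable sandbox, never a direct install on a workstation with custody access.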
The sophistication has evolved sharply. Fireblocks found that fake campaign materials referenced real business events announced only weeks earlier. As Shaulov noted, in 2017 DPRK actors were easy to spot. Now, their materials are indistinguishable from the real thing.
Tactic 03: Deepfake Video Calls — BlueNoroff
In mid-2025, the BlueNoroff cluster, a Lazarus subgroup specialising in cryptocurrency theft since at least 2017, conducted a multi-stage intrusion against an employee at a cryptocurrency foundation. Huntress, an independent cybersecurity firm specialising in threat detection and incident response, published the detailed technical analysis and attributed the attack with high confidence.
The chain began with a message on a popular messaging platform several weeks before the attack, establishing rapport and requesting a meeting. A scheduling link redirected to an attacker-controlled domain mimicking a legitimate conferencing platform. On the call itself, multiple deepfakes of the victim’s own senior leadership were present, lending apparent authority to the interaction.
During the call, the victim was told their microphone was not working and was directed to download what appeared to be a conferencing extension. The file was a script that silently installed a modular malware suite: a backdoor, a keylogger, a screen recorder, and an infostealer designed to harvest credentials from more than twenty-five browser-based crypto wallets. The attackers employed anti-forensic techniques, overwriting deployed files to hinder investigation.
This was not a phishing email with a suspicious link. It was a choreographed performance with real-time social pressure, deepfaked faces of people the victim personally recognised, and malware delivery disguised as routine troubleshooting.
Why Generic Advice Is Not Enough
Most social engineering guidance amounts to “be vigilant,” “verify requests,” and “think before you click.” This advice is not wrong, but it is insufficient against the tradecraft described above. Telling an engineer to “be sceptical of urgency” does not help when the attack looks exactly like a normal job interview. Telling a finance team to “verify identities” does not help when the face on the video call is a pixel-perfect deepfake of their own CFO.
If your security posture depends on every person making the right call every time, you do not have a security posture. You have crossed fingers.
The problem is structural, not individual. The solution must be too. When someone refuses to act on an unverified request and instead reconnects through a trusted medium, they are not simply “being cautious.” They are forcing the interaction onto ground where the threat is contained and managed, where identity is verified through controls the organisation owns rather than cues the attacker can fabricate. The deception does not need to be detected; it needs to be made irrelevant.

What Actually Works: Eliminating Attack Paths
The most effective defence does not depend on perfect detection. It depends on eliminating entire attack paths so that even a successful deception cannot reach its objective.
Recognition Is Not Authentication
A familiar face on a video call is not proof of identity. A message from a known contact on a messaging platform is not a verified instruction. A professional profile with mutual connections is not a background check.
Every individual within an organisation should feel empowered to disengage from any interaction that seems untrustworthy, even if it appears to come from senior leadership – especially if it appears to come from senior leadership. Requests issued through unfamiliar or unverified channels should not be negotiated or clarified in place. The correct approach is to disengage and reconnect via a trusted, pre-established medium.
“We Will Never” Statements
Organisations should publish and enforce explicit boundaries:
“We will never ask you to install software or extensions during a video call.”
“We will never send signing instructions via messaging apps or unverified channels.”
“We will never ask candidates to execute commands on repositories not hosted on our verified organisational infrastructure.”
These statements do two things. They give people a bright line to enforce, and they collapse the attacker’s options. A deepfaked executive asking someone to download software is no longer a judgement call. It is a policy violation, full stop.
Leadership-Backed Empowerment
In environments where authority can override controls, attackers thrive. The BlueNoroff deepfake attack worked precisely because the victim believed they were speaking to their own leadership and felt unable to refuse.
Security policy must be backed by explicit, visible leadership commitment. No one, regardless of seniority, is exempt from verification procedures. If a CEO’s instruction cannot survive a callback on a verified channel, it should not be followed. This is not bureaucracy. It is the single most effective countermeasure against authority-based social engineering.
Technical Controls That Remove Human Judgement
Where possible, remove the human from the critical path entirely:
- Device attestation (trust boundary): communications from unmanaged devices are treated as untrusted, regardless of the apparent sender.
- Endpoint controls (execution control): prevent execution of unsigned code, reducing the effectiveness of socially engineered malware delivery.
- Out-of-band transaction verification: signing requests are confirmed through a separate, pre-authenticated channel.
- Hardware-enforced MFA, FIDO2/WebAuthn (authentication): phishing-resistant by design, not by vigilance.
- Mandatory cooling-off periods (execution control): enforced delays for high-value or unusual operations.
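The out-of-band verification control can be made concrete. The sketch below is illustrative only (the class, key, and channel names are assumptions, not a description of any real custody system): a signing request proceeds only after its digest has been confirmed over a separate, pre-authenticated channel, so a deepfaked request on the compromised channel achieves nothing by itself.

```python
import hashlib
import hmac

def request_digest(payload: bytes, key: bytes) -> str:
    """Short digest of a signing request, computed independently on each
    side and read out over the out-of-band channel (e.g. an authenticated
    phone line). Keyed with HMAC so an attacker cannot precompute it."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()[:8]

class SigningGate:
    """Refuses to sign until the request has been confirmed out of band."""

    def __init__(self, key: bytes):
        self._key = key
        self._confirmed: set[str] = set()

    def confirm_out_of_band(self, digest: str) -> None:
        # Invoked via the independent channel, never by the requester.
        self._confirmed.add(digest)

    def sign(self, payload: bytes) -> bool:
        # The request on the primary channel is inert until the matching
        # digest arrives on the pre-authenticated one.
        return request_digest(payload, self._key) in self._confirmed
```

The design point is that the human on the primary channel has no decision to make: even total belief in the counterparty's identity cannot release a signature, because release depends on a channel the attacker does not control.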
The AI Acceleration Problem
AI does not fundamentally change the social engineering threat model. It accelerates whichever outcomes the existing structures support.
If your organisation relies on people spotting fakes, AI makes your position worse. Deepfakes are cheaper, synthetic profile photographs are generated in seconds, and the linguistic tells that once betrayed nation-state actors have disappeared entirely. If your organisation relies on structural controls that make deception irrelevant, AI changes very little.
The correct response to AI-enhanced social engineering is not better AI detection. It is ensuring that no amount of impersonation can bypass the controls that protect critical operations.
Komainu’s Perspective
At Komainu, social engineering defence is not a training module. It is an architectural principle. Our custody operations are designed so that no single human interaction, regardless of how convincing, can authorise a sensitive action without independent technical verification.
The threat is real, it is sophisticated, and it is specifically targeting this industry. But it is not insurmountable. Organisations that are structurally built for this, where controls are absolute and authority never overrides verification, deny social engineering the leverage it depends on. Those relying on awareness alone are relying on luck.
Key Takeaways
We integrate these principles across our operations:
- Layered custody controls that require multi-party authorisation through technically verified channels.
- Explicit “we will never” policies communicated to all staff and partners.
- Regular adversarial simulations that test not just whether people click links, but whether the organisation’s structure would survive a sophisticated, multi-stage campaign.
- A security culture where challenging authority is expected, not merely permitted.
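The multi-party authorisation principle in the first point reduces to a small invariant, sketched here with hypothetical names: no request proceeds unless a quorum of distinct, pre-registered approvers has independently confirmed it.

```python
def quorum_met(approvals: set[str], authorised: set[str], m: int) -> bool:
    """M-of-N authorisation: no single approval, however convincing its
    source, can release an action alone. Only approvals from the
    pre-registered set count towards the quorum."""
    return len(approvals & authorised) >= m
```

The intersection matters: an attacker injecting approvals from identities outside the pre-registered set moves the count not at all, which is precisely what makes impersonation of a single individual, however senior, insufficient.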
What Should Clients Ask?
- How is identity verified across untrusted channels?
- Which “we will never” policies are enforced in practice?
- What safeguards apply if an account or device is compromised?
- What prevents social-engineered requests from triggering custody actions?
