AI, Crypto, and the Threat That Wasn't

Edge Capital | April 24, 2026

Our research indicates that the thesis that AI development creates the conditions for large-scale compromise of the crypto space is not supported by the data of 2026. The largest incidents of 2026 are the result of targeted operations requiring substantial time and expertise. The role of AI in these operations remains auxiliary.


A simplified narrative has taken hold in the industry: that the development of generative models has qualitatively transformed the threat landscape for blockchain protocols. According to this view, the accessibility of AI tooling has lowered the barrier to attack to a level at which the crypto space becomes fundamentally indefensible. This interpretation requires refinement.

Analysis of 2026 incidents conducted by Edge Capital supports a different position.

The largest losses in the crypto space over the past twelve months are the result of multi-month targeted operations relying on social engineering and supply chain attacks. The role of AI in these incidents remains auxiliary. At the same time, AI is becoming a meaningful instrument of defense — across monitoring, audit, risk management, and active response.

This article systematizes the evidentiary basis for that position.


1. Loss Structure in 2026

Aggregate losses across the DeFi sector and the broader crypto industry for the first four months of 2026 exceeded $750 million, according to data from DefiLlama and PeckShield. Two incidents — Drift Protocol ($285 million, April 1) and Kelp DAO ($293 million, April 19) — surpassed the size of any individual DeFi exploit recorded in 2023 or 2024.

The distribution of losses by attack vector demonstrates a consistent pattern. Per estimates from Chainalysis and independent analytical groups, infrastructure-level attacks — private key compromise, social engineering, and frontend hijacking — account for approximately 76% of aggregate damage in early 2026. Pure smart contract exploits represent a smaller share of losses, despite the disproportionate attention they receive in public discourse.

This structure indicates that the sector's principal risks lie not in the domain of automated exploit generation, but in operational security and the architecture of trust.


2. Analysis of Key Incidents

Drift Protocol: A Six-Month Operation

The attack on Drift Protocol on April 1, 2026 is an example of a sustained targeted operation. According to the team's published post-mortem, the attackers — preliminarily linked to DPRK-affiliated groups — established contact with Drift contributors at industry conferences while posing as a quantitative trading firm.

Over the course of six months, the attackers maintained communication via Telegram, working sessions, and in-person meetings at international events. They opened a vault on the platform, deposited over one million dollars of real capital, participated in strategic discussions with the team, and gradually built trust with several members of the protocol's Security Council.

The technical phase of the attack relied on Solana's durable nonces feature to obtain pre-signed transactions transferring administrative control. The final extraction of $285 million in USDC, SOL, and ETH was executed within twelve minutes, using as fictitious collateral a CVT token created specifically for the purpose on March 12, 2026.
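The mechanism that made the pre-signed transactions dangerous can be sketched in a toy model. This is a hypothetical simplification, not the Solana SDK: it only illustrates why a transaction referencing a recent blockhash expires quickly, while a durable-nonce transaction stays valid until the nonce account advances, so a signature obtained months earlier remains executable.

```python
# Toy model (illustrative, not Solana's actual implementation) of why durable
# nonces extend the lifetime of a pre-signed transaction indefinitely.

RECENT_BLOCKHASH_WINDOW = 150  # Solana retains roughly 150 recent blockhashes

class Chain:
    def __init__(self):
        self.height = 0
        self.nonce_account = "nonce-0"  # stored nonce value; advances on use

    def advance_blocks(self, n):
        self.height += n

def is_valid(chain, tx):
    if tx["kind"] == "recent_blockhash":
        # valid only while the referenced block is still "recent"
        return chain.height - tx["ref_height"] <= RECENT_BLOCKHASH_WINDOW
    if tx["kind"] == "durable_nonce":
        # valid as long as the stored nonce has not been advanced
        return tx["nonce"] == chain.nonce_account
    return False

chain = Chain()
normal_tx = {"kind": "recent_blockhash", "ref_height": chain.height}
nonce_tx = {"kind": "durable_nonce", "nonce": chain.nonce_account}

chain.advance_blocks(100_000)  # months pass between signing and execution

print(is_valid(chain, normal_tx))  # False: blockhash long expired
print(is_valid(chain, nonce_tx))   # True: pre-signed tx is still executable
```

The defensive implication is that durable-nonce accounts controlled by privileged signers deserve the same monitoring as the keys themselves: an outstanding nonce is an outstanding authorization.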

No role for AI in this operation has been documented. The attackers' principal resources were time, operational discipline, and expert understanding of the protocol's architecture.

Bybit: Supply Chain Compromise

The theft of $1.5 billion from the Bybit exchange on February 21, 2025, remains the largest incident in the history of the crypto industry. The attack did not target Bybit's infrastructure directly. The object of compromise was the third-party multisig provider Safe{Wallet}.

The attackers compromised the workstation of a Safe{Wallet} developer through social engineering, obtained AWS session tokens, and bypassed multi-factor authentication. Through the compromised access to the AWS environment, malicious JavaScript code was injected into the Safe{Wallet} user interface, targeting Bybit transactions exclusively. During a routine cold-to-hot wallet transfer, the interface displayed the correct destination address while the smart contract executed a transaction transferring control of the wallet to the attackers.
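The what-you-see-is-not-what-you-sign pattern at the core of this incident can be reduced to a short sketch. The names and data structures below are invented for illustration and do not reflect Safe{Wallet}'s real data model; the point is that an honest interface derives its display from the payload, while a tampered one displays the expected values regardless, so only an independent re-derivation of the display from the raw payload catches the mismatch.

```python
# Hypothetical sketch of UI tampering during transaction signing.
# "bybit-hot-wallet" and "attacker-contract" are illustrative labels.

def render_ui(payload, tampered=False):
    # Honest UI: display is computed from the payload being signed.
    # Tampered UI: display shows the expected routine transfer regardless.
    if tampered:
        return {"shown_to": "bybit-hot-wallet", "shown_op": "transfer"}
    return {"shown_to": payload["to"], "shown_op": payload["op"]}

# What actually got signed: an operation changing the wallet's logic.
malicious_payload = {"to": "attacker-contract", "op": "delegatecall"}

display = render_ui(malicious_payload, tampered=True)
print(display["shown_to"])      # bybit-hot-wallet  (looks routine)
print(malicious_payload["to"])  # attacker-contract (what actually executes)

# Defense: re-derive the display from the raw payload on a second,
# uncompromised device before signing, and refuse on any mismatch.
def independent_check(payload, display):
    return display["shown_to"] == payload["to"] and display["shown_op"] == payload["op"]

print(independent_check(malicious_payload, display))  # False: refuse to sign
```

In practice this corresponds to verifying the transaction hash and decoded calldata on independent hardware rather than trusting a single web frontend.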

The FBI formally attributed the operation to the TraderTraitor subdivision of the Lazarus Group. The incident is a textbook example of the supply chain attacks that APT-level actors have been executing for many years.

Kelp DAO: Exploitation of the Trust Architecture

The Kelp DAO incident of April 19, 2026, which resulted in the extraction of $293 million, required deep understanding of cross-chain communication mechanics. The attacker deceived the LayerZero EndpointV2 contract into processing a forged instruction as a legitimate cross-chain message. As a result, the Kelp bridge released 116,500 rsETH tokens directly to the attacker — approximately 18% of total circulating supply.
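The class of failure involved can be illustrated with a minimal sketch. This is not LayerZero's actual API; the trusted-remote set and message fields are invented for illustration. The point is structural: a bridge endpoint that releases tokens without fully verifying a message's origin will treat a forged instruction as legitimate.

```python
# Illustrative sketch of cross-chain message origin verification.
# TRUSTED_REMOTES and all addresses are hypothetical.

TRUSTED_REMOTES = {("ethereum", "0xKelpGateway")}  # assumed legitimate source

def process_message(msg, strict=True):
    origin = (msg["src_chain"], msg["src_address"])
    if strict and origin not in TRUSTED_REMOTES:
        raise ValueError("untrusted origin, message rejected")
    # release the token amount encoded in the message body
    return msg["amount"]

forged = {"src_chain": "ethereum", "src_address": "0xAttacker", "amount": 116_500}

# A lax endpoint processes the forged instruction as legitimate:
released = process_message(forged, strict=False)
print(released)  # 116500 tokens released to the attacker

# Strict origin verification rejects the same message:
try:
    process_message(forged, strict=True)
except ValueError as e:
    print(e)  # untrusted origin, message rejected
```

Real endpoints layer additional defenses (payload verification, independent verifier networks, rate limits on release), but every layer reduces to the same question: is this message's claimed origin actually authenticated?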

The exploit was based on specialized expertise in restaking architecture and LayerZero, not on broadly available tools for automated generation.


3. The Documented Influence of AI on the Threat Landscape

Acknowledging the limited role of AI in the largest incidents does not amount to denying its influence on the structure of threats. That influence exists, but its character differs from what is commonly assumed.

Lowering the barrier for mass-volume attacks. The low-tier attack segment — phishing campaigns, wallet drainers, exploits of forks of obscure protocols — has shown a significant increase in activity. Analysis of darknet forums and Telegram channels shows that a new class of offerings oriented toward non-professional participants has taken shape. The result is not a qualitative sophistication of attacks, but a quantitative increase in the baseline level of threats.

Expansion of the attack surface through AI tooling. Research published in April 2026 by a team from UC Santa Barbara, UC San Diego, Fuzzland, and World Liberty Financial documents 26 active malicious LLM routers performing tool-call injection and credential theft. In one verified case, a crypto wallet holding $500,000 was drained. The compromise of the LiteLLM package on PyPI in March 2026 demonstrated that the AI stack inherits the vulnerabilities of open-source supply chains, with a substantially larger blast radius.

Artifacts of AI generation in exploits. In the $128 million Balancer V2 exploit of November 3, 2025, the malicious smart contract contained leftover console.log debugging statements, a characteristic indicator of LLM generation that a qualified developer would have removed prior to deployment. The attack achieved its objective, but the artifacts themselves indicate that the use of AI does not eliminate the need for human expertise on the attacker's side.

In aggregate, AI alters the economics and scale of existing attack vectors but does not produce fundamentally new ways of compromising financial protocols.


4. AI in the Architecture of Defense

The symmetric reinforcement of the defensive side warrants separate examination, as it remains underrepresented in public discourse.

Real-time on-chain anomaly monitoring. During the Balancer attack of November 3, 2025, Check Point's automated blockchain analytics systems detected anomalous outflows from the Vault contract within minutes. Solutions from Chainalysis, TRM Labs, Forta, and Hypernative provide round-the-clock monitoring at a density unattainable by human teams.

Audit and formal verification. AI-assisted static analysis tools are capable of processing codebases substantially faster than traditional manual audit. Their role is not to replace experts but to delegate routine identification of known vulnerability patterns, freeing qualified resources to work on complex invariants.

Active response and protocol pause. In the Kelp DAO case, the protocol-wide pause, activated 46 minutes after the first successful exploit transaction, blocked two subsequent attempts to extract an additional $100 million. Response speed of this order is unattainable without AI-augmented monitoring systems.
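The monitoring-plus-pause pattern described above can be sketched in a few lines. The baseline and threshold values are assumptions for illustration; the structure mirrors the Kelp sequence, where the first exploit transaction lands but the pause stops subsequent extraction.

```python
# Minimal sketch, under assumed thresholds, of an automated outflow monitor
# that triggers a protocol-wide pause on anomalous withdrawals.

BASELINE_OUTFLOW = 1_000_000   # assumed normal per-transaction outflow, USD
PAUSE_MULTIPLIER = 10          # assumed anomaly threshold

class Protocol:
    def __init__(self):
        self.paused = False
        self.lost = 0

    def withdraw(self, amount):
        if self.paused:
            return "blocked"
        self.lost += amount
        return "executed"

def monitor(protocol, amount):
    result = protocol.withdraw(amount)
    # flag anomalous outflow after observing it; the pause blocks later attempts
    if amount > BASELINE_OUTFLOW * PAUSE_MULTIPLIER:
        protocol.paused = True
    return result

p = Protocol()
print(monitor(p, 150_000_000))  # executed: first exploit transaction lands
print(monitor(p, 60_000_000))   # blocked: pause stops the second attempt
print(monitor(p, 40_000_000))   # blocked: and the third
print(p.lost)                   # 150000000: losses capped at the first transaction
```

The economics follow directly: the pause does not prevent the initial loss, but it bounds total damage, which is why response latency measured in minutes rather than hours is the operative metric.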

Dynamic risk management. Risk assessment systems for wallets, counterparties, and assets, built on machine learning models trained on on-chain data, are maturing in the DeFi sector over months rather than the decades such systems required in traditional finance. This shapes base architectural decisions for oracles, bridges, and lending protocols.
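At its simplest, a wallet risk score of the kind described is a learned function over on-chain features. The sketch below uses a logistic score with invented weights and features; no production model is implied, and real systems use far richer feature sets trained on labeled incident data.

```python
# Hypothetical wallet risk score: a logistic function over a few on-chain
# features. Weights and features are illustrative inventions.
import math

WEIGHTS = {"age_days": -0.02, "mixer_interactions": 1.5, "fresh_funding": 0.8}
BIAS = -1.0

def risk_score(features):
    # weighted sum of features squashed to a probability-like score in (0, 1)
    z = BIAS + sum(WEIGHTS[k] * features[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# A long-lived wallet with no mixer history scores near zero; a freshly
# funded wallet with mixer interactions scores near one.
old_wallet = {"age_days": 400, "mixer_interactions": 0, "fresh_funding": 0}
fresh_wallet = {"age_days": 1, "mixer_interactions": 2, "fresh_funding": 1}

print(round(risk_score(old_wallet), 3))
print(round(risk_score(fresh_wallet), 3))
```

A lending protocol or bridge can then gate behavior on the score, for example requiring additional confirmation delay above a threshold, which is the architectural influence the paragraph above refers to.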

The position articulated by Vitalik Buterin — that LLMs may serve as a supplementary layer for approximating intent in security systems but should not be used as the sole line of defense — represents a methodologically sound framework for assessing the current capabilities of AI.


5. Practical Implications for the Industry

The analysis of 2026 incidents allows for the formulation of three recommendations for protocols, funds, and institutional market participants.

Correct prioritization of risk. The principal share of DeFi threats in the current period stems from fundamental problems discussed for years: opaque multisig processes, rounding errors in smart contracts, insufficient protection of private keys, trust in third-party interfaces. The proliferation of AI does not eliminate the need for systematic work on these problems but raises the cost of ignoring them.

Symmetric investment in AI. Protocols that use AI exclusively in the product layer while retaining a manual approach to security find themselves in a losing position. The contemporary DeFi protocol security stack must include automated on-chain anomaly monitoring, AI-assisted audit, dynamic risk systems, and protocol-wide pauses with response thresholds measured in minutes.

Management of the trust supply chain. A protocol's security is bounded by the security of its weakest supplier — multisig provider, frontend host, oracle, bridge, RPC provider, npm dependency. The Bybit, LiteLLM, and Salesforce/Gainsight incidents represent different realizations of the same structural vulnerability.


Conclusion

The 2026 data on crypto hacks shows a more nuanced picture: AI is not the systemic threat to crypto that many predicted. The major incidents of the year were slow, deliberate, and expert-led. Where AI was present, it played a secondary role.

At the same time, AI is becoming a meaningful element of defensive infrastructure, providing monitoring, audit, and response at a density previously unavailable to the industry. The long-term resilience of the DeFi sector will be determined by the capacity of protocols to use AI as a multiplier of expertise — on the side of defense to the same degree it is used on the side of attack.

The vulnerability of the crypto space is not a consequence of the emergence of AI. It is a consequence of the degree to which the industry is prepared to systematically address known problems in the architecture of trust.


This material reflects the position of Edge Capital and does not constitute investment advice.