Crypto Risk in 2026: Why 75% of Web3 Exploits Happen Outside Smart Contract Audits

by Dmytro Zap
12m

Intro: How much have we spent on PDFs?

Since 2020, DeFi alone has paid for nearly 10,000 smart contract audits. That is roughly half a billion dollars in security spend, enough to buy yourself a Fiorentina-sized football club instead of PDF reports. Over the same period, attackers have extracted at least $10 billion from Web3 protocols. Around 75% of that capital was drained in ways no audit was scoped to check (Rekt, 2023 through 2025).

Just last week, Kelp DAO was exploited for $292 million. The protocol's bridge and rsETH contracts had been audited twice. A single-point-of-failure dependency drained its pools anyway.

The pattern has been visible for years. Once the industry began serious contract auditing, failures migrated. Attackers went where the walls were thin: people, policies, infrastructure, and dependencies. Funds now get stolen through social engineering, insider compromise, stale key rotation, and registrar-level domain takeovers. None of those show up in a Solidity review.

So, should we stop auditing and start certifying? That is the question this article takes apart.

A smart contract audit checks whether the vault where funds are stored is hacker-proof. But sometimes the key to the vault is sitting in an unlocked drawer that even an intern can open.

What a smart contract audit is

Let's start with what an audit actually is, because the industry has quietly let "we've been audited" become synonymous with "we're safe" — and those are not the same sentence.

A smart contract audit is a point-in-time code review. A firm takes your deployed or pre-deployment codebase and works through it along two parallel tracks. The automated track runs the code through static analysis tools like Slither or MythX, plus fuzzers that feed thousands of malformed inputs to your functions to surface edge cases. It also includes formal verification tools that mathematically prove whether specific properties hold under all possible states.

The manual track is a human auditor reading the logic: checking that access control is correctly scoped, that integer arithmetic can't overflow or underflow in ways that move money incorrectly, that reentrancy guards are in place wherever external calls happen, and that the order of operations in a function can't be exploited by a well-timed transaction from someone watching the mempool.
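The reentrancy case is worth a concrete sketch. Below is a toy Python model of the classic bug (not real EVM semantics; all class and method names are hypothetical): an external call made before the state update lets the caller re-enter and withdraw against stale accounting.

```python
class Vault:
    """Toy vault: models contract accounting, not actual EVM execution."""
    def __init__(self, pool=100):
        self.pool = pool          # total funds the contract holds
        self.credit = {}          # per-user balances

    def deposit(self, user, amount):
        self.pool += amount
        self.credit[user] = self.credit.get(user, 0) + amount

    def withdraw_vulnerable(self, user):
        amount = self.credit.get(user, 0)
        if amount > 0 and self.pool >= amount:
            self.pool -= amount
            user.receive(self)        # external call FIRST...
            self.credit[user] = 0     # ...state update LAST (the bug)

class Attacker:
    """Its receive() callback re-enters while its credit is still stale."""
    def __init__(self):
        self.stolen = 0
    def receive(self, vault):
        self.stolen += 10
        if vault.pool >= 10:
            vault.withdraw_vulnerable(self)   # re-enter the vault

vault = Vault(pool=100)
attacker = Attacker()
vault.deposit(attacker, 10)       # pool is now 110, attacker credit is 10
vault.withdraw_vulnerable(attacker)
print(attacker.stolen)            # 110: far more than the 10 deposited
```

The fix auditors check for is the checks-effects-interactions order: zero out the credit before making the external call, so the re-entrant call sees an already-updated state.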

What auditors specifically hunt for:

State manipulation. Can an attacker force the contract into an unexpected state that unlocks funds or privileges? 

Logic flaws. Does the code do what the developer intended, and does what the developer intended actually hold up when an adversary is actively probing it?

Dependency assumptions. If this contract calls an external contract, what happens if that external contract behaves maliciously or returns unexpected data?

Flash loan attack surfaces. Can someone borrow a massive sum within a single transaction block, manipulate a price or a vote, extract value, and repay the loan before the block closes, leaving the protocol drained and the attacker with clean hands?
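The flash loan bullet describes the whole attack in one sentence; a toy constant-product AMM in Python makes the price-manipulation step concrete (hypothetical pool sizes, no fees, real pools differ):

```python
def swap(pool_x, pool_y, dx):
    """Constant-product AMM: sell dx of token X into the pool, no fees."""
    k = pool_x * pool_y                   # invariant: x * y = k
    new_x = pool_x + dx
    new_y = k / new_x
    return new_x, new_y, pool_y - new_y   # last value: dy paid to seller

pool_x, pool_y = 1_000_000.0, 1_000_000.0
print(pool_x / pool_y)    # spot price of Y in X: 1.0 before the attack

# Step 1: flash-borrow a huge amount of X (repayable in the same block).
loan = 9_000_000.0
# Step 2: dump it into the pool, skewing the spot price an oracle might read.
pool_x, pool_y, dy = swap(pool_x, pool_y, loan)
print(pool_x / pool_y)    # 100.0: spot price manipulated 100x

# Steps 3-4 (not modeled here): borrow elsewhere against the inflated
# price, swap back, repay the flash loan, all before the block closes.
```

Auditors probe for exactly this: any function that reads a spot price from a pool an attacker can move within a single transaction.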

When they're done, the project team gets a report categorizing findings by severity: critical, high, medium, low, informational. Each entry describes the vulnerability, the affected code, and a recommended fix. Then the project fixes what it fixes, the firm reviews the fixes, and the report goes public. That's the whole product. It covers the code that existed at the moment they looked at it, in the state it was in, with the assumptions baked into it at that time.

If that surface looks big to you, you're right, and the cost is big too. But both look smaller next to what operational audits cover: ISO and CCSS certification, financial audits, and regulatory readiness analysis.

Now, three things happen in practice that make even that limited coverage smaller than it looks.

First, coverage is a budget conversation. Every function a security firm reviews costs money. More functions, bigger invoice. Sometimes projects consciously narrow coverage to the essential functions to save money; sometimes they just buy the cheapest option to check the box of an audit required for further liquidity acquisition. Drift Protocol had audited smart contracts when it was exploited on April 1, 2026, but its governance function was largely unaudited. That gap enabled an insider attack that drained $285M. The code that didn't get reviewed held one of the keys to the door.

Second, code changes. Projects ship updates, patch bugs, add features. Each change leaves the past audit in the past, its guarantees expired. If a project updates fast and still audits everything, it probably has unlimited runway and a genuine commitment to security.

Third, and this one is almost funny in how often it happens: projects treat the audit as the finish line. They get the report and publish the badge, turning one element of the security stack into a credential. As we noted, code exploits are losing the race for victims to failures of operations and dependencies, but even within code security, an audit is just one piece of the puzzle. It needs reinforcement from a bug bounty, monitoring, and anomaly detection.

Where audits fall short

But here's the part that doesn't get said enough, and it's not the auditors' fault: a smart contract audit was never designed to cover most of the ways a protocol actually fails.

Read that again. Not "fails to cover them well." Literally not in scope.

An audit reviews on-chain logic. A treasury held 90% in the project's own token, dependency concentration, or poor key management is basically outside the scope, because it isn't code and it isn't on-chain.

Yet the Web3 industry has not adopted a shared standard for quantifying off-chain exposure with measurable parameters.

Here's how off-chain surface maps against what an audit isn’t meant to cover:

| What the audit doesn't touch | What it means in practice |
| --- | --- |
| Key management and CCSS posture | Whether the team's operational security assumes there is a DPRK insider among employees, or whether people store seed phrases in Notion |
| Bridge validator concentration | Whether the protocol's lifeline to other chains has four validators or forty, and how many a single attacker needs to compromise |
| Treasury quality, revenue models, liquidity, and yield sustainability | Whether the issuer's economy will hold up under stress, or is designed only for markets that go up |
| Jurisdiction and regulatory posture | Whether the project complies with US/EU regulations, which jurisdiction tier it sits in (mainland/offshore), and whether it can be forced to cease operations over regulatory violations |
| Reputation layer | Whether the team is qualified to do what they do, whether social fraud or metric gaming obscures actual operations, and whether the team has faced a crisis before and remediated its causes |
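The bridge-validator row reduces to simple arithmetic: in a k-of-n signing set, an attacker needs only k keys, regardless of how big n is. A toy Python model, assuming (unrealistically) that each key falls independently with the same probability:

```python
from math import comb

def p_compromise(threshold, total, p_key):
    """Toy model: probability that at least `threshold` of `total`
    validator keys are compromised, if each key falls independently
    with probability p_key. Real compromises are rarely independent."""
    return sum(comb(total, k) * (p_key ** k) * ((1 - p_key) ** (total - k))
               for k in range(threshold, total + 1))

# A 2-of-4 bridge vs a 27-of-40 bridge, same hypothetical 5% per-key risk:
small = p_compromise(2, 4, 0.05)
large = p_compromise(27, 40, 0.05)
print(round(small, 5))    # 0.01402: roughly 1-in-70 odds
print(f"{large:.2e}")     # vanishingly small by comparison
```

The per-key probability is invented for illustration; the point is the shape of the curve, not the absolute numbers. Four validators with a low threshold is a different risk class from forty with a high one, and no code audit measures that.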

 

Out of 1,400 projects, nearly 300 are exposed to off-chain risk while holding an audit

We pulled 293 projects from the dataset that meet two conditions: they have a smart contract audit on record, and their overall Probability of Loss sits above 50 (moderate-high-critical risk). Every one of them clears the surface-level check an investor or institution would run: is the audit PDF on the website? By the current market definition of "this has been checked," they qualify.

But it’s not rainbows and ponies on the other five domains: 

The average PoL across those 293 projects comes in at 69.5 on a scale of 0 to 100, where higher means more exposed. That lands the typical audited project firmly in moderate-to-elevated risk territory. Everything the audit wasn't scoped to check is what pushed them there.

Here's the data.

Audit coverage itself is the first surprise. 

Only 21% of these audited projects have comprehensive audit scope; 43% have partial scope and 35% have minimal scope. The badge on the website looks the same across all three tiers, though we would not be equally confident in each.

Two out of three audited projects have no active bug bounty program. 

67% of the fetched projects got the audit, published the report, and then set up no ongoing financial incentive for outside researchers to find what the auditors could have missed. 

42% show fully stalled GitHub activity. The audit covered a version of the code that is no longer maintained in any meaningful sense. In 30% of cases, both failures stack: no bug bounty, no development. A static audit report serves as a credential for a project that has effectively frozen.

Over half don't publish proper risk disclaimers. 

53% of these audited projects operate without clear, structured disclosure of the risks their product carries.

19% of audited projects sit in jurisdictions specifically chosen to minimize accountability.

 

Stack these findings together, and the picture comes into focus: Across the 293 audited projects, 100% have at least one off-chain domain in weak territory. 

We’re talking about, well, active projects: they have CoinGecko, TVL, a healthy market, and user activity.  What they don't have is a solid risk posture across the scope that audits don’t cover. 

How "audited" projects get hacked: three April 2026 case studies

Let’s zoom into three recent incidents and map how they slipped through the gaps in risk posture.

Drift Protocol – $285M, April 1, 2026 (operational failure)

Drift had audited contracts. The exploit came through people and process.

  • Attackers posed as a quant trading firm, onboarded capital, joined working groups, and slowly built trust.
  • They used social engineering to get multisig signers to pre‑sign hidden authorizations.
  • A zero‑timelock Security Council migration removed the last blocker just days before the attack. When they finally pulled the trigger, execution and cash‑out took 12 minutes.
  • The scope of the smart contract audit was never meant to cover any of the steps the attackers took. Classic operational risk.

CoW Swap – ~$1.2M (operational + infrastructure failure)

CoW Swap’s contracts were audited. The DNS layer made the attack possible.

  • Attackers exploited weaknesses in the .fi domain registration/transfer process.
  • Using forged documents and social engineering, they briefly obtained control of DNS records.
  • They served a counterfeit interface at the legitimate URL, luring users to connect wallets and sign malicious transactions.
  • Verdict: an off-chain attack, outside any audit's scope.

Kelp DAO – ~$292M (dependency failure)

Kelp DAO’s bridge and rsETH contracts had been audited twice.

  • The attacker forged cross‑chain messages to trick LayerZero’s lzReceive function, draining ~116,500 rsETH (~18% of supply).
  • Because the bridge backed rsETH across 20+ networks, this one failure cascaded into doubts about rsETH backing everywhere, triggering freezes by protocols like Aave and SparkLend.
  • A vulnerability introduced during a routine upgrade slipped through two audits and lived in production for 21 days.
  • LayerZero pointed to Kelp’s configuration. Kelp pointed to the upgrade process. The root cause was dependency concentration and a flawed deployment process.
  • “No audit” wasn’t among the reasons for the exploit.

 

Smart contract audits are great risk-mitigation tools, but they have limits, starting with the attack surface they cover.

 

That’s where these three were caught off guard despite audits:

  • Drift: people and governance.
  • CoW: registrar and DNS.
  • Kelp: bridge configuration and upgrade pipeline.

Why does the industry still rely on audits alone?

The state of Web3 security is something like pre-World War II France. Builders built a Maginot Line of audits, and it protects the on-chain surface reasonably well. Bad actors, meanwhile, are taking the longer route around it: operations, governance, finance, and infrastructure. When incidents happen, projects blame dependency providers, DVNs, libraries, registrars. The conversation has not yet reached the point where anyone large is brave enough to say out loud that audits are now a baseline, and baselines don't cover the full scope across which groups like Lazarus exploit projects.

The next logical question is how to spot projects that are protected only by an audit and nothing else.

CORE3's risk benchmark tracks roughly 60 off-chain parameters out of a total of 85. The good news is that you can gather these manually. They include ISO 27001 and CCSS certification, GitHub activity, liquidity risks, token lock-up mechanisms, jurisdiction tier, treasury composition, and about fifty-five more. The better news is that you don't have to, because Probability of Loss does it for you.

The Probability of Loss index flags which projects carry the structural conditions attackers look for before anything happens: no monitoring, no bug bounty, stale key rotation, single-bridge dependencies, anonymous teams, opaque treasuries. PoL can't catch a BGP hijack in real time or stop a DNS takeover mid-flight, because, just like with audits, that's not its scope. What it does flag is whether a project has the monitoring and incident-response tooling in place to catch those attacks itself.
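For illustration only, here is how a composite index of this general shape can fold domain scores into one comparable number. The domain names, weights, and scores below are invented for the sketch and are not CORE3's actual model:

```python
# Hypothetical domains and weights, illustrative only, not CORE3's model.
WEIGHTS = {
    "code": 0.25, "operations": 0.20, "dependencies": 0.20,
    "finance": 0.15, "jurisdiction": 0.10, "reputation": 0.10,
}

def probability_of_loss(domain_scores):
    """Weighted 0-100 composite: higher means more exposed."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9   # weights must sum to 1
    return round(sum(WEIGHTS[d] * domain_scores[d] for d in WEIGHTS), 1)

# An "audited but nothing else" profile: strong code, weak everything else.
scores = {"code": 20, "operations": 85, "dependencies": 90,
          "finance": 70, "jurisdiction": 60, "reputation": 75}
print(probability_of_loss(scores))   # 64.0: one comparable number
```

The design point is the second half of the profile: a low code-risk score (the audit) cannot pull the composite down when the other five domains are weak, which is exactly the pattern in the 293-project sample above.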

The reason a layer like this didn't exist until now is economic, not technical.

The tools are already there: monitoring platforms, forensic firms, and security researchers. What the market hasn't agreed on is which metrics to combine into a standardized risk index.

Without that baseline:

  • Insurers can't price off-chain risk
  • Institutions can't standardize due diligence across hundreds of assets
  • Exchanges can't defend listing decisions to regulators
  • Projects have no incentive to disclose their risk posture, because there is nothing to benchmark against

CORE3 built PoL to break that deadlock: a standardized, machine-readable Probability of Loss index that covers six risk domains and turns scattered signals into one comparable number per project.

The takeaway: Measure risk beyond smart contract audits

Audits are excellent tools. Web3 has been mistaking them for something they aren't. The overconfidence that comes with the badge has left the other five domains unprotected, which is why 75% of incidents over the last six years happened outside any smart contract audit's scope.

The bad news: the industry hasn't yet worked out the lesson. The debate is still at "how do we do audits better" when the question is "what else needs to exist next to them."

The good news: we already did the work. CORE3 extracted every parameter that enabled the major off-chain exploits of the last six years and built a risk index around them. The on-chain security layer is in there too.

The even better news: now that you know how the 75% gets out, you can look up your own project, see where the gaps are, and fix them before someone else finds them first. If your project isn't in the dataset yet, submit it for an initial assessment.

An audit tells you the vault is built correctly. PoL tells you whether anyone left the door to the server room open. You need both.