Let's start with what an audit actually is, because the industry has quietly let "we've been audited" become synonymous with "we're safe" — and those are not the same sentence.
A smart contract audit is a point-in-time code review. A firm takes your deployed or pre-deployment codebase and works through it on two parallel tracks. The automated track runs the code through static analysis tools like Slither or MythX, which scan for known vulnerability patterns without executing anything, and through fuzzers, which feed thousands of malformed inputs to your functions to surface edge cases.
It also includes formal verification tools that mathematically prove whether specific properties hold under all possible states. The manual track is a human auditor reading the logic: checking that access control is correctly scoped, that integer arithmetic can't overflow or underflow in ways that move money incorrectly, that reentrancy guards are in place wherever external calls happen, and that the order of operations in a function can't be exploited by a well-timed transaction from someone watching the mempool.
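That ordering flaw can be sketched without Solidity. Here is a minimal Python model (all names invented, not any real protocol's code) of a vault whose withdraw makes its external call before updating state, letting the caller re-enter, next to the checks-effects-interactions fix:

```python
# Toy model of reentrancy: the "external call" hands control to the
# caller, who can re-enter withdraw() before the balance is zeroed.

class Vault:
    def __init__(self, balances):
        self.balances = dict(balances)   # user -> deposited amount
        self.reserves = sum(balances.values())

    def withdraw_vulnerable(self, user, attacker_hook=None):
        amount = self.balances[user]
        if amount == 0:
            return
        self.reserves -= amount
        # BUG: the external call happens BEFORE the balance is zeroed,
        # so a re-entering caller still sees the full balance.
        if attacker_hook:
            attacker_hook()
        self.balances[user] = 0

    def withdraw_safe(self, user, attacker_hook=None):
        amount = self.balances[user]
        if amount == 0:
            return
        # Checks-effects-interactions: update state first, call out last.
        self.balances[user] = 0
        self.reserves -= amount
        if attacker_hook:
            attacker_hook()              # re-entry now withdraws nothing

def attack(method, max_depth=8):
    """Re-enter the given withdraw method up to max_depth times."""
    state = {"depth": 0}
    def hook():
        if state["depth"] < max_depth:
            state["depth"] += 1
            method("attacker", hook)
    method("attacker", hook)

v1 = Vault({"attacker": 10, "victim": 90})
attack(v1.withdraw_vulnerable)
print(v1.reserves)   # 10: the attacker deposited 10 but pulled 90

v2 = Vault({"attacker": 10, "victim": 90})
attack(v2.withdraw_safe)
print(v2.reserves)   # 90: the re-entering calls withdraw nothing
```

This is exactly the class of bug the manual track hunts by reading call order, because no single line is wrong; only the sequence is.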
What auditors are specifically hunting:
State manipulation. Can an attacker force the contract into an unexpected state that unlocks funds or privileges?
Logic flaws. Does the code do what the developer intended, and does what the developer intended actually hold up when an adversary is actively probing it?
Dependency assumptions. If this contract calls an external contract, what happens if that external contract behaves maliciously or returns unexpected data?
Flash loan attack surfaces. Can someone borrow a massive sum within a single transaction, manipulate a price or a vote, extract value, and repay the loan before the transaction completes, leaving the protocol drained and the attacker with clean hands?
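The flash loan pattern is easiest to see with numbers. Below is a toy Python model (all figures invented; no fees, gas, or slippage protection) of a constant-product AMM used as a spot-price oracle, showing how one large swap distorts the price inside a single atomic sequence:

```python
# Toy constant-product AMM (token * usd = k) whose spot price is
# naively trusted as an oracle. All numbers are invented.

class AMM:
    def __init__(self, token_reserve, usd_reserve):
        self.token = token_reserve
        self.usd = usd_reserve

    def spot_price(self):                 # USD per token
        return self.usd / self.token

    def swap_usd_for_token(self, usd_in):
        k = self.token * self.usd
        self.usd += usd_in
        tokens_out = self.token - k / self.usd
        self.token -= tokens_out
        return tokens_out

    def swap_token_for_usd(self, tokens_in):
        k = self.token * self.usd
        self.token += tokens_in
        usd_out = self.usd - k / self.token
        self.usd -= usd_out
        return usd_out

amm = AMM(token_reserve=1_000, usd_reserve=1_000_000)
honest_price = amm.spot_price()                # $1000 per token

# One atomic "transaction": borrow, pump, get over-credited, repay.
flash_loan = 4_000_000
tokens = amm.swap_usd_for_token(flash_loan)    # pump the pool
pumped_price = amm.spot_price()                # the oracle now lies
collateral_value = 100 * pumped_price          # attacker's 100 tokens,
                                               # valued at the fake price
amm.swap_token_for_usd(tokens)                 # dump, repay the loan

print(honest_price, pumped_price)              # 1000.0 vs 25000.0
```

In this sketch the pumped price is 25x the honest one, so a lending protocol reading the spot price would credit the attacker's 100 tokens at $2.5M instead of $100K, and the loan is repaid in full from the reverse swap. Auditors flag any contract that reads a manipulable spot price instead of a time-weighted or external oracle.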
When they're done, the project team gets a report categorizing findings by severity: critical, high, medium, low, informational. Each entry describes the vulnerability, the affected code, and a recommended fix. Then the project fixes what it fixes, the firm reviews the fixes, and the report goes public. That's the whole product. It covers the code that existed at the moment they looked at it, in the state it was in, with the assumptions baked into it at that time.
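The report's shape is simple enough to sketch as data. A hypothetical findings list in Python (field names are illustrative, not any firm's actual format):

```python
# Illustrative structure of audit findings; names and entries invented.
from dataclasses import dataclass

SEVERITIES = ("critical", "high", "medium", "low", "informational")

@dataclass
class Finding:
    title: str
    severity: str          # one of SEVERITIES
    affected_code: str     # contract / function location
    description: str
    recommendation: str
    status: str = "open"   # -> "fixed" or "acknowledged" after review

report = [
    Finding("Reentrancy in withdraw()", "critical",
            "Vault.sol:withdraw", "External call before state update.",
            "Apply checks-effects-interactions or a reentrancy guard."),
    Finding("Missing event on ownership transfer", "low",
            "Admin.sol:setOwner", "No event emitted on owner change.",
            "Emit an ownership-transfer event."),
]

# Findings are typically presented worst-first.
by_severity = sorted(report, key=lambda f: SEVERITIES.index(f.severity))
print([f.severity for f in by_severity])
```

Note what this structure cannot express: anything about code added after the report date, or functions that were out of scope. The report is a snapshot, not a subscription.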
If that surface looks big to you, you're right, and the cost is big too. But it shrinks next to what operational audits cover: ISO and CCSS compliance, financial audits, and regulatory readiness analysis.
Now, three things happen in practice that make even that limited coverage smaller than it looks.
First, coverage is a budget conversation. Every function a security firm reviews costs money: more functions, bigger invoice. Sometimes projects consciously limit coverage to the essential functions to save money; sometimes they just buy the cheapest option to check the box, because an audit is required for further liquidity acquisition. Drift Protocol had audited smart contracts when it was exploited on April 1, 2026. But the governance function was largely unaudited. That gap enabled an insider attack that drained $285M. The code that didn't get reviewed was one of the keys to the door.
Second, code changes. Projects ship updates, patch bugs, add features. Each change leaves the past audit in the past, its guarantees expired. Shipping fast and re-auditing everything is a luxury for projects with unlimited runway and an unusual commitment to security; most have neither.
Third, and this one is almost funny in how often it happens: projects treat the audit as the finish line. They get the report and publish the badge, turning one element of the security stack into a credential. We already mentioned that code exploits are losing the race for victims to failures of operations and dependencies, but even within code security, an audit is just one piece of the puzzle. The code must be reinforced with a bug bounty, monitoring, and anomaly detection.