2026-04-06·12 min read

How to Prepare Your Code for a Smart Contract Audit

By kn0t — April 2026

I've seen teams show up to an audit engagement with nothing but a GitHub link and a "good luck." I've also seen teams hand over a perfectly organized package that lets auditors hit the ground running on day one.

Guess which teams get better results?

Preparation isn't busywork. Every hour your auditor spends figuring out how your system works is an hour they're not spending finding bugs. If you're paying for a 3-week engagement and the auditor spends the first week just understanding your architecture, you effectively bought a 2-week audit.

Here's how to make sure that doesn't happen.

The pre-audit checklist

Let's go through everything you should have ready before engaging an auditor. I'll explain why each item matters, not just what it is.

1. Code freeze

What: A specific commit hash that won't change during the audit.

Why: If auditors are reviewing moving targets, they waste time re-checking code that changed. Worse, they might miss a bug introduced mid-audit because they already reviewed that file.

How:

  • Pick a date at least 1 week before the audit starts
  • Freeze the branch. No merges, no "quick fixes," no "just one more feature"
  • Give the auditor the exact commit hash
  • If you must change something, keep a running changelog and flag it explicitly

This is the single most impactful thing you can do. I can't stress it enough.

2. Documentation

Architecture overview. A document (even a one-pager) explaining:

  • What your protocol does in plain English
  • How the major contracts interact
  • The flow of funds through the system
  • Trust assumptions (who can do what, what's upgradeable, what's permissionless)
  • External dependencies (oracles, other protocols, off-chain components)

NatSpec comments. Every public and external function should have NatSpec documentation:

/// @notice Deposits collateral and mints debt tokens
/// @dev Caller must have approved this contract for `amount`
///      Reverts if collateral ratio would drop below MIN_RATIO
/// @param collateralToken The ERC-20 token to deposit as collateral
/// @param amount The amount of collateral tokens to deposit
/// @param debtAmount The amount of debt tokens to mint
/// @return debtTokenId The ID of the minted debt position
function depositAndBorrow(
    address collateralToken,
    uint256 amount,
    uint256 debtAmount
) external returns (uint256 debtTokenId) {
    // ...
}

Known issues list. Most teams skip this, and it's one of the most useful things you can provide. If you already know about a rounding issue in your fee calculation that you've decided is acceptable, tell the auditor. Otherwise they'll spend time writing up something you're already aware of.

Format it like this:

## Known Issues

1. **Fee rounding in `calculateFees()`** -- Rounds down, may result in
   0-1 wei loss per transaction. Accepted risk -- gas cost to exploit
   exceeds potential gain.

2. **First depositor inflation attack** -- Mitigated by initial deposit
   in constructor. See deploy script line 45.

3. **Centralization risk in `setOracle()`** -- Owner can change oracle.
   Planned migration to governance in v2.

3. Test suite

Your tests are documentation that runs. They show the auditor how you intend the system to work.

Unit tests for every function. At minimum, every public function should have tests covering:

  • Happy path
  • Edge cases (zero amounts, max uint, empty arrays)
  • Revert conditions
  • Access control
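Those four categories map cleanly onto a Foundry test contract. Here's a minimal sketch, assuming a hypothetical `Vault` with a payable `deposit()`, a `balanceOf` view, and an owner-only `pause()`; none of these names come from a real codebase:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Test} from "forge-std/Test.sol";
import {Vault} from "../src/Vault.sol"; // hypothetical contract under test

contract VaultUnitTest is Test {
    Vault vault;
    address alice = makeAddr("alice");

    function setUp() public {
        vault = new Vault();
        vm.deal(alice, 10 ether);
    }

    // Happy path: a deposit credits the caller's balance
    function test_deposit_creditsBalance() public {
        vm.prank(alice);
        vault.deposit{value: 1 ether}();
        assertEq(vault.balanceOf(alice), 1 ether);
    }

    // Edge case + revert condition: zero deposits are rejected
    function test_deposit_revertsOnZero() public {
        vm.prank(alice);
        vm.expectRevert(Vault.DepositAmountMustBeNonZero.selector);
        vault.deposit{value: 0}();
    }

    // Access control: a non-owner cannot pause
    function test_pause_revertsForNonOwner() public {
        vm.prank(alice);
        vm.expectRevert();
        vault.pause();
    }
}
```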

Integration tests. Tests that exercise multi-step workflows:

  • Full deposit → borrow → repay → withdraw cycle
  • Liquidation flow end to end
  • Governance proposal → vote → execute

Run your tests and make sure they pass. You'd be surprised how many teams hand off code with failing tests. It immediately erodes confidence.

# Run everything and confirm green
forge test -vv

# Check coverage
forge coverage --report summary

Coverage report. Show the auditor what's tested and what isn't. Low coverage areas are where they should look hardest.

If your team has started writing invariant tests, include those too. They're extremely useful for auditors because they express what properties the system should maintain, not just what individual functions do.
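A minimal invariant test in Foundry looks something like this, again assuming a hypothetical `Vault` that exposes `totalDeposits()` and the ERC-20 it holds via `token()`:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Test} from "forge-std/Test.sol";
import {Vault} from "../src/Vault.sol"; // hypothetical contract under test

contract VaultInvariantTest is Test {
    Vault vault;

    function setUp() public {
        vault = new Vault();
        // The fuzzer will call the vault's public functions
        // in random sequences between invariant checks
        targetContract(address(vault));
    }

    // Property: the vault can always cover what it owes depositors
    function invariant_solvency() public view {
        assertGe(
            vault.token().balanceOf(address(vault)),
            vault.totalDeposits()
        );
    }
}
```

Even a single property like this tells the auditor what "correct" means for your system, which is far more valuable than it looks.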

4. Access control documentation

Create a clear matrix showing:

  • Every privileged role (owner, admin, operator, guardian, etc.)
  • What each role can do
  • Which functions each role can call
  • Whether roles can be transferred or revoked
  • Timelock delays on sensitive operations

| Role     | Can Call              | Timelock | Transferable |
|----------|-----------------------|----------|--------------|
| Owner    | setFeeRate, pause     | 48h      | Yes (2-of-3) |
| Guardian | pause, unpause        | None     | Yes          |
| Operator | rebalance, harvest    | None     | No           |
| Anyone   | deposit, withdraw     | N/A      | N/A          |

This table takes 15 minutes to make and saves your auditor hours of digging through modifier chains.
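If the matrix maps onto code, even better. Here's one way the rows above might look using OpenZeppelin's AccessControl; the contract and function names are illustrative, not a prescription:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {AccessControl} from "@openzeppelin/contracts/access/AccessControl.sol";

contract RoleExample is AccessControl {
    bytes32 public constant GUARDIAN_ROLE = keccak256("GUARDIAN_ROLE");
    bytes32 public constant OPERATOR_ROLE = keccak256("OPERATOR_ROLE");

    constructor(address guardian, address operator) {
        _grantRole(DEFAULT_ADMIN_ROLE, msg.sender); // the "Owner" row
        _grantRole(GUARDIAN_ROLE, guardian);
        _grantRole(OPERATOR_ROLE, operator);
    }

    // Guardian row: pause/unpause, no timelock
    function pause() external onlyRole(GUARDIAN_ROLE) { /* ... */ }

    // Operator row: rebalance/harvest
    function rebalance() external onlyRole(OPERATOR_ROLE) { /* ... */ }

    // Owner row: sensitive config (timelock would wrap this caller)
    function setFeeRate(uint256) external onlyRole(DEFAULT_ADMIN_ROLE) { /* ... */ }

    // "Anyone" row: no modifier at all
    function deposit() external payable { /* ... */ }
}
```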

5. Deployment information

  • Target chain(s): Ethereum, Arbitrum, both? Cross-chain?
  • Deployment scripts: Include the actual scripts, not just descriptions
  • Constructor parameters: What values will be used in production?
  • Proxy pattern: If upgradeable, which pattern? UUPS? Transparent? Diamond?
  • Existing deployments: If this is an upgrade, link to the deployed contracts

Auditors need this because deployment configuration can introduce bugs that don't exist in the test environment. A different constructor parameter, a different proxy admin setup, a different chain with different precompiles. All potential attack surface.
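One low-effort way to make production parameters reviewable is to put them in the deploy script itself rather than in someone's terminal history. A hedged sketch of a Foundry script, with hypothetical names and environment variables:

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

import {Script} from "forge-std/Script.sol";
import {Vault} from "../src/Vault.sol"; // hypothetical contract

contract Deploy is Script {
    function run() external {
        // Production values read from env vars that are documented
        // alongside this script, so the auditor sees the real inputs
        address oracle = vm.envAddress("ORACLE_ADDRESS");
        uint256 feeBps = vm.envUint("FEE_BPS"); // e.g. 50 = 0.5%

        vm.startBroadcast();
        new Vault(oracle, feeBps);
        vm.stopBroadcast();
    }
}
```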

6. Previous audit reports

If you've been audited before, share the reports. All of them. Even if they're from a different firm, even if they found embarrassing bugs.

Auditors use previous reports to:

  • Understand the history of the codebase
  • Check if previous findings were properly fixed
  • Identify areas that have been problematic before
  • Avoid duplicating work on already-reviewed code

7. Scope definition

Be explicit about what's in scope and what isn't.

## In Scope
- src/core/Vault.sol
- src/core/Strategy.sol
- src/core/Oracle.sol
- src/periphery/Router.sol
- deploy/Deploy.s.sol

## Out of Scope
- src/mocks/ (test helpers only)
- src/legacy/ (deprecated, not deployed)
- Third-party dependencies (OpenZeppelin, Solmate)
- Off-chain keeper bot logic

Also specify:

  • Lines of code (nSLOC) for the in-scope contracts
  • Solidity version
  • EVM version target
  • Compiler settings (optimizer runs, via-ir)
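If you use Foundry, the last three items are answered in one place by your foundry.toml. A fragment like this (values are illustrative, not recommendations) saves a round trip of questions:

```toml
[profile.default]
solc_version   = "0.8.24"
evm_version    = "cancun"
optimizer      = true
optimizer_runs = 200
via_ir         = false
```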

8. Communication setup

Decide upfront:

  • Communication channel: Private Telegram group? Discord? Slack?
  • Response time expectation: Will a dev be available within a few hours to answer questions?
  • Point of contact: Who should the auditors talk to? One person or the whole team?

The best audit engagements have a dev available to answer questions quickly. When an auditor says "hey, is this function supposed to handle the case where X is zero?" and gets an answer in 30 minutes instead of 3 days, the quality of the entire engagement goes up.

The preparation checklist

Here's the condensed version you can copy and work through:

## Pre-Audit Preparation Checklist

### Code
- [ ] Code freeze date set: ___________
- [ ] Frozen commit hash: ___________
- [ ] All tests passing
- [ ] Coverage report generated
- [ ] No compiler warnings
- [ ] Linter clean (forge fmt, solhint)

### Documentation
- [ ] Architecture overview document
- [ ] NatSpec on all public/external functions
- [ ] Known issues list
- [ ] Access control matrix
- [ ] Deployment parameters documented
- [ ] System diagram (contract interactions)

### Testing
- [ ] Unit tests for every public function
- [ ] Integration tests for key workflows
- [ ] Edge case tests (zero, max, empty)
- [ ] Revert condition tests
- [ ] Coverage above 85%

### Infrastructure
- [ ] Scope definition (in/out of scope)
- [ ] Previous audit reports shared
- [ ] Communication channel set up
- [ ] Dev point of contact assigned
- [ ] Response time SLA agreed

### Deployment
- [ ] Target chain(s) specified
- [ ] Deployment scripts included
- [ ] Constructor parameters documented
- [ ] Proxy pattern documented (if applicable)
- [ ] Existing deployments linked (if upgrade)

How preparation affects cost and quality

Let's talk money. Good preparation doesn't just make the audit better. It makes it cheaper.

Without preparation:

  • Auditors spend 25-40% of the engagement just understanding the system
  • They ask questions that go unanswered for days, blocking their work
  • They write up "findings" that are actually known issues or intended behavior
  • The final report has noise that obscures real bugs
  • You might need a follow-up engagement because they ran out of time

With preparation:

  • Auditors start finding bugs on day 2 instead of day 5
  • Questions get answered fast, keeping momentum
  • Known issues are excluded from the report, making it cleaner
  • More time spent on deep analysis means more real findings
  • Less likely to need a costly extension

For more on what the audit process should look like, check out what to expect from a smart contract audit. And if you're still deciding whether an audit makes sense for your project, we've laid out the considerations in do you need a smart contract audit?.

Bonus: what auditors wish you knew

I've talked to dozens of auditors. Here's what comes up over and over.

"Don't change the code during the audit." Seriously. Every time you merge a fix for something unrelated, the auditor has to re-check interactions. If they've already built a mental model of how function A calls function B, and you refactor function B mid-audit, that mental model is gone.

"Write better error messages." Custom errors with descriptive names help auditors understand intent:

// Bad -- what does this check actually protect?
require(amount > 0);

// Good -- auditor immediately understands the business rule
error DepositAmountMustBeNonZero();
if (amount == 0) revert DepositAmountMustBeNonZero();

"Tell us about your economic model." The code shows how but not why. If your fee model is designed to prevent economic attacks, explain the attack and the defense. The auditor can then verify the defense actually works.

"Tell us what scares you." If there's a function you're nervous about, say so. Auditors are more effective when they know where the risk is concentrated. Nobody will judge you for being honest about uncertainty.

"Include your invariants." Even if they're informal. "The total supply of our token should always equal the sum of all balances" is incredibly useful context. If you've written formal invariant tests, those are gold.

The ROI of preparation

I know this seems like a lot of work. Here's the payoff:

A well-prepared audit engagement typically finds 2-3x more real bugs than a poorly prepared one. Not because the auditors are better, but because they spend their time actually auditing instead of reverse-engineering your system.

The preparation work also has value beyond the audit:

  • Your documentation helps onboard new team members
  • Your test suite catches regressions going forward
  • Your known issues list becomes institutional knowledge
  • Your access control matrix feeds into your incident response plan

It's an investment that pays off in multiple ways.

For a deeper understanding of smart contract audits and what makes them effective, check out our learning resources.

Get started

Start with the checklist above. Work through it item by item. If you get stuck on the testing section, we've got guides on fuzzing and invariant testing that can help.

And when you're ready to engage, having all of this ready means you'll get a better audit at a better price. That's a win for everyone.

Request an audit — we'll tell you if you're ready

Ready for an audit? Let's talk