2025-12-08 · 18 min read

The Complete Smart Contract Security Pipeline: First Commit to Mainnet

By alex

Most security incidents don't happen because teams skip audits. They happen because security is treated as a checkpoint instead of a pipeline. You write code for three months, hire an auditor, fix a few things, deploy, and hope for the best.

That's not a pipeline. That's a prayer.

A real security pipeline starts at the first commit and doesn't stop after deployment. Here's the full picture — every phase, what it catches, and how to set it up.

Phase 0: development practices

Security starts before any testing tool runs. It starts with how you write code.

NatSpec and documentation

Every external and public function should have NatSpec comments that describe:

  • What the function does
  • What the expected behavior is (pre-conditions, post-conditions)
  • What shouldn't happen (the properties you'll test later)

This isn't busywork. When you write "this function should never allow withdrawal of more than the user's balance," you've just written your first invariant. The testing comes later — but the thinking happens now.
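
For example, here's a sketch of what that looks like in NatSpec (the function name and wording are illustrative):

```solidity
/// @notice Withdraws `amount` of underlying assets to the caller
/// @dev Pre-condition: the caller holds shares worth at least `amount`.
///      Post-condition: the caller's share balance decreases accordingly.
///      Property: a user can never withdraw more than their balance.
/// @param amount The amount of underlying assets to withdraw
function withdraw(uint256 amount) external;
```

That `@dev` property is exactly what becomes a fuzzing invariant in Phase 3.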

Access control from day one

Define your roles and permissions before you write business logic. Who can call what? Under which conditions? Document this in a role matrix. You'll formalize it into properties later, but the design comes first.
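
One lightweight way to capture the matrix early is directly in code, so it can later be formalized into access-control properties (the roles and capabilities here are hypothetical):

```solidity
// Hypothetical role matrix, encoded as constants:
bytes32 constant PAUSER_ROLE     = keccak256("PAUSER_ROLE");     // can call pause()/unpause()
bytes32 constant FEE_SETTER_ROLE = keccak256("FEE_SETTER_ROLE"); // can call setFee(), capped at MAX_FEE
// DEFAULT_ADMIN_ROLE (held by the governance multisig) can grant/revoke both.
```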

Small functions, clear state transitions

Functions that do one thing are easier to test, easier to reason about, and easier to audit. If a function handles deposits, interest accrual, and fee distribution all at once, it's going to be a nightmare to verify.

Phase 1: static analysis

Run static analysis on every commit. Zero excuses. It's free, fast, and catches low-hanging fruit.

Slither is the standard. Run it in CI:

slither . --config-file slither.config.json
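
A minimal slither.config.json might exclude noisy detectors and vendored code (the specific detector names here are illustrative — tune them to your codebase):

```json
{
  "detectors_to_exclude": "naming-convention,solc-version",
  "filter_paths": "lib/,node_modules/"
}
```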

What it catches:

  • Reentrancy patterns
  • Uninitialized variables
  • Shadowed state variables
  • Missing access control modifiers
  • Common anti-patterns

What it doesn't catch:

  • Business logic bugs
  • Incorrect math that doesn't trigger standard patterns
  • Bugs that require specific call sequences

Static analysis is your first filter. It won't find the hard bugs, but it'll stop you from shipping the embarrassing ones.

Phase 2: unit testing

I shouldn't have to say this, but: write unit tests. For everything. Aim for 100% line coverage as a baseline, but understand that coverage doesn't equal correctness.

Good unit tests cover:

  • Happy paths: Does the function work correctly with normal inputs?
  • Edge cases: Zero values, max values, empty arrays, first deposit, last withdrawal
  • Revert cases: Does it revert when it should? With the right error message?
  • Access control: Does every protected function reject unauthorized callers?

function test_deposit_updatesBalance() public {
    token.mint(alice, 1000e18);
    vm.startPrank(alice);
    token.approve(address(vault), 1000e18);
    vault.deposit(1000e18, alice);
    vm.stopPrank();

    assertEq(vault.balanceOf(alice), 1000e18);
    assertEq(token.balanceOf(address(vault)), 1000e18);
}

function test_deposit_revertsOnZeroAmount() public {
    vm.prank(alice);
    vm.expectRevert("ZERO_AMOUNT");
    vault.deposit(0, alice);
}

Unit tests are fast, deterministic, and great for catching regressions. But they only test the cases you think of. That's where the next phase comes in.

Phase 3: property-based testing and fuzzing

This is where you move from "testing what you think can go wrong" to "testing what you don't know can go wrong."

Write properties, not test cases

Instead of test_deposit_updatesBalance, you write properties that should hold for any input:

// For any deposit amount, the user's vault balance should increase
// by the correct number of shares
function invariant_depositAlwaysMintsCorrectShares() public {
    // ... checked over thousands of random inputs
}

Start with system-level invariants

These are your most important properties, things that should always be true:

  • Total assets in the vault >= assets owed to all outstanding shares (solvency)
  • Sum of all individual balances == total supply
  • No user can withdraw more than they deposited (plus yield)
  • Protocol fees are always non-negative

Run with multiple tools

Don't rely on a single fuzzer. Each explores differently:

  • Echidna: Coverage-guided, excellent at finding deep bugs. Good for long campaigns.
  • Medusa: Coverage-guided with parallel execution. Good for large codebases.
  • Foundry: Quick iteration and CI integration. Good for development-time testing.

The Chimera framework lets you write properties once and run them with all three. You should do this.
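
If Echidna runs your long campaigns, a minimal assertion-mode config might look like this (the limits are illustrative):

```yaml
# echidna.yaml — long assertion-mode campaign
testMode: assertion
testLimit: 10000000
corpusDir: corpus
```

Persisting the corpus lets later runs resume from previously discovered coverage instead of starting cold.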

For more on fuzzing and how it works, see What is Smart Contract Fuzzing. For tool comparisons, check Smart Contract Fuzzing Tools Compared.

Phase 4: formal verification

For your most critical functions (token minting, fee calculations, access control), add formal verification.

Halmos lets you write specs in Solidity, which lowers the barrier significantly:

function check_mintNeverExceedsCap(uint256 amount) public {
    vm.assume(amount > 0);
    vm.assume(token.totalSupply() + amount <= type(uint256).max);

    uint256 supplyBefore = token.totalSupply();
    token.mint(address(this), amount);

    assert(token.totalSupply() == supplyBefore + amount);
    assert(token.totalSupply() <= TOKEN_CAP);
}

FV proves this for every possible amount, not just the ones a fuzzer happens to generate. For arithmetic-heavy code, this is the only way to get real guarantees.
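
Running a single spec with Halmos is one command (assuming the check lives in your Foundry test tree):

```
halmos --function check_mintNeverExceedsCap
```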

Don't try to formally verify your entire protocol. It's too expensive and most FV tools can't handle multi-contract interactions well. Pick the critical 20% of your code and verify that.

Phase 5: manual code review

Tools catch patterns. Humans catch design flaws.

Before engaging an external auditor, do an internal review:

  1. Architecture review. Does the design make sense? Are there simpler ways to achieve the same thing?
  2. Trust boundary analysis. Where does the protocol trust external inputs? Oracles, user inputs, admin actions: each is an attack surface.
  3. Upgrade and migration paths. If the protocol is upgradeable, what can go wrong during an upgrade?
  4. Economic review. Can the protocol be gamed? Are the incentives aligned? Think through economic exploits: flash loan attacks, sandwich attacks, oracle manipulation.

This is hard to automate. It requires experienced security engineers who've seen how real exploits work.

Phase 6: audit preparation

An audit is expensive. Don't waste it. Prepare properly:

Documentation package

  • Architecture overview with diagrams
  • Threat model (who can do what, what's the worst case)
  • Known issues and accepted risks
  • Deployment plan (constructor parameters, initialization sequence)
  • Access control matrix

Code Quality

  • All static analysis warnings addressed or documented
  • Full test suite passing
  • Invariant suite with meaningful properties
  • Code frozen, no changes during the audit

Scope Definition

  • Exactly which contracts are in scope
  • Which external dependencies are trusted vs untrusted
  • Which chains the protocol will deploy on (different chains have different quirks)

Teams that prepare well get better audits. The auditor spends time finding real bugs instead of asking basic questions about how the protocol works.

Phase 7: the audit

During the audit, your job is to be responsive. Answer questions quickly. Provide test environments. If the auditor asks "can this function be called by anyone?" and you don't know, you've got a problem.

Good audit firms will:

  • Review architecture and design
  • Do line-by-line code review
  • Write custom tests for suspected bugs
  • Provide severity-rated findings with recommended fixes

Don't just accept fixes without understanding them. Every fix should be reviewed, tested, and ideally covered by a new invariant that would've caught the original bug.

Phase 8: post-audit testing

Here's where most teams drop the ball. The audit found 5 medium-severity issues. You fix them. Do you re-test everything?

Yes. Every fix is new code. New code can have new bugs.

Fix Verification

  • Every fix gets a unit test that verifies the fix
  • Every fix gets an invariant that would've caught the original bug
  • Run the full fuzzing campaign again after all fixes are applied
  • If the fixes are significant, consider a fix review from the auditors
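
For example, a pinned regression test for a rounding finding might look like this (the finding ID H-01 and the donation scenario are invented for illustration):

```solidity
// Regression test for hypothetical finding H-01: a direct token
// "donation" to the vault must not let a user redeem more than
// their deposit plus the donated dust.
function test_regression_H01_donationRounding() public {
    token.mint(alice, 1e18);
    vm.startPrank(alice);
    token.approve(address(vault), 1e18);
    vault.deposit(1e18, alice);
    vm.stopPrank();

    token.mint(address(vault), 1); // attacker-style donation

    vm.prank(alice);
    uint256 assets = vault.redeem(vault.balanceOf(alice), alice, alice);
    assertLe(assets, 1e18 + 1); // never more than deposit + donation
}
```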

Regression Suite

After the audit, your test suite should include:

  • All original tests
  • New tests for every finding
  • New invariants for every property the auditor identified
  • Integration tests for any multi-contract interactions the auditor flagged

This regression suite is your ongoing defense. Run it on every commit going forward.

Phase 9: deployment

Deployment itself has security concerns:

Deployment Scripts

  • Use deterministic deployment (CREATE2) where possible
  • Verify constructor parameters are correct
  • Double-check initialization parameters (many exploits come from incorrect initialization)
  • Use a deployment checklist; don't rely on memory
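
A minimal Foundry deployment script using CREATE2 might look like this (the contract name, salt, and constructor arguments are placeholders):

```solidity
import {Script} from "forge-std/Script.sol";

contract Deploy is Script {
    function run() external {
        vm.startBroadcast();
        // The salt makes the address deterministic (CREATE2), so it can
        // be predicted ahead of time and reproduced across chains.
        Vault vault = new Vault{salt: keccak256("vault-v1")}(ASSET, FEE_BPS);
        vm.stopBroadcast();
    }
}
```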

Post-Deployment Verification

  • Verify all contracts on Etherscan/Sourcify
  • Check that all roles are assigned correctly on-chain
  • Verify that initialization state matches expectations
  • Run a smoke test against the deployed contracts
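
Several of these checks can be scripted with Foundry's cast (addresses and function signatures are placeholders for your deployment):

```
# Post-deploy sanity checks; $VAULT and $RPC_URL come from your deployment
cast call $VAULT "owner()(address)" --rpc-url $RPC_URL
cast call $VAULT "asset()(address)" --rpc-url $RPC_URL
cast call $VAULT "totalSupply()(uint256)" --rpc-url $RPC_URL
```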

Deployment Keys

  • Use a multisig for deployment, not a hot wallet
  • Revoke deployer permissions immediately after deployment
  • Transfer ownership to the governance multisig

Phase 10: monitoring and incident response

Your security pipeline doesn't end at deployment. The protocol is live and attackers are looking at it right now.

On-Chain Monitoring

  • Monitor for unusual transactions (large flash loans, abnormal swap volumes)
  • Track key invariants on-chain (total supply, key balances, oracle prices)
  • Set up alerts for admin function calls
  • Monitor mempool for potential attacks (if applicable)

Incident Response Plan

Before you deploy, have a plan for when things go wrong:

  • Who can pause the protocol? How fast can they act?
  • What's the communication plan? Discord, Twitter, on-chain message?
  • Do you have a war room process? Who's in the room, what tools do they need?
  • Is there a bug bounty? If not, set one up. White hats need a reason to report instead of exploit.

Continuous Testing

The protocol evolves. Governance changes parameters. Markets shift. New integrations get added.

Keep running your fuzzing campaigns. Update your invariants when the protocol changes. Treat security as ongoing, not one-and-done.

The full pipeline at a glance

| Phase | What | When | Catches |
|---|---|---|---|
| 0. Dev Practices | NatSpec, architecture | Every commit | Design flaws early |
| 1. Static Analysis | Slither | Every PR | Common patterns |
| 2. Unit Tests | Foundry/Hardhat | Every PR | Known edge cases |
| 3. Fuzzing | Echidna/Medusa/Foundry | Daily/Weekly | Unknown edge cases |
| 4. Formal Verification | Halmos/Certora | Pre-audit | Mathematical correctness |
| 5. Manual Review | Internal team | Pre-audit | Design/logic flaws |
| 6. Audit Prep | Documentation | Before audit | N/A (saves audit time) |
| 7. External Audit | Audit firm | Before deploy | Everything above, plus fresh eyes |
| 8. Post-Audit | Fix testing | After audit | Regression bugs |
| 9. Deployment | Scripts + verification | Deploy day | Deployment errors |
| 10. Monitoring | On-chain + off-chain | Forever | Live exploits |

Skip any of these steps and you're leaving gaps. How big those gaps are depends on how much value your protocol holds.

Getting started

You don't need all of this on day one. Start with static analysis and unit tests (you should already have these). Add invariant testing next; it's the highest-ROI addition to most test suites. Then layer in formal verification for your critical functions.

And when you're ready for an audit, make sure your test suite reflects the work you've done. Auditors who see a solid invariant suite know you're serious, and they'll spend their time finding the bugs your tools can't.



alex leads security strategy at Recon. This pipeline is the distilled version of what we run on every engagement, adapted for teams that want to own their own security.
