2026-03-14 · 12 min read

ZK circuit security: constraint bugs, witness leaks, and audit patterns


If you've audited smart contracts for a while and then look at a ZK circuit for the first time, something feels off. The code looks familiar — there are variables, assignments, conditionals. But it doesn't execute. It constrains. And that distinction changes everything about how bugs manifest, how you find them, and how you prove they're real.

We've been auditing ZK circuits at Recon across Circom, Noir, and Halo2 projects, and the bug classes we see are fundamentally different from what shows up in Solidity. This post breaks down why, walks through the most common vulnerability patterns, and lays out the audit methodology we use.

Constraint systems aren't execution environments

A smart contract is imperative: do this, then do that, store this value, send those tokens. When you audit one, you're tracing execution paths and looking for states where invariants break.

A ZK circuit is declarative: it defines a set of polynomial constraints that a valid witness must satisfy. The "program" doesn't run in the traditional sense. Instead, a prover constructs a witness (the private inputs and intermediate values) and proves that every constraint holds. A verifier checks the proof without seeing the witness.

This means the bug model is inverted. In a smart contract, you worry about what the code does. In a circuit, you worry about what the code fails to constrain. If a constraint is missing, the prover can satisfy the circuit with values that shouldn't be valid — and the verifier will accept the proof anyway.
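To make this concrete, here's a toy sketch in TypeScript (not a real proving system — just equations over field elements, using the BN254 scalar field prime that Circom defaults to). The "circuit" accepts any witness that satisfies every constraint, whether or not it's the witness the designer intended:

```typescript
// Toy illustration: a "circuit" is just a list of equations; any
// witness satisfying all of them is accepted by the verifier.
const p = 21888242871839275222246405745257275088548364400416034343698204186575808495617n;
const mod = (x: bigint) => ((x % p) + p) % p;

// One constraint: a * b - n === 0 (the factoring example discussed below)
type Witness = { n: bigint; a: bigint; b: bigint };
const constraints = [(w: Witness) => mod(w.a * w.b - w.n)];

const satisfies = (w: Witness) => constraints.every((c) => c(w) === 0n);

console.log(satisfies({ n: 15n, a: 3n, b: 5n }));  // honest witness: true
console.log(satisfies({ n: 15n, a: 1n, b: 15n })); // trivial factors: also true
```

The second call is the point: nothing in the constraint list rules out the degenerate witness, so the "verifier" happily accepts it.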

Common bug classes

Under-constrained circuits

This is the single most dangerous class of ZK bugs. An under-constrained circuit allows a malicious prover to generate a valid proof for an invalid statement. The circuit "works" for honest inputs but doesn't reject dishonest ones.

Here's a simple Circom example. Say you want a circuit that proves you know two factors of a number:

// VULNERABLE: under-constrained factoring circuit
template Factor() {
    signal input n;
    signal input a;
    signal input b;

    // Constrain that a * b == n
    signal product;
    product <== a * b;
    n === product;
}

This looks right at first glance. But there's nothing stopping a or b from being 1 or n itself. A prover can always satisfy this with the trivial factorization (1, n). Worse, in finite field arithmetic, there's no implicit range — a and b could be negative field elements that multiply to n modulo the field prime.
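The negative-element case is easy to check directly. A quick TypeScript sketch (using the BN254 scalar field prime Circom defaults to; the values are illustrative):

```typescript
// "Negative" field elements satisfying a * b === n for n = 15.
const p = 21888242871839275222246405745257275088548364400416034343698204186575808495617n;

// p - 3 and p - 5 behave as -3 and -5 in the field.
const a = p - 3n;
const b = p - 5n;

// (-3) * (-5) = 15, so the constraint holds even though a and b are
// enormous field elements, not integer factors of 15.
console.log((a * b) % p === 15n); // true
```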

The fixed version adds the missing constraints:

// FIXED: properly constrained factoring circuit
// Requires circomlib's comparators (adjust the include path to your setup)
include "circomlib/circuits/comparators.circom";

template Factor() {
    signal input n;
    signal input a;
    signal input b;

    // Constrain that a * b == n
    signal product;
    product <== a * b;
    n === product;

    // Prevent trivial factorizations: a and b must be > 1 and < n.
    // Note: circomlib's comparators assume their inputs fit in the
    // given bit width. Add Num2Bits range checks on a, b, and n first,
    // or the comparisons themselves can be gamed with large field elements.
    component aRangeCheck = GreaterThan(252);
    aRangeCheck.in[0] <== a;
    aRangeCheck.in[1] <== 1;
    aRangeCheck.out === 1;

    component bRangeCheck = GreaterThan(252);
    bRangeCheck.in[0] <== b;
    bRangeCheck.in[1] <== 1;
    bRangeCheck.out === 1;

    component aNCheck = LessThan(252);
    aNCheck.in[0] <== a;
    aNCheck.in[1] <== n;
    aNCheck.out === 1;

    component bNCheck = LessThan(252);
    bNCheck.in[0] <== b;
    bNCheck.in[1] <== n;
    bNCheck.out === 1;
}

Every signal that isn't fully determined by other constraints needs explicit bounds. If you leave a degree of freedom in the witness, someone will exploit it.

Witness grinding and malleability

Even when a circuit is fully constrained for honest inputs, there can be multiple valid witnesses for the same public input. This is witness malleability, and it's a problem when the protocol assumes proof uniqueness.

Witness grinding is the active exploitation of this: a malicious prover tries many witnesses until they find one that satisfies additional external conditions (like a favorable hash or a replay in a different context). It doesn't break soundness per se, but it breaks protocol-level assumptions that depend on proofs being non-malleable.
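The textbook malleability example is a square-root circuit: for a public input s, both r and p − r satisfy r² === s, so every public input admits two witnesses. A quick TypeScript sketch (BN254 scalar field prime, illustrative values):

```typescript
// Two distinct witnesses for the same public input s in a circuit
// that constrains r * r === s.
const p = 21888242871839275222246405745257275088548364400416034343698204186575808495617n;
const s = 9n;

const r1 = 3n;
const r2 = p - 3n; // the "other" square root: (-3)^2 = 9

console.log((r1 * r1) % p === s); // true
console.log((r2 * r2) % p === s); // true: same public input, different witness
```

If the protocol derives anything security-critical from the witness itself (a nullifier, a commitment opening), this second witness is an attack surface.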

Missing range checks

In finite field arithmetic, there's no concept of "overflow." A value that looks like 5 could also be p - 5 (where p is the field prime), and both satisfy the same multiplicative constraints. If your circuit assumes values are within a certain range (e.g., a uint64), you need explicit range check constraints. We see this one constantly — especially in circuits ported from regular code where the developer assumed integer semantics.
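Here's why this bites in practice, sketched in TypeScript (BN254 scalar field prime; a hypothetical "balance" example, not from any audited codebase):

```typescript
// Field subtraction wraps: an underflowing "balance" becomes an
// enormous but perfectly valid field element.
const p = 21888242871839275222246405745257275088548364400416034343698204186575808495617n;
const mod = (x: bigint) => ((x % p) + p) % p;

const balance = 0n;
const withdrawal = 1n;

// In the field, 0 - 1 is p - 1, not an error.
const newBalance = mod(balance - withdrawal);
console.log(newBalance === p - 1n); // true: satisfies any purely arithmetic constraint
// Only an explicit range check (e.g. newBalance < 2^64) rejects this witness.
```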

Trusted setup issues

For circuits using KZG or Groth16, the trusted setup ceremony produces a structured reference string (SRS). If the toxic waste from the ceremony isn't properly destroyed, anyone with it can forge proofs. We don't audit the ceremony itself (that's a different problem), but we do verify that the circuit actually uses the SRS correctly and that verification contracts on-chain check all the pairing equations.


Soundness vs completeness — both matter

Soundness means: a malicious prover can't convince the verifier of a false statement. If your circuit is unsound, fake proofs pass verification. This is the catastrophic case.

Completeness means: an honest prover can always generate a valid proof for a true statement. If your circuit is incomplete, legitimate users can't prove things they should be able to prove. This blocks functionality — it's a liveness bug, not a safety bug, but it still matters.

In practice, we've seen completeness bugs that are just as impactful as soundness bugs. A circuit that rejects valid withdrawals effectively freezes user funds. Both directions need testing.

Audit methodology: Circom vs Noir vs Halo2

The constraint model is the same across frameworks, but the tooling and bug patterns differ enough that the audit approach has to adapt.

Circom: Template-based, explicit signal flow. The main risk is the gap between the <-- operator (assignment only, no constraint) and <== (assignment + constraint). Every <-- is a red flag that needs manual verification of a corresponding === constraint. We grep for every <-- and verify the matching constraint exists.
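The canonical instance of this pattern is computing a field inverse, which needs `<--` because division isn't expressible as a quadratic constraint. A hypothetical sketch (not from any audited codebase):

```circom
// <-- assigns the witness value only; the === line is what constrains it.
template Inverse() {
    signal input in;
    signal output out;

    out <-- 1 / in;  // witness computation, no constraint generated
    in * out === 1;  // without this line, the prover can set out to anything
}
```

Delete the `===` line and the circuit still compiles and still produces valid proofs for honest provers — it just also produces valid proofs for dishonest ones.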

Noir: Higher-level, closer to Rust syntax. The compiler handles more of the constraint generation, which means fewer raw under-constrained bugs but more risk of the developer misunderstanding what the compiler does and doesn't constrain. assert() in Noir generates constraints, but code inside unconstrained functions runs only in the prover, and its results must be explicitly re-constrained afterward. The audit focuses on ensuring every security-relevant assertion is actually in the constraint system, not just in the prover's logic.

Halo2: Rust-based, column/region model. The configuration phase (where you define gates and columns) is separate from the synthesis phase (where you assign values). Bugs here often involve mismatches between what the gates constrain and what the synthesis assumes. The region abstraction can also mask whether a cell is actually constrained or just assigned.

Testing strategies that actually work

Formal verification of constraints

For small-to-medium circuits, you can extract the constraint system as a set of polynomial equations and verify properties symbolically. Tools like Ecne (for Circom) and Alucard can check whether a circuit is under-constrained by analyzing the degrees of freedom in the constraint system. This is the gold standard but doesn't scale to very large circuits.

Differential testing

Build a reference implementation in a normal programming language (TypeScript, Python) that computes the expected output for given inputs. Then run the same inputs through the circuit's witness generator and compare. Any divergence is a bug — either in the reference implementation or the circuit.

Property-based testing of circuit I/O

This is where we get the most mileage. Define properties over the circuit's public inputs and outputs and fuzz the private inputs:

// Property-based test: circuit output must match
// reference implementation for all valid inputs
import { expect } from "chai";
import * as fc from "fast-check";
import { wasm } from "./circuit_wasm";
import { referenceCompute } from "./reference";

describe("Circuit I/O correctness", () => {
    it("should match reference for arbitrary valid inputs", () => {
        fc.assert(
            fc.property(
                fc.bigInt(2n, 2n ** 64n - 1n),
                fc.bigInt(2n, 2n ** 64n - 1n),
                (a, b) => {
                    const expected = referenceCompute(a, b);
                    const witness = wasm.calculateWitness({
                        a: a.toString(),
                        b: b.toString(),
                    });
                    const circuitOutput = witness[1]; // first output signal
                    expect(circuitOutput).to.equal(expected.toString());
                }
            ),
            { numRuns: 10000 }
        );
    });

    it("should reject invalid witnesses", () => {
        fc.assert(
            fc.property(
                fc.bigInt(2n, 2n ** 64n - 1n),
                fc.bigInt(2n, 2n ** 64n - 1n),
                (a, b) => {
                    // Tamper with the witness and verify proof fails
                    const witness = wasm.calculateWitness({
                        a: a.toString(),
                        b: b.toString(),
                    });
                    witness[2] = (BigInt(witness[2]) + 1n).toString();
                    expect(() => wasm.checkConstraints(witness)).to.throw();
                }
            ),
            { numRuns: 5000 }
        );
    });
});

The key insight: you're not just testing that correct inputs produce correct outputs. You're testing that incorrect witnesses fail constraint checking. Both directions matter — it mirrors the soundness/completeness distinction.

For more on how property-based testing applies to smart contracts generally, see our guide on fuzzing methodology.

Practical checklist for ZK security review

Before you ship a ZK circuit to production, walk through this list:

  • Constraint completeness: every <-- (Circom) or unconstrained assignment has a matching constraint. No signal is left free.
  • Range checks: every signal that represents a bounded value (amounts, indices, booleans) has explicit range constraints. Don't rely on field arithmetic behaving like integer arithmetic.
  • Nullifier uniqueness: if the circuit produces nullifiers (for privacy protocols), verify they're deterministically derived from private inputs with no malleability.
  • Public input validation: the verifier contract checks all public inputs against expected bounds before accepting the proof.
  • Witness uniqueness: for each set of public inputs, determine if multiple valid witnesses exist. If they do, assess whether the protocol is safe under witness malleability.
  • Differential testing: a reference implementation exists and has been tested against the circuit across thousands of random inputs.
  • Soundness testing: tampered witnesses are rejected by the constraint system. Proofs generated from tampered witnesses are rejected by the verifier.
  • Verifier contract correctness: the on-chain verifier matches the circuit's verification key and checks all pairing equations (for Groth16/KZG) or hash chains (for FRI-based systems).
  • Trusted setup verification: if applicable, the SRS was generated in a ceremony with sufficient participants, and the circuit uses the correct SRS.
  • Upgrade path: if the circuit or verification key can be upgraded, the governance mechanism is sound and timelocked.

ZK circuits are powerful, but they're unforgiving. A single missing constraint can invalidate the entire security model in ways that are invisible until exploited. The testing and audit methodology has to match that reality.
