Welcome, fellow zk-enjoyer.
If you’ve been following along, you know how polynomials work as a data structure and why roots of unity make evaluating them fast. Now comes the question everyone skips: before the polynomial checks anything, where does the data come from?
This is post #3 in the “first principles” ZK series:
- Polynomials and finite fields
- Roots of unity
- Execution traces ← you are here
- FFT/NTT interpolation
- Constraints
- Polynomial packing
- KZG commitment
- Proof and verification
You already know how to do this
Here’s a function you’ve written a hundred times:
```rust
fn is_valid_age(x: u32) -> bool {
    x >= 18 && x <= 90
}
```
The CPU runs it, checks the condition, returns true or false. The result is immediate. You trust the machine.
This isn’t hypothetical. Poland’s digital ID (e-dowód) on mobile phones could generate exactly this kind of proof: a hotel checks you’re 18+ from a QR code without seeing your birth date. The circuit behind it is a range check like the one we’re building here. (More on real-world ZK use cases in the Kraków meetup Q&A.)
Now imagine you need to prove to someone that this returned true for a specific value of x, without revealing x. The verifier can’t run your code on your private inputs. They get a proof, a small piece of data, and they verify it mathematically.
To produce that proof, you need an execution trace.
What is an execution trace?
An execution trace is a table. Each row captures a value that was relevant to your computation. You fill it in while running the program, before any proving happens.
For is_valid_age(45), the two conditions x >= 18 and x <= 90 get rewritten as subtractions: x - 18 (should be non-negative if x >= 18) and 90 - x (should be non-negative if x <= 90). This rewriting is part of translating code into polynomials. It’s usually the hardest part of writing circuits, because the equations have to catch every corner case and enforce the same conditions as the original code. More on how constraints work in later posts. For now, these are the values we track:
| step | value |
|---|---|
| x | 45 |
| x – 18 | 27 |
| 90 – x | 45 |
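Filling in those three values is ordinary code. Here's a plain-Rust sketch, with `u32` standing in for field elements (and assuming 18 ≤ x ≤ 90 so the subtractions don't underflow):

```rust
// Sketch: the three tracked values for is_valid_age(x).
// Plain u32 here; the real trace stores field elements.
// Assumes 18 <= x <= 90 so the u32 subtractions don't underflow.
fn trace_values(x: u32) -> (u32, u32, u32) {
    (x, x - 18, 90 - x)
}

fn main() {
    let (x, lower, upper) = trace_values(45);
    println!("x = {x}, x - 18 = {lower}, 90 - x = {upper}");
}
```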
That’s it. You’re writing down every intermediate value so a polynomial can encode them later. The polynomial doesn’t execute your logic. It encodes this table, and constraints check that the numbers are consistent.
OK, so we rewrote the comparison from the if as subtractions. But we still need to check that the result is non-negative, which looks like the same kind of check we started with. It is, almost, but not quite.
Why not just compare directly?
Here’s where it gets interesting. And where your intuition will fight you. In finite fields, there are no negative numbers. Everything wraps around. If this section feels weird, that’s normal. Read slowly.
Think of a car odometer that only goes to 999,999. Drive one more mile past that and it shows 000,000, not 1,000,000. Try to go backwards from 000,000 and you get 999,999, not -1.
ZK proofs operate over finite fields, which work the same way but with a much bigger odometer. Every number is in the range [0, p-1] and arithmetic wraps around. There is no concept of “greater than” or “less than” in the strict sense.
This means x - 18 >= 0 is meaningless. If x = 5, then x - 18 doesn’t equal -13. It equals p - 13, which is a large positive-looking number:
Click Run. Both results look non-negative; the field can't tell them apart by sign.
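You can see the wraparound without any crypto library. This sketch uses a toy prime as a stand-in for the real (~255-bit) field modulus; the principle is identical:

```rust
// Toy prime standing in for the real field modulus (an assumption
// for illustration -- the real p is a ~255-bit prime).
const P: u64 = 1_000_003;

// Field subtraction: wraps around instead of going negative.
fn sub_mod(a: u64, b: u64) -> u64 {
    (a + P - b) % P
}

fn main() {
    // x = 45: a genuinely small result.
    println!("45 - 18 = {}", sub_mod(45, 18)); // 27
    // x = 5: "negative" result wraps to p - 13.
    println!(" 5 - 18 = {}", sub_mod(5, 18)); // 999_990, i.e. p - 13
}
```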
So at this point it looks like rewriting comparisons as subtractions and checking for non-negative results was completely pointless. In a finite field every result looks non-negative. Bear with me, we’re getting there. (If you want more on why circuits only speak addition and multiplication, see Q4 in the meetup Q&A.)

Do you even range-check?
You use bit decomposition.
Wait what???
Yes, we’re counting bits. Like it’s 1975. Welcome to cryptography.
You take the result of the subtraction and break it down into bits. If it fits in k bits, you know it’s a small number. If it doesn’t fit, it’s huge (because the finite field wrapped it around, and those wrapped numbers are huuuge), and the check fails. That’s it. 8 bits can hold 0 to 255, so if x – 18 fits in 8 bits, x is somewhere between 18 and 273.
One check isn’t enough. You need two: one for the lower bound (x - 18 fits in k bits, so x >= 18) and one for the upper bound (90 - x fits in k bits, so x <= 90). Both together prove x is between 18 and 90. That’s why the trace ends up with two bit decompositions, one per bound.
The gap between 18 and 90 is 72, so 8 bits is more than enough (8 bits can hold up to 255). But each check alone has a range that is too wide:
- col_lower alone accepts x from 18 up to 273 (because x – 18 just needs to fit in 8 bits)
- col_upper alone accepts x from -165 up to 90 (because 90 – x just needs to fit in 8 bits)
- both together accept only x from 18 to 90
Try x = 91 (one above the upper bound). col_lower: 91 – 18 = 73, fits in 8 bits, passes. col_upper: 90 – 91 = p – 1, a ~254-bit number, does not fit in 8 bits, fails. The lower check alone would have let x = 91 through. You need both.
Try x = 17 (one below the lower bound). col_upper: 90 – 17 = 73, fits, passes. col_lower: 17 – 18 = p – 1, does not fit in 8 bits, fails.
Click Run. For x = 91, the lower check gives 73 (small, fits in 8 bits), but the upper check wraps around to a ~254-bit number. For x = 17, it’s the other way around. Those giant numbers will never fit in 8 bits.
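Both bounds together can be sketched in a few lines. Same toy prime as before (an assumption; with the real modulus the wrapped values are ~254-bit, but the "fits in 8 bits" test behaves the same):

```rust
// Toy prime standing in for the real field modulus (illustration only).
const P: u64 = 1_000_003;

fn sub_mod(a: u64, b: u64) -> u64 {
    (a + P - b) % P
}

// A wrapped ("negative") result is huge and never fits in 8 bits.
fn fits_in_8_bits(v: u64) -> bool {
    v < 256
}

// Both bounds together: x - 18 small AND 90 - x small.
fn range_check(x: u64) -> bool {
    fits_in_8_bits(sub_mod(x, 18)) && fits_in_8_bits(sub_mod(90, x))
}

fn main() {
    for x in [45, 91, 17] {
        println!(
            "x = {x}: lower = {}, upper = {}, passes = {}",
            sub_mod(x, 18),
            sub_mod(90, x),
            range_check(x)
        );
    }
}
```

For x = 91 the lower check passes (73) but the upper check wraps to p − 1 and fails; for x = 17 it's the mirror image.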
Here’s why that’s a hard constraint, not just a flag. To prove a k-bit decomposition, the prover has to provide k individual bits, each either 0 or 1, that add up to the original value when weighted by powers of two. For 27 that looks like:
1·1 + 1·2 + 0·4 + 1·8 + 1·16 + 0·32 + 0·64 + 0·128 = 27
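That weighted sum is easy to verify in code (bits are listed least significant first, matching the b0…b7 rows of the trace):

```rust
// Recompose a value from its bits, least significant bit first:
// sum of b_i * 2^i.
fn recompose(bits: &[u32]) -> u32 {
    bits.iter().enumerate().map(|(i, &b)| b << i).sum()
}

fn main() {
    let bits = [1, 1, 0, 1, 1, 0, 0, 0]; // bits of 27, b0 first
    println!("{}", recompose(&bits)); // prints 27
}
```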
Wait, wait, stop. Do you see what this is? Do you see it?

The most you can reach with 8 bits is 1+2+4+8+16+32+64+128 = 255. If the value is 256 or more, no combination of 0s and 1s adds up to it. So we effectively have a non-negative check that works over a finite field, in the form of a polynomial: if the number is genuinely small (a valid subtraction result), it can be represented as the polynomial above. If the field wrapped it around into a giant number, it can’t.
There’s one more thing needed: a constraint that forces each bit to actually be 0 or 1, not some other field element. More on that in the constraints post.
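As a preview, that constraint is usually written b · (b − 1) = 0: it holds only when b is 0 or 1. A sketch over the same toy prime (an assumption; the constraints post covers the real form):

```rust
// Toy prime standing in for the real field modulus (illustration only).
const P: u64 = 1_000_003;

// Booleanity check: b * (b - 1) = 0 in the field.
// Only b = 0 and b = 1 satisfy it; any other field element fails.
fn is_bit(b: u64) -> bool {
    (b * ((b + P - 1) % P)) % P == 0
}

fn main() {
    println!("0: {}", is_bit(0)); // true
    println!("1: {}", is_bit(1)); // true
    println!("2: {}", is_bit(2)); // false
}
```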
The full trace
Now you can see why the trace looks the way it does. For is_valid_age(45), the trace has three columns. col_x holds the secret value. col_lower and col_upper hold the subtraction results and their bit decompositions. Without col_x in the trace, there would be no way to prove that col_lower[0] actually equals x - 18 and not some arbitrary number.
| row | col_x | col_lower (x-18) | col_upper (90-x) | |
|---|---|---|---|---|
| 0 | 45 | 27 | 45 | ← values |
| 1 | 0 | 1 | 1 | ← b0 |
| 2 | 0 | 1 | 0 | ← b1 |
| 3 | 0 | 0 | 1 | ← b2 |
| 4 | 0 | 1 | 1 | ← b3 |
| 5 | 0 | 1 | 0 | ← b4 |
| 6 | 0 | 0 | 1 | ← b5 |
| 7 | 0 | 0 | 0 | ← b6 |
| 8 | 0 | 0 | 0 | ← b7 |
Row 0 holds the subtraction results. Rows 1 through 8 hold the individual bits. That’s the whole structure.
You didn’t write if. You wrote down what had to be true for the if to pass, for a specific secret value x.
In arkworks: the trace as field elements
Throughout this series we build PLONK by hand using ark-ff, just field arithmetic. Each cell in the trace is a field element.
The snippet below computes the subtraction results and breaks them into bits, the same bit decomposition from the previous section, but now in arkworks code. The key line is (n >> i) & 1 inside to_bits_le: it extracts bit i from the number. Everything else is plumbing.
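For readers without the interactive runner, here is a plain-Rust sketch of that decomposition. The arkworks version wraps each bit in a field element (e.g. via Fr::from); this sketch keeps everything as u64, but the key line is the same:

```rust
// Decompose n into its k lowest bits, least significant first.
// (n >> i) & 1 extracts bit i.
fn to_bits_le(n: u64, k: usize) -> Vec<u64> {
    (0..k).map(|i| (n >> i) & 1).collect()
}

fn main() {
    println!("{:?}", to_bits_le(27, 8)); // bits of x - 18 for x = 45
    println!("{:?}", to_bits_le(45, 8)); // bits of 90 - x for x = 45
}
```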
Now we assemble the trace. Each column is a vector: the value in position 0, followed by 8 bits. That’s the whole trace.
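A plain-Rust sketch of that assembly (u64 standing in for field elements; the column layout matches the trace table above, and x is assumed to be in range so the subtractions don't underflow):

```rust
// Decompose n into its k lowest bits, least significant first.
fn to_bits_le(n: u64, k: usize) -> Vec<u64> {
    (0..k).map(|i| (n >> i) & 1).collect()
}

// Each column: the value at position 0, followed by its 8 bits.
// col_x carries no decomposition, so its bit rows stay zero.
fn build_trace(x: u64) -> (Vec<u64>, Vec<u64>, Vec<u64>) {
    let (lower, upper) = (x - 18, 90 - x);
    let mut col_x = vec![x];
    col_x.extend(std::iter::repeat(0).take(8));
    let mut col_lower = vec![lower];
    col_lower.extend(to_bits_le(lower, 8));
    let mut col_upper = vec![upper];
    col_upper.extend(to_bits_le(upper, 8));
    (col_x, col_lower, col_upper)
}

fn main() {
    let (cx, cl, cu) = build_trace(45);
    println!("col_x     = {cx:?}");
    println!("col_lower = {cl:?}");
    println!("col_upper = {cu:?}");
}
```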
Click Run. The output matches the trace table: row 0 holds the subtraction results, rows 1–8 hold the bit decompositions.
What the trace is for
“OK I built this trace, but what’s it for?”
Glad you asked.
A common confusion: people think proving IS executing. It’s not. Three separate steps, three different roles:
| Step | Who | What | Knows everything? |
|---|---|---|---|
| 1. Execute | Prover | Runs program, fills the trace | Yes, all inputs/outputs/steps |
| 2. Prove | Prover | Turns completed trace into a proof | Yes, but hides the secrets |
| 3. Verify | Verifier | Checks the proof | No, sees only proof + public inputs |
The constraints don’t walk through the trace row by row, discovering what happens next. The full trace is already there. They just check that all the numbers are consistent.
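Here is a literal-arithmetic sketch of what "consistent" means for this trace. The real check happens through polynomial constraints, not Rust code, and u64 stands in for field elements:

```rust
// Sketch: the consistency conditions on the completed trace, checked
// as plain arithmetic. Assumes the valid-range case (no underflow).
fn consistent(col_x: &[u64], col_lower: &[u64], col_upper: &[u64]) -> bool {
    // Recompose a column's value from its bit rows (b0 first).
    let recompose =
        |col: &[u64]| -> u64 { col[1..].iter().enumerate().map(|(i, &b)| b << i).sum() };

    col_lower[0] == col_x[0] - 18            // lower witness matches x
        && col_upper[0] == 90 - col_x[0]     // upper witness matches x
        && recompose(col_lower) == col_lower[0] // bits recompose the value
        && recompose(col_upper) == col_upper[0]
        && col_lower[1..].iter().chain(&col_upper[1..]).all(|&b| b <= 1) // booleanity
}

fn main() {
    let col_x = [45, 0, 0, 0, 0, 0, 0, 0, 0];
    let col_lower = [27, 1, 1, 0, 1, 1, 0, 0, 0];
    let col_upper = [45, 1, 0, 1, 1, 0, 1, 0, 0];
    println!("{}", consistent(&col_x, &col_lower, &col_upper));
}
```

Tamper with any cell and at least one condition breaks; that is all "checking the trace" means.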

One thing to take away from this post: the prover doesn’t prove by re-running your code. The prover fills in a table, and the math checks that the table is consistent. Execution first, proof second. Always.
Next up
Post #4, FFT/NTT interpolation: how to turn this table into polynomials. And do some cool stuff with it.
Coming soon.
The FFT/NTT algorithm is covered in Roots of Unity.
Playground
This editor is interactive. Copy code from above, modify it, break it, experiment. Try changing x to a value outside [18, 90] and see how the trace changes.
If you have questions or want to discuss this post, join the ZK for Rust Devs Discord.
This is post #3 in the “first principles” ZK series on rustarians.com.