Chapter 14: Lookup Arguments
In 2019, ZK engineers hit a wall.
They wanted to verify standard computer programs, things like SHA-256 or ECDSA signatures, but the circuits were exploding in size. The culprit was bit decomposition. Operations that are trivial in silicon (bitwise XOR, range checks, comparisons) require decomposing values into individual bits, processing each bit, and reassembling. A single XOR takes roughly 30 constraints. A 32-bit range check costs 32 boolean constraints. Verifying a 64-bit CPU instruction set was like simulating a Ferrari using only wooden gears.
Ariel Gabizon and Zachary Williamson realized they didn't need to simulate the gears. They just needed to check the answer key. This realization, that you can replace computation with table lookups, broke the bottleneck. Instead of decomposing values into bits, just look up the answer in a precomputed table.
The insight built on earlier work (Bootle et al.'s 2018 "Arya" paper had explored lookup-style arguments), but Plookup made it practical by repurposing PLONK's permutation machinery. Range checks become a lookup into a table of valid values. Bitwise operations become a lookup into a table of valid input-output triples. Membership in these tables costs a few constraints, regardless of what the table encodes. The architecture shifted, and complexity moved from constraint logic to precomputed data.
The field accelerated. Haböck's LogUp (2022) replaced grand products with sums of logarithmic derivatives, eliminating sorting overhead and enabling cleaner multi-table arguments. Setty, Thaler, and Wahby's Lasso (2023) achieved prover costs scaling with lookups performed rather than table size, enabling tables of size $2^{128}$, large enough to hold the evaluation table of any 64-bit instruction. The "lookup singularity" emerged: a vision of circuits that do nothing but look things up in precomputed tables.
Today, every major zkVM relies on lookups. Cairo, RISC-Zero, SP1, and Jolt prove instruction execution not by encoding CPU semantics in constraints, but by verifying that each instruction's behavior matches its entry in a precomputed table. Complexity moves from constraint logic to precomputed data.
The Lookup Problem
Chapter 13 introduced the grand product argument for copy constraints in PLONK. The idea: to prove that wire values at positions related by a permutation $\sigma$ are equal, compute the product $\prod_i \frac{w_i + \beta \cdot i + \gamma}{w_i + \beta \cdot \sigma(i) + \gamma}$. If the permutation constraint is satisfied (values at linked positions match), this product telescopes to 1. Lookup arguments generalize this technique from equality to containment, proving not that two multisets are the same, but that one is contained in another.
The formal problem:
Given a multiset of witness values $f = \{f_1, \ldots, f_n\}$ (the "lookups") and a public multiset $t = \{t_1, \ldots, t_d\}$ (the "table"), prove $f \subseteq t$: every $f_i$ appears somewhere in $t$.
The name "lookup" comes from how these proofs work in practice. Imagine you're proving a circuit that computes XOR. The table contains all valid XOR triples: $t = \{(a, b, a \oplus b)\}$ over all inputs of some fixed width. Your circuit claims $c = a \oplus b$ for some witness values. Rather than encoding XOR algebraically, you "look up" the triple $(a, b, c)$ in the table. If it's there, the XOR is correct. The multiset $f$ collects all the triples your circuit needs to verify; the subset claim $f \subseteq t$ says every lookup found a valid entry.
A dictionary example makes this concrete. Imagine you want to prove you spelled "Cryptography" correctly. The arithmetic approach would be to write down the rules of English grammar and phonetics, then derive the spelling from first principles. Slow, complex, error-prone. The lookup approach would be to open the Oxford English Dictionary to page 412, point to the word "Cryptography," and say "there." The lookup argument is proving that your tuple (the word you claim) exists in the set (all valid English words). You don't need to understand why it's valid; you just need to show it's in the book.
The Naive Approach: Product of Roots
A natural idea: two multisets are equal iff the polynomials having those elements as roots are equal. If every lookup $f_i$ appears in the table $t$, we can write:

$$\prod_{i=1}^{n} (X - f_i) = \prod_{j=1}^{d} (X - t_j)^{m_j}$$

where $m_j$ counts how many times table entry $t_j$ appears among the lookups.
Example: Lookups $f = \{2, 2, 5\}$ into table $t = \{2, 5\}$.
- Left side: $(X - 2)(X - 2)(X - 5) = (X - 2)^2 (X - 5)$
- Right side: $(X - 2)^{m_1}(X - 5)^{m_2} = (X - 2)^2 (X - 5)^1$
The polynomials match because the multisets match: $f$ contains two 2s and one 5, which is exactly what the multiplicities $m_1 = 2$, $m_2 = 1$ encode.
This identity is mathematically valid, but expensive to verify in a circuit. Computing $(X - t_j)^{m_j}$ requires the binary decomposition of each multiplicity $m_j$. If lookups can repeat up to $n$ times, each multiplicity needs $\log n$ bits, blowing up the circuit inputs.
Different lookup protocols avoid this cost in different ways. Plookup sidesteps multiplicities entirely by using a sorted merge. LogUp transforms the product into a sum where multiplicities become simple coefficients rather than exponents.
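A quick sanity check of the product-of-roots identity, evaluating both sides at an arbitrary sample point modulo a small prime. The point and prime here are illustrative, not part of any protocol:

```python
from collections import Counter

def naive_product_check(lookups, table, x=7, p=97):
    """Evaluate prod(x - f_i) and prod((x - t_j)^m_j) at a sample point x mod p."""
    m = Counter(lookups)                      # m_j: how often t_j occurs among the lookups
    left = 1
    for f in lookups:
        left = left * (x - f) % p
    right = 1
    for t in set(table):
        right = right * pow(x - t, m[t], p) % p
    return left == right

naive_product_check([2, 2, 5], [2, 5])   # True: multisets agree
naive_product_check([2, 9], [2, 5])      # False: 9 never appears on the right side
```

A real protocol would evaluate at a verifier-chosen random point in a large field; here a fixed point suffices to illustrate the identity.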
Plookup
Plookup's insight is to transform the subset claim into a permutation claim. The construction involves three objects:
- $f$: the lookup values (what you're looking up, your witness data)
- $t$: the table (all valid values, public and precomputed)
- $s$: the sorted merge of $f$ and $t$ (auxiliary, constructed by the prover)
The key is that $s$ encodes how $f$ fits into $t$. If every $f_i$ is in $t$, then $s$ is just $t$ with duplicates inserted at the right places.
Plookup's Sorted Vector
Define $s = \mathrm{sort}(f \cup t)$, the concatenation of lookup values and table values, sorted.

If $f \subseteq t$, then every element of $f$ appears somewhere in $t$. In the sorted vector $s$, elements from $f$ "slot in" next to their matching elements from $t$.

For every adjacent pair $(s_i, s_{i+1})$ in $s$, either:
- $s_{i+1} = s_i$ (a repeated value, meaning some $f_j$ was inserted next to its matching table entry), or
- $(s_i, s_{i+1})$ is also an adjacent pair in the sorted table $t$

If some $f_j \notin t$, then $s$ contains a transition that doesn't exist in $t$, and the check fails.
Example (3-bit range check):
- Lookups: $f = \{2, 5\}$ (prover claims both are in $[0, 7]$)
- Table: $t = (0, 1, 2, 3, 4, 5, 6, 7)$
- Sorted: $s = (0, 1, 2, 2, 3, 4, 5, 5, 6, 7)$
Adjacent pairs in $s$: $(0,1), (1,2), (2,2), (2,3), (3,4), (4,5), (5,5), (5,6), (6,7)$
The pairs $(2,2)$ and $(5,5)$ are repeats; these correspond to the lookups. All other pairs appear as adjacent pairs in $t$. The subset claim holds.
If instead $f = \{2, 9\}$:
- Sorted: $s = (0, 1, 2, 2, 3, 4, 5, 6, 7, 9)$
- The pair $(7, 9)$ is neither a repeat nor an adjacent pair in $t$
- The subset claim fails
Plookup's Grand Product Check
The adjacent-pair property translates to a polynomial identity via a grand product. The construction is clever, so let's build it step by step.
The core idea is to encode each adjacent pair $(a, b)$ as a single field element $\gamma(1+\beta) + a + \beta b$, for verifier challenges $\beta, \gamma$. The term $\gamma(1+\beta)$ acts as a "separator": different pairs map to different field elements (with high probability over random $\beta, \gamma$). Multiplying all these pair-encodings together gives a fingerprint of the multiset of adjacent pairs.

$G$, the fingerprint of $s$'s adjacent pairs:

$$G = \prod_{i=1}^{n+d-1} \left( \gamma(1+\beta) + s_i + \beta s_{i+1} \right)$$

This is just the product of all adjacent-pair encodings in the sorted vector $s$.

$F$, the fingerprint we expect if $f \subseteq t$:

$$F = (1+\beta)^n \prod_{i=1}^{n} (\gamma + f_i) \cdot \prod_{j=1}^{d-1} \left( \gamma(1+\beta) + t_j + \beta t_{j+1} \right)$$

Where does this come from? Think about what $s$ looks like when $f \subseteq t$. The sorted merge contains the table as a "backbone," with lookup values from $f$ inserted as duplicates next to their matches. So the adjacent pairs in $s$ fall into two categories:
- Pairs from $t$: The consecutive pairs $(t_j, t_{j+1})$ from the original table. These appear in $s$ regardless of what $f$ contains; they're the skeleton that $f$ gets merged into. In $F$, these correspond to the last product $\prod_j \left( \gamma(1+\beta) + t_j + \beta t_{j+1} \right)$, which doesn't factorize.
- Repeated pairs from inserting $f$: When a lookup value $v$ slots into $s$ next to its matching table entry, we get a repeated pair $(v, v)$. The encoding of $(v, v)$ is $\gamma(1+\beta) + v + \beta v = (1+\beta)(\gamma + v)$. This does factorize. So the $n$ repeated pairs contribute $(1+\beta)^n \prod_i (\gamma + f_i)$ to $F$.

$F$ is the fingerprint of exactly these pairs, the table backbone plus valid duplicate insertions. If $G$ (the actual fingerprint of $s$) equals $F$, then $s$ has the right structure: no "bad" transitions like $(7, 9)$ that would appear if some $f_j \notin t$.
Let's use a 3-element table to see the algebra concretely.
- Table: $t = (1, 2, 3)$ (so $d = 3$)
- Lookups: $f = (2)$ (so $n = 1$)
- Sorted merge: $s = (1, 2, 2, 3)$
Computing $G$ (fingerprint of $s$'s adjacent pairs):
The pairs in $s$ are: $(1,2), (2,2), (2,3)$. Encode each:

$$G = \left( \gamma(1+\beta) + 1 + 2\beta \right) \cdot \left( \gamma(1+\beta) + 2 + 2\beta \right) \cdot \left( \gamma(1+\beta) + 2 + 3\beta \right)$$

Computing $F$ (expected fingerprint):
- Table pairs $(1,2)$ and $(2,3)$: $\gamma(1+\beta) + 1 + 2\beta$ and $\gamma(1+\beta) + 2 + 3\beta$
- Lookup duplicate: $f_1 = 2$ contributes $(1+\beta)(\gamma + 2)$

$$F = (1+\beta)(\gamma + 2) \cdot \left( \gamma(1+\beta) + 1 + 2\beta \right) \cdot \left( \gamma(1+\beta) + 2 + 3\beta \right)$$

Why $F = G$? Notice that the pair $(2,2)$ in $s$ encodes as $\gamma(1+\beta) + 2 + 2\beta = (1+\beta)(\gamma + 2)$. This factors! So $G$'s middle term equals $F$'s $(1+\beta)(\gamma+2)$ term. The other two terms match directly. The products are identical.
Claim (Plookup): $F = G$ if and only if $f \subseteq t$ and $s$ is correctly formed.

Completeness: If $f \subseteq t$, then $s$'s adjacent pairs consist of $t$'s pairs plus a repeated pair for each lookup. Each repeated pair $(v, v)$ encodes as $(1+\beta)(\gamma + v)$, which exactly matches $F$'s structure.

Soundness: If some $f_j \notin t$, then $f_j$, when sorted into $s$, creates an adjacent pair $(a, f_j)$ or $(f_j, b)$ where neither $a$ nor $b$ equals $f_j$. This "bad transition" doesn't appear in $F$'s table backbone, and can't factor as $(1+\beta)(\gamma + v)$ either. For random $\beta, \gamma$, the probability that $F = G$ despite this mismatch is at most $(n+d)/|\mathbb{F}|$ by Schwartz-Zippel (the products have total degree at most $n + d$ in $\gamma$).
The following implementation computes $F$ and $G$ for the 3-bit range check example above (over the integers rather than a prime field, for readability):

```python
def encode_pair(a, b, beta, gamma):
    """Encode adjacent pair (a, b) as a single field element."""
    return gamma * (1 + beta) + a + beta * b

def plookup_check(lookups, table, beta=2, gamma=5):
    """Verify that the lookups are contained in the table via Plookup's F = G identity."""
    s = sorted(lookups + table)
    # G: fingerprint of s's adjacent pairs
    G = 1
    for i in range(len(s) - 1):
        G *= encode_pair(s[i], s[i + 1], beta, gamma)
    # F: expected fingerprint = (1+beta)^n * prod(gamma + f_i) * prod(table pairs)
    F = (1 + beta) ** len(lookups)
    for f in lookups:
        F *= (gamma + f)
    for i in range(len(table) - 1):
        F *= encode_pair(table[i], table[i + 1], beta, gamma)
    return F, G, (F == G)

# 3-bit range check: {2, 5} in [0, 7]
plookup_check([2, 5], list(range(8)))  # F == G, returns True
# Invalid: 9 not in table
plookup_check([2, 9], list(range(8)))  # F != G, returns False
```
Integrating with PLONK
The grand product check $F = G$ is the mathematical core of Plookup (Gabizon-Williamson 2020). But to use it in a SNARK, we need to encode the check as polynomial constraints that PLONK can verify. This means:
- The table $t$ becomes a polynomial $t(X)$ committed during setup
- The sorted vector $s$ becomes polynomials $h_1(X), h_2(X)$ the prover commits to
- The check $F = G$ becomes an accumulator $Z(X)$ that the verifier checks via a single polynomial identity
Setup
The table is public and fixed before any proof. Encode it as a polynomial $t(X)$ with $t(\omega^j) = t_j$ for each table entry, where $\omega$ generates the evaluation domain. This polynomial is committed once and reused across all proofs; the verifier never touches the full table during verification.
The prover holds witness values $f_1, \ldots, f_n$ to look up. These are private.
Prover Computation
The prover's job is to construct the sorted vector $s$ and prove $f \subseteq t$ without revealing the witness values.
- Construct $s$: Merge $f$ and $t$, then sort. This is the sorted vector $s$ from the theory above.
- Split $s$ into halves: The sorted vector has length $n + d$ (lookups plus table), but PLONK's evaluation domain has size matching the circuit. To fit $s$ into the constraint system, split it into two polynomials $h_1(X)$ and $h_2(X)$. The constraints will check adjacent pairs within each half and across the boundary.
- Commit to sorted polynomials: Send $[h_1], [h_2]$ to the verifier.
- Receive challenges: After Fiat-Shamir, obtain $\beta, \gamma$. These randomize the fingerprint encoding, making it infeasible for a cheating prover to forge a valid $s$.
- Build accumulator: Construct $Z(X)$, the polynomial that computes the running ratio of $F$-terms to $G$-terms. It starts at 1, accumulates one ratio term per domain point, and returns to 1 if the lookup is valid.
- Commit to accumulator: Send $[Z]$.
Constraints
Recall the goal: prove $F = G$, where $F$ is the expected fingerprint and $G$ is the actual fingerprint of $s$'s adjacent pairs. In PLONK, we encode this as polynomial identities checked via the quotient polynomial.

The accumulator $Z(X)$ computes a running ratio of $F$ and $G$ terms. If $F = G$, the ratio telescopes to 1 over the full domain.

Initialization: $Z(1) = 1$.

Recursion: At each domain point, $Z$ accumulates one step of the ratio. The left side encodes adjacent pairs from $s$ (split across $h_1, h_2$); the right side encodes the expected terms (table pairs and lookup duplicates). Schematically:

$$Z(\omega x) \cdot \prod_{k=1}^{2} \left( \gamma(1+\beta) + h_k(x) + \beta\, h_k(\omega x) \right) = Z(x) \cdot (1+\beta)\left( \gamma + f(x) \right) \left( \gamma(1+\beta) + t(x) + \beta\, t(\omega x) \right)$$

The number of lookups per gate (typically 1 or 2) determines how many lookup factors each step carries.

If $F = G$, then $Z$ returns to 1 at the end of the domain as the product telescopes. We don't add an explicit finalization constraint for this. Instead, the recursion constraint at the last domain point wraps around: since $Z(1) = 1$ by initialization and the domain is cyclic, the constraint system implicitly checks that the final accumulated value is 1.
The accumulator alone isn't sufficient. It verifies that adjacent pairs in $s$ are valid, but what if the prover constructs a fake $s$ that doesn't actually contain the lookup values $f$? The grand product equality handles this: the left side of the recursion constraint multiplies over pairs from $s$, while the right side multiplies over $f$ and $t$. For the products to match, the multisets must be equal. This is the same principle as the permutation argument in Chapter 13, but here it's embedded directly in the accumulator constraint rather than as a separate check.

The constraint assumes $s$ is sorted, since that's what makes duplicates land next to their matches. Plookup enforces this implicitly rather than with an explicit sorting check. The adjacent-pair encoding captures ordering information: since $s$ must be "sorted by $t$" (elements appear in the same order as in $t$), each adjacent pair of $t$ must appear exactly once as a non-repeat adjacent pair in $s$. If the prover reorders $s$, the adjacent pairs change, and the grand product fails. The randomness of $\beta$ and $\gamma$ prevents the prover from constructing a fake $s$ that happens to produce the same product despite having different pairs.
All of these properties are enforced by the single recursion constraint:
- The grand product equality ensures $s$ contains exactly the elements of $f$ and $t$, with no values conjured from thin air.
- The adjacent-pair encoding ensures every consecutive pair is valid (either a repeat or a table step).
- The same encoding implicitly enforces sorting: reordering $s$ changes its adjacent pairs, breaking the grand product.
If all hold, every element in $f$ found a matching entry in $t$. A cheating prover cannot slip in a value outside the table, since it would create an invalid pair that breaks the accumulator.
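To see the telescoping concretely, here is a simplified sketch of the accumulator over a toy domain. It assumes the layout where the number of lookups is $n = d - 1$ so that $s$ splits evenly into two overlapping halves, uses arbitrary small challenges, and works on plain lists rather than committed polynomials:

```python
P = 2**31 - 1  # illustrative prime modulus

def pair(a, b, beta, gamma):
    return (gamma * (1 + beta) + a + beta * b) % P

def accumulator(lookups, table, beta=2, gamma=5):
    """Running-ratio accumulator Z over a toy domain; returns all Z values."""
    d = len(table)
    assert len(lookups) == d - 1            # Plookup's n = d - 1 layout
    s = sorted(lookups + table)
    h1, h2 = s[:d], s[d - 1:]               # halves overlap at the boundary element
    Z = [1]
    for i in range(d - 1):
        num = (1 + beta) * (gamma + lookups[i]) % P
        num = num * pair(table[i], table[i + 1], beta, gamma) % P
        den = pair(h1[i], h1[i + 1], beta, gamma) * pair(h2[i], h2[i + 1], beta, gamma) % P
        Z.append(Z[-1] * num % P * pow(den, P - 2, P) % P)   # multiply by one ratio step
    return Z

Z = accumulator([2, 5, 5, 3, 0, 7, 1], list(range(8)))
(Z[0], Z[-1])   # (1, 1): the ratio telescopes back to 1
Zbad = accumulator([2, 5, 5, 3, 0, 7, 9], list(range(8)))
Zbad[-1]        # != 1: the lookup 9 is outside the table
```

The numerators multiply out to $F$ and the denominators to $G$, so the final value is $F/G$; it lands back on 1 exactly when the fingerprints match.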
Verification
The verifier checks the polynomial identities (initialization, recursion) via the standard PLONK batched evaluation. Crucially, the verifier never touches the table directly. The table polynomial was committed during setup, and the verifier only checks openings at random evaluation points. Verification cost is independent of table size $d$: a lookup into a 256-entry table costs the same as a lookup into a million-entry table.
Comparison: Custom Gates vs. Lookup Tables
Both custom gates and lookup tables extend PLONK beyond vanilla arithmetic, but they solve different problems.
Custom gates add terms to the universal gate equation. For example, adding a selector for a quintic term enables $x^5$ computation in a single constraint:

$$q_{\mathrm{pow}} \cdot (w_a^5 - w_b) = 0$$

This works well for Poseidon S-boxes, which need fifth powers. The constraint is low-degree, requires no precomputation, and adds no extra commitments. But custom gates hit a wall when the relation isn't algebraically compact. A boolean check is easy: $x(x-1) = 0$ has degree 2. A 16-bit range check would need $\prod_{i=0}^{65535} (x - i) = 0$, a degree-65536 polynomial that no proof system can handle efficiently.
Lookup tables solve this by shifting complexity from constraint degree to table size. Instead of encoding "$x \in [0, 2^{16})$" as a high-degree polynomial, we precompute a table of valid values and prove membership via the grand product. As we saw in the Verification section, the verifier never touches the table directly, so verification cost scales with the number of lookups, not the table size.
The tradeoff is that lookups add overhead. Each lookup requires entries in the sorted vector , contributions to the accumulator polynomial, and additional commitment openings. For a simple boolean check, this machinery is overkill. For a 64-bit range check or an 8-bit XOR operation, lookups are necessary.
| Problem | Custom Gate | Lookup Table |
|---|---|---|
| Boolean check ($x^2 = x$) | Ideal | Overkill |
| 8-bit range check | Possible | Efficient |
| 64-bit range check | Impractical | Essential |
| XOR/AND/OR operations | Complex | Clean |
| Poseidon S-box ($x^5$) | One gate | Unnecessary |
| Valid opcode check | Complex | Direct |
Modern systems like UltraPLONK use both: custom gates for algebraic primitives, lookup tables for everything else.
Alternative Lookup Protocols
Plookup was seminal but not unique. Several alternatives offer different trade-offs.
LogUp: The Logarithmic Derivative Approach
Recall the naive product identity from the beginning of this chapter:

$$\prod_{i=1}^{n} (X - f_i) = \prod_{j=1}^{d} (X - t_j)^{m_j}$$

Plookup avoided the multiplicity problem by using the sorted merge $s$. LogUp takes a different route: transform the product into a sum where multiplicities become coefficients rather than exponents. Taking the logarithmic derivative (i.e., $\frac{d}{dX} \log(\cdot)$) of both sides, and using $\log(ab) = \log a + \log b$ and $\frac{d}{dX} \log(X - a) = \frac{1}{X - a}$:

$$\sum_{i=1}^{n} \frac{1}{X - f_i} = \sum_{j=1}^{d} \frac{m_j}{X - t_j}$$

The exponentiation that required binary decomposition becomes simple scalar multiplication $m_j \cdot \frac{1}{X - t_j}$. Over finite fields, we don't actually compute logs or derivatives; the identity is purely algebraic. If the multisets match, the rational functions are equal. Evaluating at a random challenge $\alpha$ gives Schwartz-Zippel soundness.
This matters for several reasons:
- No sorting required. Plookup requires constructing and committing to the sorted vector $s$. LogUp skips this entirely: no sorted polynomial, no sorting constraints.
- Additive structure. Products become sums of fractions. This enables:
  - Simpler multi-table handling (just add the sums)
  - Natural integration with sum-check protocols
  - Easier batching of multiple lookup arguments
- Better cross-table lookups. When a circuit uses multiple tables (range, XOR, opcodes), LogUp handles them in a unified sum rather than separate grand products.
Worked Example: Lookups $f = \{1, 1, 2\}$ into table $t = \{1, 2, 3\}$ over $\mathbb{F}_{17}$.
The multiplicities are $m_1 = 2, m_2 = 1, m_3 = 0$. The verifier sends random challenge $\alpha = 5$. Both sides evaluate at $X = \alpha$:
Left side (lookups):

$$\frac{1}{5-1} + \frac{1}{5-1} + \frac{1}{5-2} = \frac{2}{4} + \frac{1}{3}$$

Over $\mathbb{F}_{17}$: $4^{-1} = 13$ and $3^{-1} = 6$ (since $4 \cdot 13 \equiv 1$ and $3 \cdot 6 \equiv 1$). So the left side is $2 \cdot 13 + 6 = 32 \equiv 15$.
Right side (table with multiplicities):

$$\frac{2}{5-1} + \frac{1}{5-2} + \frac{0}{5-3} = 2 \cdot 13 + 1 \cdot 6 + 0 \equiv 15$$

Both sides equal 15. The identity holds.
Verification:
If we had also tried to look up $4 \notin t$, the left side would include $\frac{1}{5-4} = 1$, and the left sum would become $15 + 1 = 16$. The multiplicities are committed before $\alpha$ is chosen, so a cheating prover would need the two rational functions themselves to agree; for any fixed multiplicities they don't, and the check fails with overwhelming probability over the choice of $\alpha$ (in a cryptographically large field).
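A few lines of Python verify the identity over a toy field, using illustrative values (lookups $\{1,1,2\}$, table $\{1,2,3\}$, challenge $\alpha = 5$ over $\mathbb{F}_{17}$):

```python
from collections import Counter

P = 17   # toy prime field

def inv(x):
    return pow(x % P, P - 2, P)        # Fermat inverse

def logup_sides(lookups, table, alpha):
    """Evaluate both sides of the LogUp identity at the challenge alpha."""
    left = sum(inv(alpha - f) for f in lookups) % P
    m = Counter(lookups)               # honest multiplicities
    right = sum(m[t] * inv(alpha - t) for t in table) % P
    return left, right

logup_sides([1, 1, 2], [1, 2, 3], alpha=5)      # (15, 15): identity holds
logup_sides([1, 1, 2, 4], [1, 2, 3], alpha=5)   # (16, 15): 4 is outside the table
```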
The LogUp bus
LogUp's additive structure enables a pattern that has become the standard architecture in STARK-based zkVMs: the bus argument. When a system has multiple specialized components (an ALU chip, a memory chip, a program counter chip), each component produces or consumes values that must be consistent across components. A CPU chip "sends" an addition operation to the ALU chip, which "receives" it and checks $a + b = c$.
The bus formalizes this as a global sum constraint. Each sender contributes $+\frac{1}{\alpha - v}$ for a value $v$ it sends. Each receiver contributes $-\frac{1}{\alpha - v}$ for a value it receives. If every sent value is received exactly once, the global sum is zero:

$$\sum_{\text{sends}} \frac{1}{\alpha - v} \;-\; \sum_{\text{receives}} \frac{1}{\alpha - v} \;=\; 0$$

This is just the LogUp identity rewritten: senders play the role of lookups $f$, receivers play the role of the table $t$. The zero-sum condition replaces the multiset equality check. Each component adds one auxiliary "running sum" column to its trace, accumulating its contribution row by row. The boundary constraint asserts that the global sum of all components' final running-sum values is zero.
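The zero-sum condition is easy to demonstrate directly. A sketch with illustrative values and an arbitrary fixed challenge (a real bus derives $\alpha$ via Fiat-Shamir and batches multi-field messages with a second challenge):

```python
P = 2**31 - 1   # illustrative prime field

def inv(x):
    return pow(x % P, P - 2, P)

def bus_balance(sends, receives, alpha=123456789):
    """Global bus sum: +1/(alpha - v) per send, -1/(alpha - v) per receive."""
    total = sum(inv(alpha - v) for v in sends)
    total -= sum(inv(alpha - v) for v in receives)
    return total % P

# The CPU sends values to the ALU; the ALU receives the same multiset
bus_balance(sends=[7, 42, 7], receives=[42, 7, 7])   # 0: every send is matched
bus_balance(sends=[7, 42], receives=[7, 41])         # nonzero: 41 was never sent
```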
The bus scales linearly with the number of components ($O(N)$ for $N$ components) rather than quadratically ($O(N^2)$) as pairwise permutation arguments would require. Every major STARK-based zkVM (SP1, RISC Zero, Stwo, OpenVM) now uses LogUp bus arguments for inter-component consistency. Chapter 20 discusses how interaction columns implementing LogUp fit into the STARK prover pipeline.
LogUp-GKR combines the bus with the GKR protocol (Chapter 7) for even greater efficiency. Instead of committing to a helper column for the reciprocals $\frac{1}{\alpha - v}$, the prover uses a GKR interactive proof to verify the fractional sums directly. This eliminates helper columns entirely, adding only logarithmically many rounds of interaction. StarkWare's Stwo prover uses LogUp-GKR over Mersenne31.
cq (Cached Quotients)
A refinement of the logarithmic derivative approach optimized for repeated lookups.
cq pre-computes quotient polynomials for the table, amortizing table processing across multiple lookup batches. The trade-off is setup overhead; benefits emerge with many lookups against the same table.
Caulk and Caulk+
Caulk (2022) asked a different question: what if the table is huge but you only perform a few lookups? Plookup's prover work scales linearly with table size, making it impractical for very large tables.
The core idea: encode the set (or table) as a polynomial $Z_T(X) = \prod_j (X - t_j)$, whose roots are exactly the set elements. To prove that a value $v$ is in the set, observe that $(X - v)$ divides $Z_T(X)$ iff $v$ is a root. KZG lets you prove this divisibility via a quotient polynomial $q(X) = Z_T(X) / (X - v)$, without revealing which root $v$ is. The quotient commitment can be computed from the table commitment using properties of KZG, and this computation is sublinear in the table size $d$.
Prover work is roughly $O(m^2 + m \log d)$ for $m$ lookups into a table of size $d$, sublinear in $d$ when $m \ll d$. The trade-off: Caulk requires a trusted setup (KZG), and the quadratic term in $m$ limits scalability for many lookups.
Caulk is actually a general membership proof protocol: given a KZG commitment to a set, prove that certain values belong to that set without revealing which positions they occupy. This makes it useful beyond lookup tables, e.g., as an alternative to Merkle proofs for set membership. Plookup and LogUp can't serve this role because they require the prover to process the entire table during proving, which defeats the purpose of a compact membership proof. Caulk's sublinear prover cost is what enables the generalization.
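The divisibility fact Caulk builds on can be checked with plain polynomial arithmetic (in the real protocol this happens under KZG commitments, never in the clear; the helpers below are hypothetical illustrations):

```python
def vanishing_poly(table):
    """Coefficients of Z_T(X) = prod (X - t), lowest degree first."""
    coeffs = [1]
    for t in table:
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i + 1] += c       # multiply c * X^i by X
            new[i] -= t * c       # multiply c * X^i by -t
        coeffs = new
    return coeffs

def divide_by_linear(coeffs, v):
    """Synthetic division by (X - v): returns (quotient, remainder).

    The remainder equals Z_T(v); it is 0 exactly when v is a root."""
    rev = list(reversed(coeffs))          # highest degree first
    b = [rev[0]]
    for a in rev[1:]:
        b.append(b[-1] * v + a)           # Horner step
    remainder = b.pop()
    return list(reversed(b)), remainder

zt = vanishing_poly([1, 2, 3])    # Z_T(X) = (X-1)(X-2)(X-3)
divide_by_linear(zt, 2)[1]        # 0: (X - 2) divides Z_T, so 2 is a member
divide_by_linear(zt, 5)[1]        # 24 = Z_T(5) != 0: 5 is not in the set
```

Caulk's contribution is proving knowledge of such a quotient against a *committed* $Z_T$ in time sublinear in the table size.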
Caulk+ refined this to $O(m^2)$ prover complexity, removing the $m \log d$ term entirely.
Halo2 Lookups
Halo2, developed by the Electric Coin Company (Zcash), integrates lookups natively with a "permutation argument" variant rather than Plookup's grand product.
The core idea: to prove $A \subseteq S$ (lookup column $A$ is contained in table column $S$), the prover constructs permuted columns $A'$ and $S'$ such that $A'$ is a permutation of $A$, $S'$ is a permutation of $S$, and in each row $i$ either $A'_i = A'_{i-1}$ (a repeat) or $A'_i = S'_i$ (a table match). This forces every element in $A$ to equal some element in $S$. The permutation constraints are enforced via a grand product argument similar to PLONK's copy constraints. Unlike Plookup, there is no explicit sorted merge; the "sorting" happens implicitly through the permutation.
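A sketch of the column construction, with hypothetical helper names; it assumes equal column lengths and that every lookup value actually appears in the table:

```python
from collections import Counter

def build_permuted(A, S):
    """Construct permuted columns A', S' in the Halo2 lookup layout (sketch)."""
    A_p = sorted(A)                                      # duplicates become adjacent
    unused = list((Counter(S) - Counter(set(A_p))).elements())
    S_p = []
    for i, a in enumerate(A_p):
        if i == 0 or a != A_p[i - 1]:
            S_p.append(a)              # first occurrence: pair with its table value
        else:
            S_p.append(unused.pop())   # repeat row: fill with an unused table entry
    return A_p, S_p

def rows_valid(A_p, S_p):
    """Each row must be a repeat (A'_i = A'_{i-1}) or a match (A'_i = S'_i)."""
    return all(a == s or (i > 0 and a == A_p[i - 1])
               for i, (a, s) in enumerate(zip(A_p, S_p)))

A_p, S_p = build_permuted([2, 5, 2, 5], [2, 5, 7, 9])
rows_valid(A_p, S_p)                    # True: every row is a repeat or a match
sorted(S_p) == sorted([2, 5, 7, 9])     # True: S' is a permutation of S
```

The real protocol then proves the two permutation claims and the per-row condition with polynomial constraints; the construction above is only the witness-generation side.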
Halo2's lookup API lets developers define tables declaratively. The proving system handles the constraint generation automatically. This made Halo2 popular for application circuits: you specify what to look up, not how the lookup argument works. Scroll, Taiko, and other L2s built on Halo2 rely on its lookup system for zkEVM implementation.
Lasso and Jolt
All the protocols above (Plookup, LogUp, Caulk, Halo2) share a limitation: the prover must commit to polynomials whose degree scales with table size.
For Plookup, the sorted vector $s$ has length $n + d$ (lookups plus table). For LogUp, the multiplicity polynomial has degree tied to the table size $d$. For Caulk, the table polynomial must be committed during setup. In every case, a table of size $2^{20}$ means million-coefficient polynomials. A table of size $2^{64}$ means polynomials with more coefficients than atoms in a grain of sand.

This is the memory bottleneck. It's not just "expensive"; it's a hard wall. The evaluation table of a 64-bit ADD instruction has $2^{128}$ entries (one per pair of 64-bit operands). No computer can store that polynomial, let alone commit to it.
Early zkVMs worked around this by using small tables (8-bit or 16-bit operations) and paying the cost in constraint complexity for larger operations. A 64-bit addition became a cascade of 8-bit additions with carry propagation. It worked, but it was slow.
Lasso (2023, Setty-Thaler-Wahby) breaks through this wall: prover costs scale with lookups performed rather than table size.
Static vs. Dynamic Tables
Before diving into Lasso's mechanism, distinguish two types of lookups:
Static tables (read-only): Fixed functions like XOR, range checks, or AES S-boxes. The table never changes during execution. Plookup, LogUp, and Lasso excel here.
Dynamic tables (read-write): Simulating RAM (random access memory). The table starts empty and fills up as the program runs. This requires different techniques (like memory-checking arguments or timestamp-based permutation checks) because the table itself is witness-dependent.
Lasso focuses on static tables, but its decomposition insight is what makes truly large tables tractable.
Decomposable Tables
Lasso exploits decomposable tables. Many tables have structure: their MLE (multilinear extension) can be written as a weighted sum of smaller subtables. Schematically, for an input split into chunks $r = (r_1, \ldots, r_c)$:

$$\tilde{T}(r_1, \ldots, r_c) = \sum_{i=1}^{c} c_i \cdot \tilde{T}_i(r_i)$$

Each subtable $\tilde{T}_i$ looks at only a small chunk of the total input. This "Structure of Sums" (SoS) property enables dramatic efficiency gains. (This is a cousin of the tensor product structure for Lagrange bases in Chapter 4; both exploit how multilinear functions over product domains inherit structure from their factors.)

Consider 64-bit AND. The conceptual table has $2^{128}$ entries (all pairs of 64-bit inputs). But bitwise AND decomposes perfectly: split inputs into sixteen 4-bit chunks, perform 16 lookups into a tiny 256-entry AND_4 table, concatenate results. The prover never touches the $2^{128}$-entry table.
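The decomposition itself is easy to demonstrate; a sketch with a hypothetical AND_4 subtable (the proof machinery around the lookups is, of course, the hard part):

```python
def and64_via_lookups(x, y):
    """64-bit AND as sixteen lookups into a 256-entry 4-bit AND table."""
    AND4 = {(a, b): a & b for a in range(16) for b in range(16)}   # precomputed once
    result = 0
    for i in range(16):
        xa = (x >> (4 * i)) & 0xF             # i-th 4-bit chunk of x
        yb = (y >> (4 * i)) & 0xF             # i-th 4-bit chunk of y
        result |= AND4[(xa, yb)] << (4 * i)   # one table lookup per chunk
    return result

x, y = 0xDEADBEEFCAFEBABE, 0x0123456789ABCDEF
and64_via_lookups(x, y) == (x & y)   # True: the chunk lookups reproduce 64-bit AND
```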
Why Prover Costs Scale with Lookups
Lasso represents the sparse access pattern (which indices were hit, how many times) using commitment schemes optimized for sparse polynomials, then proves correctness via sum-check. The prover commits only to the accessed entries and their multiplicities, never to the full table. For structured tables, the verifier can evaluate $\tilde{T}$ at a random challenge point in time logarithmic in the table size using the table's algebraic formula, without ever seeing the table itself.
Jolt: A zkVM Built on Lasso
Jolt applies Lasso to build a complete zkVM for RISC-V. The philosophy: replace arithmetization of instruction semantics with lookups.
The entire RISC-V instruction set can be viewed as one giant table mapping (opcode, operand1, operand2) to results. This table is far too large to materialize, but it's decomposable: most instructions break into independent operations on small chunks. A 64-bit XOR decomposes into 16 lookups into a 256-entry XOR_4 table. The subtables are tiny, pre-computed once, and reused across all instructions.
Jolt combines Lasso (for instruction semantics) with R1CS constraints (for wiring: program counter updates, register consistency, data flow). Why this hybrid? Arithmetizing a 64-bit XOR in R1CS requires 64+ constraints for bit decomposition; Jolt proves it with 16 cheap lookups. But simple wiring constraints are trivial in R1CS. Use each tool where it excels.
Limitations
Lasso and Jolt require decomposable table structure. Tables without chunk-independent structure don't benefit. But for CPU instruction sets, the structure is natural: most operations are bitwise or arithmetic with clean chunk decompositions.
The field continues evolving. The core insight (reducing set membership to polynomial identity) admits many instantiations, each optimizing for different table sizes, structures, and use cases.
Lookups Across Proving Systems
The lookup techniques above (Plookup, LogUp, Lasso) adapt to different proving backends. Plookup and Halo2 integrate naturally with PLONK's polynomial commitment model. Lasso and Jolt use sum-check and R1CS (via Spartan). STARK-based systems take a different path.
In STARKs, computation is represented as an execution trace: a matrix where each row is a state and columns hold registers, memory, and auxiliary values. Lookup arguments integrate by adding columns to this trace:
- The lookup table becomes one or more public columns (known to the verifier)
- Values to be looked up appear in witness columns
- A running product column accumulates the grand product (Plookup-style) or running sum (LogUp-style)
- Transition constraints enforce the recursive accumulator relation row-by-row
The FRI-based polynomial commitment then proves that these trace columns satisfy all constraints. The lookup argument's algebraic core is unchanged; only the commitment mechanism differs.
STARK-based zkVMs (Cairo, RISC0, SP1) rely heavily on this integration. Their execution traces naturally represent VM state transitions, and lookups handle instruction semantics, memory consistency, and range checks. The trace-based model makes it easy to add new lookup tables: just add columns and constraints.
Key Takeaways
General principles (apply to all lookup arguments):
- Lookup arguments shift complexity from logic to data: Precompute valid tuples; prove membership rather than computation. This is the core insight shared by Plookup, LogUp, Lasso, and all variants.
- The formal problem: Given lookups $f$ and table $t$, prove $f \subseteq t$ as multisets. Different protocols reduce this multiset inclusion to different polynomial identities.
- Cost structure: Lookup-based proofs achieve roughly constant cost per lookup, independent of the logical complexity of what the table encodes. A 16-bit range check or an 8-bit XOR costs the same as a simple membership test.
- Complements custom gates: Lookups handle non-algebraic constraints; custom gates handle algebraic primitives. Modern systems (UltraPLONK, Halo2) use both.
- zkVM foundation: Without lookup arguments, verifying arbitrary computation at scale would be infeasible. Every major zkVM relies on lookups for instruction semantics.
Plookup-specific mechanics (the sorted-merge approach from Section 2):
- Sorted vector reduction: Plookup transforms $f \subseteq t$ into a claim about the sorted merge $s = \mathrm{sort}(f \cup t)$.
- Adjacent pair property: In Plookup, every consecutive pair in $s$ is either a repeat (from $f$ slotting in) or exists as an adjacent pair in $t$.
- Grand product identity: The polynomial identity $F = G$ encodes Plookup's adjacent-pair check. The accumulator $Z(X)$ enforces this recursively, integrating with PLONK's permutation machinery.
Alternative approaches (different trade-offs):
- LogUp replaces products with sums of logarithmic derivatives: no sorting, cleaner multi-table handling, natural sum-check integration.
- Caulk achieves sublinear prover work in table size via KZG-based subset arguments, useful when few lookups access a huge table.
- Halo2 uses permutation arguments rather than sorted merges, with lookups integrated into the constraint system declaratively.
- Lasso exploits decomposable tables (SoS structure) to achieve prover costs scaling with lookups performed, not table size. Combined with sparse polynomial commitments, this enables effective tables of size $2^{128}$. Jolt applies this to build a complete zkVM.
- STARK integration: Lookup arguments adapt to trace-based proving via running product/sum columns and transition constraints, used by Cairo, RISC0, and SP1.