Appendix D: Advanced Polynomial Commitment Schemes

This appendix covers polynomial commitment schemes that achieve specialized trade-offs beyond the main KZG and IPA schemes presented in Chapter 9. These schemes are important for specific applications but involve more complex cryptographic machinery.

Hyrax: Square-Root Commitments via Tensor Structure

Chapter 9's IPA scheme suffers from linear verification time: the verifier must compute the folded generators, doing $O(n)$ work for a polynomial with $n$ coefficients. Hyrax (Wahby et al., 2018) reduces verification to $O(\sqrt{n})$ by exploiting the tensor structure of multilinear polynomials. The key insight is that polynomial evaluation can be written as a vector-matrix-vector product, and this matrix structure enables a commitment scheme where the prover commits to rows separately.

The Core Idea: From Flat Vectors to Matrices

A multilinear polynomial over $\mu$ variables has $n = 2^{\mu}$ coefficients. The naive approach stores these as a flat vector and commits with a single Pedersen commitment using $n$ generators. Evaluation then requires $O(n)$ work.

The key insight: reshape the flat vector into a matrix. Instead of a length-$n$ vector, arrange the coefficients as a $\sqrt{n} \times \sqrt{n}$ matrix $M$:

$$M = \begin{pmatrix} M_{11} & M_{12} & \cdots & M_{1\sqrt{n}} \\ M_{21} & M_{22} & \cdots & M_{2\sqrt{n}} \\ \vdots & & \ddots & \vdots \\ M_{\sqrt{n}1} & \cdots & & M_{\sqrt{n}\sqrt{n}} \end{pmatrix}$$

The entry $M_{ij}$ stores the coefficient corresponding to the evaluation at the Boolean point whose binary representation concatenates the bits of $i$ and $j$.

Why does this help? Because polynomial evaluation decomposes into a vector-matrix-vector product, and we can commit to rows separately. This reduces verification from $O(n)$ to $O(\sqrt{n})$.

Tensor Structure of Multilinear Evaluation

Recall from Chapter 5 that multilinear evaluation uses the equality polynomial:

$$\widetilde{f}(r) = \sum_{b \in \{0,1\}^{\mu}} f(b)\cdot \mathrm{eq}(b, r)$$

where $\mathrm{eq}(b, r) = \prod_{k=1}^{\mu} \big(b_k r_k + (1-b_k)(1-r_k)\big)$.

The key observation: $\mathrm{eq}$ factors across the split. If we partition the evaluation point $r = (r_L, r_R)$ where $r_L \in \mathbb{F}^{\mu/2}$ and $r_R \in \mathbb{F}^{\mu/2}$, then:

$$\mathrm{eq}(b, r) = \mathrm{eq}(b_L, r_L)\cdot \mathrm{eq}(b_R, r_R)$$

Define the Lagrange coefficient vectors:

$$L_i = \mathrm{eq}(i, r_L), \qquad R_j = \mathrm{eq}(j, r_R), \qquad i, j \in \{0, \dots, \sqrt{n}-1\}$$

Then evaluation becomes a bilinear form. Starting from the MLE definition:

$$\widetilde{f}(r) = \sum_{b \in \{0,1\}^{\mu}} f(b)\cdot \mathrm{eq}(b, r)$$

Split each index $b = (i, j)$ where $i$ indexes rows and $j$ indexes columns:

$$\widetilde{f}(r) = \sum_{i}\sum_{j} M_{ij}\cdot \mathrm{eq}\big((i,j),\, r\big)$$

Factor the equality polynomial:

$$\widetilde{f}(r) = \sum_{i}\sum_{j} M_{ij}\cdot \mathrm{eq}(i, r_L)\cdot \mathrm{eq}(j, r_R)$$

Substitute the Lagrange vectors $L$ and $R$:

$$\widetilde{f}(r) = \sum_{i}\sum_{j} L_i\, M_{ij}\, R_j = L^{\top} M R$$

This is a rank-2 tensor contraction: two vectors contracting with a matrix. The key insight is that the factorization of $\mathrm{eq}$ lets us separate the "row selection" ($L$) from the "column selection" ($R$).

This is why the matrix reshaping matters: a flat-vector evaluation requires touching all $n$ terms, but the matrix form can be computed in two steps. First compute $u = L^{\top} M$ (a length-$\sqrt{n}$ vector), then compute $\langle u, R\rangle$ (a single dot product). Each step involves only length-$\sqrt{n}$ vectors.
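The two-step evaluation can be checked with a small self-contained sketch in Python. The toy modulus `P` and helper names (`eq_vec`, `mle_flat`) are ours, not from any library:

```python
# Sketch: flat O(n) MLE evaluation vs. the two-step L^T M, <u, R> form,
# over a small toy prime field.
P = 97  # toy field modulus

def eq_vec(point):
    """Lagrange coefficients eq(b, point) for all Boolean b, index order MSB-first."""
    vec = [1]
    for x in point:  # tensor product, one variable at a time
        vec = [v * f % P for v in vec for f in (1 - x, x)]
    return vec

def mle_flat(evals, r):
    """Naive O(n) evaluation: sum_b f(b) * eq(b, r)."""
    return sum(f * e for f, e in zip(evals, eq_vec(r))) % P

mu = 4
evals = [(7 * k + 3) % P for k in range(2 ** mu)]  # arbitrary coefficients
r = [5, 12, 31, 8]                                  # evaluation point
rL, rR = r[:2], r[2:]
L, R = eq_vec(rL), eq_vec(rR)

# Matrix form: M[i][j] = evals[4*i + j] (row bits = rL half, column bits = rR half)
M = [evals[4 * i:4 * i + 4] for i in range(4)]
u = [sum(L[i] * M[i][j] for i in range(4)) % P for j in range(4)]  # u = L^T M
v = sum(u[j] * R[j] for j in range(4)) % P                         # <u, R>

assert v == mle_flat(evals, r)  # matches the flat O(n) evaluation
```

Each step touches only length-4 vectors here, while the flat sum touches all 16 terms.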

The Hyrax Commitment Scheme

Public Parameters

Random generators $g_1, \dots, g_{\sqrt{n}} \in \mathbb{G}$ and $h \in \mathbb{G}$ for blinding.

Commitment

Instead of committing to all $n$ coefficients at once (which would require $n$ generators), commit to each row separately:

$$C_i = h^{r_i}\prod_{j=1}^{\sqrt{n}} g_j^{M_{ij}}$$

where $r_i$ is a blinding factor for row $i$. The full commitment is the tuple of row commitments:

$$C = (C_1, C_2, \dots, C_{\sqrt{n}})$$

This requires $\sqrt{n}$ group elements, not one. The trade-off: larger commitment size for cheaper verification.

The Opening Protocol

To prove $\widetilde{f}(r) = v$ where $r = (r_L, r_R)$:

Step 1: Both parties compute Lagrange vectors

From the evaluation point $r = (r_L, r_R)$, both prover and verifier compute:

  • $L_i = \mathrm{eq}(i, r_L)$: row Lagrange coefficients from $r_L$
  • $R_j = \mathrm{eq}(j, r_R)$: column Lagrange coefficients from $r_R$

Step 2: Prover computes the projection vector

The prover computes $u = L^{\top} M$, the weighted column sums:

$$u_j = \sum_{i} L_i\, M_{ij}$$

Each $u_j$ is the $L$-weighted sum of column $j$.

Step 3: Verifier computes combined commitment (MSM #1)

The verifier combines the original row commitments (from the commitment phase) using $L$:

$$C^{*} = \prod_{i} C_i^{L_i}$$

This is computed by the verifier during opening, not as part of the initial commitment. The verifier doesn't have access to the matrix $M$, but they don't need it. By Pedersen's homomorphism, a linear combination of commitments is a commitment to the linear combination of the underlying vectors:

$$C^{*} = \prod_i \Big(h^{r_i}\prod_j g_j^{M_{ij}}\Big)^{L_i} = h^{\sum_i L_i r_i}\prod_j g_j^{\sum_i L_i M_{ij}}$$

The inner sum is exactly the projection vector $u$, where each $u_j = \sum_i L_i M_{ij}$ is defined in Step 2. So $C^{*}$ would equal a commitment to $u$ if the prover computed $u$ correctly. The verifier doesn't know $u$ yet, but they have computed what a commitment to the correct $u$ should be.

Step 4: Verify consistency (MSM #2)

The prover sends $u$ together with the combined blinding factor $r_u = \sum_i L_i r_i$. The verifier computes a commitment to the claimed $u$:

$$C_u = h^{r_u}\prod_j g_j^{u_j}$$

Check:

$$C^{*} \stackrel{?}{=} C_u$$

If $C^{*} = C_u$, the prover's $u$ is consistent with the committed matrix. The verifier derived $C^{*}$ from the row commitments (which bind the prover to $M$), so equality means the prover computed $u = L^{\top} M$ correctly.

Step 5: Verify the dot product

Check:

$$\langle u, R\rangle \stackrel{?}{=} v$$

Why this proves evaluation

The tensor contraction gives:

$$\widetilde{f}(r) = L^{\top} M R = (L^{\top} M)\cdot R = \langle u, R\rangle$$

So the dot product check verifies that the claimed value $v$ equals the polynomial evaluation.
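The commit/open flow of Steps 1–5 can be traced end-to-end in an insecure toy model that works "in the exponent": each group element is represented by its discrete log mod $q$, so $h^{r}\prod g_j^{m_j}$ becomes $r\cdot h + \sum m_j g_j \bmod q$. A real implementation would use elliptic-curve points; all names here are ours and purely illustrative:

```python
# Toy model of the Hyrax opening checks, tracking discrete logs only.
# Completely insecure -- it just illustrates the algebra.
import random

q = 2**61 - 1  # toy prime order
rng = random.Random(0)

mu = 4; side = 2 ** (mu // 2)                 # 4x4 matrix for a 4-variable MLE
g = [rng.randrange(q) for _ in range(side)]   # "generators" (their dlogs)
h = rng.randrange(q)

def eq_vec(point):
    vec = [1]
    for x in point:
        vec = [v * f % q for v in vec for f in (1 - x, x)]
    return vec

M = [[rng.randrange(q) for _ in range(side)] for _ in range(side)]
blind = [rng.randrange(q) for _ in range(side)]

# Commitment phase: one Pedersen commitment per row.
C = [(blind[i] * h + sum(M[i][j] * g[j] for j in range(side))) % q
     for i in range(side)]

# Opening at point r = (rL, rR).
rL, rR = [3, 14], [15, 9]
L, R = eq_vec(rL), eq_vec(rR)

# Prover: projection vector u = L^T M, combined blinding, claimed value.
u = [sum(L[i] * M[i][j] for i in range(side)) % q for j in range(side)]
r_u = sum(L[i] * blind[i] for i in range(side)) % q
v = sum(u[j] * R[j] for j in range(side)) % q

# Verifier: MSM #1 (combine row commitments) and MSM #2 (commit to u).
C_star = sum(L[i] * C[i] for i in range(side)) % q
C_u = (r_u * h + sum(u[j] * g[j] for j in range(side))) % q

assert C_star == C_u                                   # consistency check
assert sum(u[j] * R[j] for j in range(side)) % q == v  # dot product check
```

The first assertion is exactly the Pedersen-homomorphism argument from Step 3: combining row commitments with $L$ yields a commitment to $u$.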

Zero-knowledge variant

The prover doesn't send $u$ directly (which would leak information about $M$). Instead, both checks are combined into a ZK dot product protocol that proves consistency without revealing $u$.

Zero-Knowledge Dot Product Protocol

Hyrax uses a Schnorr-style protocol for proving $\langle u, R\rangle = v$, where $u$ is committed (with blinding) and $R$ is public.

Setup

Prover holds $u \in \mathbb{F}^{\sqrt{n}}$ with Pedersen commitment $C_u = h^{r_u}\prod_j g_j^{u_j}$ and blinding factor $r_u$.

Protocol

  1. Prover picks random masking vector $d \in \mathbb{F}^{\sqrt{n}}$ and blinding $r_d$
  2. Prover sends commitment $C_d = h^{r_d}\prod_j g_j^{d_j}$ and masked dot product $t = \langle d, R\rangle$
  3. Verifier sends random challenge $c \in \mathbb{F}$
  4. Prover responds with $z = c\,u + d$ and $z_r = c\,r_u + r_d$
  5. Verifier checks:
    • $h^{z_r}\prod_j g_j^{z_j} \stackrel{?}{=} C_u^{c}\cdot C_d$ (commitment consistency)
    • $\langle z, R\rangle \stackrel{?}{=} c\,v + t$ (dot product relation)

The first check ensures $z$ opens the linear combination $c\,u + d$ of the committed vector and the mask. The second check verifies that $\langle z, R\rangle = c\,v + t$, which holds for a random challenge $c$ only if $\langle u, R\rangle = v$.

Communication cost: $O(\sqrt{n})$ field elements (dominated by the response vector $z$).
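A toy transcript of this protocol, again modeling group elements by their discrete logs (insecure; names are ours, for illustration only):

```python
# Toy run of the Schnorr-style dot product protocol "in the exponent".
import random

q = 2**61 - 1
rng = random.Random(1)
n = 4
g = [rng.randrange(q) for _ in range(n)]
h = rng.randrange(q)

def commit(vec, blind):
    return (blind * h + sum(x * gi for x, gi in zip(vec, g))) % q

def dot(a, b):
    return sum(x * y for x, y in zip(a, b)) % q

# Prover's secret vector u, its commitment, the public R and the claim v.
u = [rng.randrange(q) for _ in range(n)]
r_u = rng.randrange(q)
C_u = commit(u, r_u)
R = [rng.randrange(q) for _ in range(n)]
v = dot(u, R)

# Round 1: masking vector d, its commitment, masked dot product t.
d = [rng.randrange(q) for _ in range(n)]
r_d = rng.randrange(q)
C_d = commit(d, r_d)
t = dot(d, R)

# Round 2: challenge c; response z = c*u + d, z_r = c*r_u + r_d.
c = rng.randrange(q)
z = [(c * ui + di) % q for ui, di in zip(u, d)]
z_r = (c * r_u + r_d) % q

# Verifier's two checks.
assert commit(z, z_r) == (c * C_u + C_d) % q  # commitment consistency
assert dot(z, R) == (c * v + t) % q           # dot product relation
```

The transcript reveals only $z = c\,u + d$, which is uniformly distributed because $d$ is a fresh random mask.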

Worked Example: Hyrax on a 4-Variable Polynomial

Let's trace through Hyrax for $\mu = 4$ variables, so $n = 16$ evaluations arranged as a $4 \times 4$ matrix.

Setup

Polynomial evaluations on $\{0,1\}^4$ arranged as a $4\times4$ matrix (row index = first 2 bits, column index = last 2 bits); we use the illustrative values

$$M = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{pmatrix}$$

Generators: $g_1, g_2, g_3, g_4$ and blinding generator $h$. Evaluation point: $r = (2, 3, 3, 2)$ (working over the reals for clarity).

Step 1: Commitment phase

Prover commits to each row (omitting blinding for clarity):

$$C_1 = g_1^{1} g_2^{2} g_3^{3} g_4^{4}, \quad C_2 = g_1^{5} g_2^{6} g_3^{7} g_4^{8}, \quad C_3 = g_1^{9} g_2^{10} g_3^{11} g_4^{12}, \quad C_4 = g_1^{13} g_2^{14} g_3^{15} g_4^{16}$$

The commitment is $(C_1, C_2, C_3, C_4)$: four group elements.

Step 2: Compute Lagrange vectors

Split $r = (r_L, r_R)$ where $r_L = (2, 3)$ and $r_R = (3, 2)$.

$$L = \big((1-2)(1-3),\; (1-2)\cdot 3,\; 2\cdot(1-3),\; 2\cdot 3\big) = (2, -3, -4, 6)$$

Similarly $R = (2, -4, -3, 6)$. Both prover and verifier compute these from the evaluation point.

Step 3: Compute projection vector

The prover computes $u = L^{\top} M$. Each $u_j$ is the $L$-weighted sum of column $j$:

$$u_1 = 2\cdot 1 - 3\cdot 5 - 4\cdot 9 + 6\cdot 13 = 29, \qquad u_2 = 30, \qquad u_3 = 31, \qquad u_4 = 32$$

So $u = (29, 30, 31, 32)$. The prover sends $u$ (in the non-ZK variant).

Step 4: Two MSMs

MSM #1: Combine row commitments with $L$:

$$C^{*} = C_1^{2}\, C_2^{-3}\, C_3^{-4}\, C_4^{6}$$

Expanding:

$$C^{*} = \prod_j g_j^{\,2 M_{1j} - 3 M_{2j} - 4 M_{3j} + 6 M_{4j}} = g_1^{29} g_2^{30} g_3^{31} g_4^{32}$$

MSM #2: Commit to $u$ using the generators:

$$C_u = g_1^{29} g_2^{30} g_3^{31} g_4^{32}$$

Check: $C^{*} = C_u$ ✓ (The projection vector is consistent with the committed matrix.)

Step 5: Dot product check

Check:

$$\langle u, R\rangle = 29\cdot 2 + 30\cdot(-4) + 31\cdot(-3) + 32\cdot 6 = 37 = v \;\checkmark$$

Verification cost

The verifier performed two MSMs of size 4 (not 16), plus field arithmetic for the dot product. Total: $O(\sqrt{n})$ group operations.
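The arithmetic of such a worked example can be checked mechanically. This sketch uses illustrative values (a $4\times4$ matrix with entries 1–16 and evaluation point $(2, 3, 3, 2)$, over the integers):

```python
# Numeric check of a 4-variable Hyrax trace with illustrative values.
M = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
rL, rR = (2, 3), (3, 2)

def eq_vec(point):
    """Lagrange coefficients eq(b, point), index bits MSB-first."""
    vec = [1]
    for x in point:
        vec = [v * f for v in vec for f in (1 - x, x)]
    return vec

L, R = eq_vec(rL), eq_vec(rR)
assert L == [2, -3, -4, 6] and R == [2, -4, -3, 6]

# Projection vector u = L^T M.
u = [sum(L[i] * M[i][j] for i in range(4)) for j in range(4)]
assert u == [29, 30, 31, 32]

# Claimed evaluation v = <u, R>.
v = sum(uj * rj for uj, rj in zip(u, R))
assert v == 37

# Direct flat evaluation over all 16 terms agrees.
flat = sum(L[i] * R[j] * M[i][j] for i in range(4) for j in range(4))
assert flat == v
```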

Using Bulletproofs for Logarithmic Proof Size

The basic Hyrax protocol has $O(\sqrt{n})$ communication because the prover sends $z$ (length $\sqrt{n}$) in the Schnorr-style dot product proof. This can be reduced to $O(\log n)$ by replacing Schnorr with Bulletproofs' inner product argument.

Bulletproofs (Bünz et al., 2018) proves $\langle u, R\rangle = v$ with proof size $O(\log m)$ but verifier time $O(m)$ (linear in the vector length $m$). When applied to Hyrax's dot product step (vectors of length $\sqrt{n}$):

  • Proof size: $O(\log n)$ for the opening; total communication remains $O(\sqrt{n})$ (the row commitments dominate)
  • Verifier time: $O(\sqrt{n})$ (the MSM for $C^{*}$ plus Bulletproofs verification on length-$\sqrt{n}$ vectors)

Generalized Trade-off: The Parameter

The Hyrax paper introduces a generalization parameter $\nu$ that controls a communication vs. computation trade-off. Instead of a square matrix, arrange the $n$ coefficients as $n^{1/\nu} \times n^{1-1/\nu}$:

  • $\nu = 2$ (square-root): $\sqrt{n}\times\sqrt{n}$ matrix, $O(\sqrt{n})$ commitment, $O(\sqrt{n})$ verification
  • $\nu = 3$: $n^{1/3}\times n^{2/3}$ matrix, $O(n^{1/3})$ commitment, $O(n^{2/3})$ verification
  • General $\nu$: $O(n^{1/\nu})$ commitment size, $O(n^{1-1/\nu})$ verification time

Higher $\nu$ reduces commitment size (fewer row commitments) at the cost of higher verification time (longer dot product vectors). Since the commitment is sent once but may be opened many times, the square-root case ($\nu = 2$) typically offers the best balance.

Hyrax: Properties and Trade-offs

Properties of Hyrax (square-root case):

  • Trusted setup: none (transparent)
  • Commitment size: $O(\sqrt{n})$ group elements
  • Proof size: $O(\log n)$ with Bulletproofs
  • Verification time: $O(\sqrt{n})$ group operations
  • Prover time: $O(n)$ for commitment, $O(n)$ per opening
  • Assumption: discrete log only
  • Quantum-safe: no

Comparison with IPA:

  • Commitment size: IPA $O(1)$; Hyrax $O(\sqrt{n})$
  • Verification time: IPA $O(n)$; Hyrax $O(\sqrt{n})$
  • Proof size: IPA $O(\log n)$; Hyrax $O(\log n)$ with Bulletproofs

Both IPA and Hyrax (with Bulletproofs) achieve logarithmic proof size, but Hyrax trades larger commitments for faster verification. This trade-off is worthwhile when:

  • The same polynomial is opened at multiple points (amortizes commitment cost)
  • Verification speed matters more than proof/commitment size
  • You want transparency without paying IPA's linear verification cost

Connection to Dory

Hyrax's square-root verification is an improvement over IPA's linear verification, but can we do better? Dory answers yes by combining Hyrax's matrix structure with pairings.

The key observation: Hyrax's verifier bottleneck is the MSM $C^{*} = \prod_i C_i^{L_i}$, which costs $O(\sqrt{n})$ group operations. Dory eliminates this by:

  1. Tier 2 commitment: Instead of storing $\sqrt{n}$ row commitments directly, Dory combines them into a single $\mathbb{G}_T$ element using pairings
  2. Lazy verification: The verifier never computes $C^{*}$ explicitly; instead, they track commitments in $\mathbb{G}_T$ and verify everything with a single final pairing check

Where Hyrax achieves $O(\sqrt{n})$ verification, Dory achieves $O(\log n)$. The cost is more complex cryptographic machinery (pairings, two-tier structure, SXDH assumption instead of plain discrete log).


Dory: Logarithmic Verification Without Trusted Setup

Hyrax reduces IPA's $O(n)$ verification to $O(\sqrt{n})$ by exploiting tensor structure. Dory (Lee, 2021) pushes further, to $O(\log n)$, by combining Hyrax's matrix arrangement with pairings.

The core idea is deferred verification. In IPA, the verifier recalculates the folded generators at each step, doing $O(n)$ work. Dory's verifier instead accumulates commitments and defers all verification to a single final pairing check. The algebraic structure of pairings makes this possible: the verifier can "absorb" all the folding challenges into target group elements, then verify everything at once.

Note: Dory is one of the more advanced commitment schemes covered in this book. The two-tier structure, pairing-based folding, and binding arguments involve subtle cryptographic reasoning. Don't worry if the details don't click on first reading; the key intuition is that pairings allow verification to happen "in the target group" without the verifier touching the original generators directly.

Two-Tier Commitment Structure

Dory commits to polynomials using AFGHO commitments (Abe et al.'s structure-preserving commitments) combined with Pedersen commitments.

Public parameters (SRS): Generated transparently by sampling random group elements (the notation $x \xleftarrow{\$} S$ means "$x$ is sampled uniformly at random from $S$"):

  • $\Gamma_1 \xleftarrow{\$} \mathbb{G}_1^{\sqrt{n}}$: commitment key for row commitments
  • $\Gamma_2 \xleftarrow{\$} \mathbb{G}_2^{\sqrt{n}}$: commitment key for the final commitment
  • $H_1 \xleftarrow{\$} \mathbb{G}_1$, $H_2 \xleftarrow{\$} \mathbb{G}_2$: blinding generators (for hiding/zero-knowledge)
  • $H_T = e(H_1, H_2)$: derived blinding generator in $\mathbb{G}_T$

All parameters are public. The prover's secrets are the blinding factors used in the two commitment tiers below ($r_i$ for the rows, $r_C$ for the final commitment).

Tier 1: Row Commitments ($V_i$)

Treat the polynomial coefficients as a $\sqrt{n}\times\sqrt{n}$ matrix $M$. For each row $i$, compute a Pedersen commitment:

$$V_i = H_1^{r_i}\prod_{j=1}^{\sqrt{n}} \Gamma_{1,j}^{M_{ij}}$$

where $r_i$ is a secret blinding factor. This produces $\sqrt{n}$ elements in $\mathbb{G}_1$.

Tier 2: Final Commitment ($C$)

Combine row commitments via pairing with generators $\Gamma_2$:

$$C = H_T^{r_C}\cdot\prod_{i=1}^{\sqrt{n}} e(V_i, \Gamma_{2,i})$$

where $r_C$ is a final blinding factor. This produces one $\mathbb{G}_T$ element (the commitment).

Why two tiers?

  • Tier 1 (rows): enables streaming — process the matrix row-by-row with $O(\sqrt{n})$ memory; the row commitments also serve as "hints" for efficient batch opening
  • Tier 2 ($C$): provides succinctness — one $\mathbb{G}_T$ element regardless of polynomial size; binding under the SXDH assumption in Type III pairings

The AFGHO commitment is hiding because $H_T^{r_C}$ is uniformly random in $\mathbb{G}_T$. Both tiers are additively homomorphic, which the evaluation protocol relies on.
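The two-tier structure can be sanity-checked with a simulated pairing: if we (insecurely) track discrete logs, then $e(g_1^{a}, g_2^{b})$ is just the product $ab$ in the exponent. A sketch under that toy model (all names ours):

```python
# Toy model of the two-tier commitment with a simulated pairing.
# G1/G2 elements are their dlogs; e(g1^a, g2^b) has "target dlog" a*b.
# Insecure; for checking the algebra only.
import random

q = 2**61 - 1
rng = random.Random(2)
side = 4

Gamma1 = [rng.randrange(q) for _ in range(side)]  # dlogs of the Gamma_1 keys
Gamma2 = [rng.randrange(q) for _ in range(side)]  # dlogs of the Gamma_2 keys
H1, H2 = rng.randrange(q), rng.randrange(q)
HT = H1 * H2 % q                                  # dlog of e(H1, H2)

def pair(a, b):  # e(g1^a, g2^b) -> target-group dlog a*b
    return a * b % q

M = [[rng.randrange(q) for _ in range(side)] for _ in range(side)]
r_row = [rng.randrange(q) for _ in range(side)]
r_C = rng.randrange(q)

# Tier 1: Pedersen row commitments in G1.
V = [(r_row[i] * H1 + sum(M[i][j] * Gamma1[j] for j in range(side))) % q
     for i in range(side)]

# Tier 2: fold the rows into one target-group element via pairings.
C = (r_C * HT + sum(pair(V[i], Gamma2[i]) for i in range(side))) % q

# Homomorphism sanity check: C is linear in the committed data.
# Doubling every row commitment and the final blinding doubles C.
V2 = [(2 * v) % q for v in V]
C2 = (2 * r_C * HT + sum(pair(v, G2) for v, G2 in zip(V2, Gamma2))) % q
assert C2 == 2 * C % q
```

The final assertion is the additive homomorphism both tiers provide, which the evaluation protocol exploits.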

From Coefficients to Matrix Form

Why matrices? A multilinear polynomial evaluation can be written as a vector-matrix-vector product. The evaluation point $x \in \mathbb{F}^{\mu}$ splits into:

  • Row coordinates $x_L = (x_1, \dots, x_{\mu/2})$: selects which row
  • Column coordinates $x_R = (x_{\mu/2+1}, \dots, x_{\mu})$: selects which column

This mirrors the coefficient arrangement: $M_{ij}$ holds the coefficient whose Boolean index concatenates the row bits $i$ and the column bits $j$.

Each half determines a vector of Lagrange coefficients via the equality polynomial:

$$L_i = \mathrm{eq}(i, x_L), \qquad R_j = \mathrm{eq}(j, x_R), \qquad \mathrm{eq}(b, x) = \prod_k \big(b_k x_k + (1-b_k)(1-x_k)\big)$$

where $b_k$ are the bits of index $b$. We use $L$ for the row (left) and $R$ for the column (right) coefficient vectors, distinct from the evaluation point $x$.

The evaluation becomes a bilinear form:

$$\widetilde{f}(x) = L^{\top} M R = \sum_{i,j} L_i\, M_{ij}\, R_j$$

Worked example ($\mu = 2$): For $x = (x_1, x_2)$, the matrix is $2\times 2$, $L = (1-x_1,\; x_1)$, $R = (1-x_2,\; x_2)$, and

$$\widetilde{f}(x) = (1-x_1,\; x_1)\begin{pmatrix} f(0,0) & f(0,1) \\ f(1,0) & f(1,1) \end{pmatrix}\begin{pmatrix} 1-x_2 \\ x_2 \end{pmatrix}$$

The Opening Protocol (Dory-Innerproduct)

The key reduction: Polynomial evaluation becomes an inner product. Define two vectors:

  • $v_1 = M R \in \mathbb{F}^{\sqrt{n}}$, the matrix times the column Lagrange vector. Each entry $(v_1)_i = \langle M_{i,\cdot}, R\rangle$ is row $i$ evaluated at the column coordinates.
  • $v_2 = L \in \mathbb{F}^{\sqrt{n}}$, the row Lagrange vector.

Then $\langle v_1, v_2\rangle = L^{\top} M R = \widetilde{f}(x)$. The inner product of these two vectors is the polynomial evaluation.

Goal: Prove $\langle v_1, v_2\rangle = y$ for committed vectors, which proves $\widetilde{f}(x) = y$ for the polynomial.

The Language: Dory proves membership in:

$$\mathcal{L} = \Big\{ (C, D_1, D_2) \;:\; C = \langle v_1, v_2\rangle,\;\; D_1 \text{ commits to } v_1,\;\; D_2 \text{ commits to } v_2 \Big\}$$

In words: $D_1$ commits to $v_1$ (using $\Gamma_1$), $D_2$ commits to $v_2$ (using $\Gamma_2$), and $C$ commits to their inner product. The protocol proves these three commitments are consistent, that the same vectors appear in all three.

How Verification Works (The Key Insight)

The question: The prover knows $M$, $v_1 = MR$, and $y$. The verifier can compute $L$ and $R$ from the evaluation point, but doesn't know $M$. How can the verifier check $\langle v_1, v_2\rangle = y$ without the matrix?

The answer: The verifier never needs $M$ directly. Instead:

Step 1: The verifier has the commitment $C$ (which encodes $M$ cryptographically) and the claimed evaluation $y$.

Step 2: The prover sends a VMV (vector-matrix-vector) message containing $C^{*}$, where:

  • $C^{*} = \prod_i V_i^{L_i}$ (row commitments combined with row Lagrange coefficients)

Recall $L_i = \mathrm{eq}(i, x_L)$ from earlier. This is the non-hiding variant; the row commitments already contain blinding from tier 1.

Step 3: First verification check. The verifier checks that the prover's claimed $C^{*}$ matches the row commitments:

$$C^{*} \stackrel{?}{=} \prod_i V_i^{L_i}$$

(In the full protocol this $O(\sqrt{n})$ MSM is not evaluated explicitly; its correctness is deferred into the folding argument below. We state it directly for clarity.)

Why this works: By Pedersen linearity:

$$\prod_i V_i^{L_i} = H_1^{\sum_i L_i r_i}\cdot\prod_j \Gamma_{1,j}^{\sum_i L_i M_{ij}}$$

so $C^{*}$ is a commitment to $u = L^{\top} M$. Note that $u = L^{\top} M$ is a row vector, while $v_1 = MR$ is a column vector. However, both represent "partial evaluations" of the matrix. The key point: $C^{*}$ is determined by the row commitments and the Lagrange coefficients. The check verifies that the prover's $C^{*}$ is consistent with the row commitments $V_i$. This binds the prover's intermediate computation to the committed polynomial.

Step 4: The verifier computes $E_y = H_T^{y}$ (not from the prover).

The verifier computes this themselves from the claimed evaluation $y$. This is how the claimed value enters the protocol: it's bound to the blinding generator $H_T$. If the prover lied about $y$, then $E_y$ won't match the prover's internal computation, and the final check will fail.

Step 5: Initialize verifier state:

  • $D_1 \leftarrow C^{*}$ (from the VMV message)
  • $C \leftarrow$ the polynomial commitment (the tier-2 commitment the verifier already has)
  • $D_2 \leftarrow$ the commitment to $v_2 = L$ (from the VMV message)
  • $E_y \leftarrow H_T^{y}$ as computed above

What remains to prove: The prover must demonstrate that $\langle v_1, v_2\rangle = y$. That is, the intermediate vector $v_1$ (committed implicitly via the consistency check) inner-producted with $v_2 = L$ yields the claimed evaluation. This is where Dory-Reduce takes over.

The Folding Protocol (Dory-Reduce)

Each round halves the problem size. Given vectors of length $m$, the round uses two challenges ($\beta$, then $\alpha$) and two prover messages:

First message (before any challenge):

  • $D_{1L} = \langle v_{1,L}, \Gamma_2'\rangle$, $D_{1R} = \langle v_{1,R}, \Gamma_2'\rangle$ (cross-pairings of $v_1$ halves with generator halves)
  • $D_{2L} = \langle \Gamma_1', v_{2,L}\rangle$, $D_{2R} = \langle \Gamma_1', v_{2,R}\rangle$ (cross-pairings of $v_2$ halves with generator halves)

Here $\langle a, b\rangle = \prod_k e(a_k, b_k)$ is the pairing inner product, and $\Gamma_1', \Gamma_2'$ are the length-$m/2$ generator vectors for the next round.

Verifier sends first challenge $\beta \xleftarrow{\$} \mathbb{F}^{\times}$

Prover updates vectors (componentwise):

$$v_1 \leftarrow v_1 \circ \Gamma_1^{\beta}, \qquad v_2 \leftarrow v_2 \circ \Gamma_2^{\beta^{-1}}$$

Second message (computed with -modified vectors):

  • , (cross inner products of modified vectors)

Verifier sends second challenge $\alpha \xleftarrow{\$} \mathbb{F}^{\times}$

Prover folds vectors (componentwise):

$$v_1' = v_{1,L}^{\alpha} \circ v_{1,R}, \qquad v_2' = v_{2,L}^{\alpha^{-1}} \circ v_{2,R}$$

Verifier updates accumulators (no pairing checks, just arithmetic in $\mathbb{G}_T$):

$$C \leftarrow C\cdot\chi\cdot D_2^{\beta}\cdot D_1^{\beta^{-1}}\cdot C_{+}^{\alpha}\cdot C_{-}^{\alpha^{-1}}$$

$$D_1 \leftarrow D_{1L}^{\alpha}\cdot D_{1R}\cdot \Delta_{1L}^{\alpha\beta}\cdot \Delta_{1R}^{\beta}, \qquad D_2 \leftarrow D_{2L}^{\alpha^{-1}}\cdot D_{2R}\cdot \Delta_{2L}^{\alpha^{-1}\beta^{-1}}\cdot \Delta_{2R}^{\beta^{-1}}$$

where $\chi = \langle\Gamma_1, \Gamma_2\rangle$ is a precomputed SRS value (the pairing of the generator prefixes at this round), and the $\Delta$ terms are likewise precomputed pairings of generator halves with the next-round generators.

Recurse with vectors of length $m/2$.

After $\log_2\sqrt{n} = \tfrac{1}{2}\log_2 n$ rounds, vectors have length 1.

Final pairing check: After all rounds:

$$e\big(v_1'\cdot\Gamma_1^{d},\; v_2'\cdot\Gamma_2^{d^{-1}}\big) \stackrel{?}{=} C'\cdot (D_2')^{d}\cdot (D_1')^{d^{-1}}\cdot\chi$$

where primes denote folded (length-1) values, $\Gamma_1, \Gamma_2$ here are the final single generators, and $d$ is a final challenge (blinding contributions are folded into the accumulators and omitted above).

The invariant: Throughout folding, the accumulators $(C, D_1, D_2)$ satisfy:

  • $C = \langle v_1, v_2\rangle$ (inner product commitment)
  • $D_1 = \langle v_1, \Gamma_2\rangle$, $D_2 = \langle \Gamma_1, v_2\rangle$ (commitments to each vector)

The verifier does no per-round pairing checks, only accumulator updates. Soundness comes from the final check verifying this invariant for the length-1 vectors.
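The heart of the invariant is the fold identity $\langle v_1', v_2'\rangle = C\cdot C_{+}^{\alpha}\cdot C_{-}^{\alpha^{-1}}$. This sketch checks the scalar-field analogue (written additively, ignoring blinding and the $\beta$ generator-mixing step; names are ours):

```python
# Scalar analogue of the alpha-fold in Dory-Reduce: folding halves the
# vectors while C <- C + alpha*C_plus + alpha^{-1}*C_minus preserves the
# invariant C = <v1, v2>.
import random

q = 2**61 - 1  # prime, so challenges are invertible mod q
rng = random.Random(3)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b)) % q

m = 8
v1 = [rng.randrange(q) for _ in range(m)]
v2 = [rng.randrange(q) for _ in range(m)]
C = dot(v1, v2)  # invariant: C = <v1, v2>

while m > 1:
    h = m // 2
    v1L, v1R = v1[:h], v1[h:]
    v2L, v2R = v2[:h], v2[h:]
    C_plus, C_minus = dot(v1L, v2R), dot(v1R, v2L)  # prover's cross terms
    alpha = rng.randrange(1, q)                      # verifier's challenge
    inv = pow(alpha, -1, q)                          # modular inverse
    v1 = [(alpha * a + b) % q for a, b in zip(v1L, v1R)]
    v2 = [(inv * a + b) % q for a, b in zip(v2L, v2R)]
    C = (C + alpha * C_plus + inv * C_minus) % q     # accumulator update
    m = h
    assert C == dot(v1, v2)  # invariant survives the fold

assert C == v1[0] * v2[0] % q  # length-1 base case: one final product check
```

In the real protocol the same bookkeeping happens in $\mathbb{G}_T$, so the verifier only does these accumulator updates, never an MSM over generators.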

Why Binding Works

The prover provides row commitments alongside the tier-2 commitment. Why can't the prover cheat by providing fake rows?

  1. Tier 2 constrains Tier 1: The tier-2 commitment $C$ is a deterministic function of the row commitments (and the blinding). Changing any $V_i$ changes $C$.

  2. Tier 1 constrains the data: Each $V_i$ is a Pedersen commitment. Under discrete log hardness, the prover cannot find two different row vectors that produce the same $V_i$.

  3. No trapdoor: The SRS generators are sampled randomly. Without their discrete logs, the prover is computationally bound to the original coefficients.

If the Dory proof verifies, then with overwhelming probability (under SXDH), the prover knew valid openings for all original commitments.

Dory: Properties and Trade-offs

  • Trusted setup: none (transparent)
  • Commitment size: $O(1)$ (one $\mathbb{G}_T$ element)
  • Proof size: $O(\log n)$ group elements
  • Verification time: $O(\log n)$ (the key improvement!)
  • Prover time: $O(n)$ for commitment, $O(n)$ per opening
  • Assumption: SXDH (on Type III pairings)
  • Quantum-safe: no (uses pairings)

Dory uses pairings (like KZG) but achieves transparency (like IPA). It gets logarithmic verification (better than IPA's linear) at the cost of more complex pairing machinery. This makes Dory particularly attractive for systems with many polynomial openings that can be batched (like Jolt's zkVM), where the amortized cost per opening becomes very small.

Implementations like Jolt store row commitments as "opening hints." This increases proof size by $O(\sqrt{n})$ group elements per polynomial but enables efficient batch opening without recomputing expensive MSMs. For Jolt's ~26 committed polynomials, this means ~26 KB of hints instead of ~800 bytes, but saves massive computation during batch verification.

Batching multiple polynomials exploits Pedersen's homomorphism. When batching polynomials $M^{(0)}, M^{(1)}, \dots$ with random linear combination coefficient $\rho$, we combine corresponding rows across all polynomials:

$$V_i^{\mathrm{joint}} = \prod_{t} \big(V_i^{(t)}\big)^{\rho^{t}}$$

Row $i$ of the combined matrix has coefficients $\sum_t \rho^{t} M^{(t)}_{ij}$. By linearity of Pedersen commitments, $V_i^{\mathrm{joint}}$ commits to exactly this combined row. The joint row commitments feed directly into Dory-Reduce, avoiding expensive MSM recomputations.
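A quick check of this batching identity in the toy exponent model (insecure; all names ours):

```python
# Combining row commitments with powers of rho commits to the
# rho-combined rows, by Pedersen linearity.
import random

q = 2**61 - 1
rng = random.Random(4)
side, polys = 4, 3

g = [rng.randrange(q) for _ in range(side)]
h = rng.randrange(q)

def commit(row, blind):
    return (blind * h + sum(x * gi for x, gi in zip(row, g))) % q

# Several committed matrices with per-row blinding.
Ms = [[[rng.randrange(q) for _ in range(side)] for _ in range(side)]
      for _ in range(polys)]
blinds = [[rng.randrange(q) for _ in range(side)] for _ in range(polys)]
V = [[commit(Ms[t][i], blinds[t][i]) for i in range(side)]
     for t in range(polys)]

rho = rng.randrange(q)
for i in range(side):
    # Combine commitments of row i across all polynomials...
    V_joint = sum(pow(rho, t, q) * V[t][i] for t in range(polys)) % q
    # ...and check it commits to the rho-combined row.
    row = [sum(pow(rho, t, q) * Ms[t][i][j] for t in range(polys)) % q
           for j in range(side)]
    blind = sum(pow(rho, t, q) * blinds[t][i] for t in range(polys)) % q
    assert V_joint == commit(row, blind)
```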

Why Dory Achieves Logarithmic Verification

Why does Dory achieve logarithmic verification while IPA requires linear time? IPA's linear cost comes from computing folded generators. Dory sidesteps this entirely: the verifier works with commitments in $\mathbb{G}_T$, updating a constant number of accumulators each round without touching the generators. The algebraic structure of pairings ($e(g^{a}, h^{b}) = e(g, h)^{ab}$) lets the verifier "absorb" folding challenges into $\mathbb{G}_T$ commitments. The precomputed $\chi$ values handle the generator contributions.

HyperKZG and Zeromorph: KZG for multilinear polynomials

Hyrax and Dory build new commitment schemes from scratch. A different strategy reuses the existing univariate KZG infrastructure (trusted setup, pairing verification, batching) and adds a reduction layer that converts multilinear evaluation claims into univariate ones. HyperKZG (Setty, 2023) and Zeromorph (Kohrita and Towa, 2023) both take this approach, with different trade-offs.

The shared problem

A multilinear polynomial over $\mu$ variables has $n = 2^{\mu}$ coefficients (its evaluations on the Boolean hypercube). The prover commits to $\widetilde{f}$ by interpreting these values as coefficients of a univariate polynomial $\hat{f}(X) = \sum_{k=0}^{n-1} f_k X^{k}$ and computing a standard KZG commitment $C = g^{\hat{f}(\tau)}$ using a powers-of-tau SRS.

Commitment is straightforward. The difficulty is opening: given a multilinear evaluation point $r \in \mathbb{F}^{\mu}$, proving that $\widetilde{f}(r) = v$ using only the univariate commitment $C$. The multilinear evaluation is not the same as the univariate evaluation $\hat{f}(z)$ at any single point $z$. A reduction is needed.

HyperKZG

HyperKZG (an adaptation of Gemini by Setty) reduces the multilinear evaluation to univariate claims via a protocol resembling sum-check. The prover sends $\mu$ auxiliary univariate polynomials, one per variable, that represent the intermediate "partial bindings" as each variable is fixed to the corresponding coordinate $r_i$. The verifier checks consistency between consecutive polynomials using KZG openings.

The protocol proceeds in $\mu$ rounds. In round $i$, the prover has a polynomial $\hat{f}_i$ of degree less than $2^{\mu-i}$ (starting from $\hat{f}_0 = \hat{f}$). The prover splits $\hat{f}_i$ into even and odd coefficients, commits to the folded polynomial, and computes

$$\hat{f}_{i+1}(X) = \hat{f}_i^{\mathrm{even}}(X) + r_{i+1}\cdot \hat{f}_i^{\mathrm{odd}}(X),$$

halving the degree. After $\mu$ rounds, $\hat{f}_{\mu}$ is a constant equal to $\widetilde{f}(r)$; each fold is checked via KZG openings at a random challenge point.

Verification requires $O(\log n)$ group operations and 3 pairings (via batching). Proof size is $O(\log n)$ group elements plus $O(\log n)$ field elements. The prover runs in $O(n)$ field operations.
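The fold at the heart of the reduction is easy to simulate. This sketch (toy field; names ours) repeatedly splits a univariate coefficient vector into even/odd halves, folds with the next coordinate of $r$, and confirms the surviving constant is the multilinear evaluation:

```python
# Gemini/HyperKZG-style even/odd folding of univariate coefficients.
P = 2**61 - 1  # toy field modulus

def mle_from_coeffs(coeffs, r):
    """Multilinear f(x) = sum_b c_b * prod_{k: b_k=1} x_{k+1}, where bit k
    (LSB first) of the coefficient index corresponds to variable x_{k+1}."""
    total = 0
    for idx, c in enumerate(coeffs):
        term = c
        for k, x in enumerate(r):
            if (idx >> k) & 1:
                term = term * x % P
        total = (total + term) % P
    return total

mu = 3
coeffs = [3, 1, 4, 1, 5, 9, 2, 6]  # arbitrary multilinear coefficients
r = [10, 20, 30]                    # evaluation point

f = coeffs[:]                       # univariate coefficients of f_0
for r_k in r:                       # one fold per variable
    even, odd = f[0::2], f[1::2]    # split by coefficient parity
    f = [(e + r_k * o) % P for e, o in zip(even, odd)]  # f_{i+1} = even + r_k*odd

assert len(f) == 1
assert f[0] == mle_from_coeffs(coeffs, r)
```

Each fold substitutes one variable, exactly mirroring how sum-check binds variables one at a time.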

Zeromorph

Zeromorph takes a more algebraic route. It uses the identity that for any multilinear $\widetilde{f}$ and evaluation point $r \in \mathbb{F}^{\mu}$:

$$\widetilde{f}(X_1, \dots, X_{\mu}) - \widetilde{f}(r) = \sum_{k=1}^{\mu} (X_k - r_k)\cdot \widetilde{q}_k(X_1, \dots, X_{k-1})$$

where each $\widetilde{q}_k$ is a multilinear quotient polynomial. This is the multilinear analogue of the univariate fact that $X - r$ divides $f(X) - f(r)$. Zeromorph maps this identity to the univariate setting: the prover commits to univariate encodings of the $\widetilde{q}_k$ and proves the divisibility relation holds under the encoding.

The result is a proof with $\mu + O(1)$ group elements (slightly smaller than HyperKZG's) and verification with $O(\log n)$ group operations plus 3 pairings. Zeromorph also achieves the zero-knowledge property more cheaply: adding ZK costs only $O(\log n)$ extra group operations, compared to an extra size-$n$ MSM in naive approaches.
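The quotient identity can be verified numerically. This sketch (toy field; names ours) builds each $\widetilde{q}_k$ as the $X_k$-difference of $f$ with the later variables fixed to $r$, and checks the identity at a random point:

```python
# Check the multilinear quotient identity
#   f(X) - f(r) = sum_k (X_k - r_k) * q_k(X_1..X_{k-1})
# where q_k = f(.., X_k=1, r_{>k}) - f(.., X_k=0, r_{>k}).
import random

P = 2**61 - 1
rng = random.Random(5)

mu = 3
evals = [rng.randrange(P) for _ in range(2 ** mu)]  # f on the Boolean cube

def f(point):
    """Multilinear extension via eq weights; bit k (LSB first) <-> x_{k+1}."""
    total = 0
    for idx, val in enumerate(evals):
        w = val
        for k, x in enumerate(point):
            w = w * (x if (idx >> k) & 1 else (1 - x)) % P
        total = (total + w) % P
    return total

r = [rng.randrange(P) for _ in range(mu)]
x = [rng.randrange(P) for _ in range(mu)]

def q(k, prefix):
    """q_k at X_1..X_{k-1} = prefix: the X_k-difference of f with the
    later variables fixed to r."""
    hi = f(list(prefix) + [1] + r[k + 1:])
    lo = f(list(prefix) + [0] + r[k + 1:])
    return (hi - lo) % P

lhs = (f(x) - f(r)) % P
rhs = sum((x[k] - r[k]) * q(k, x[:k]) for k in range(mu)) % P
assert lhs == rhs
```

The proof of the identity is a telescoping sum: fixing one more coordinate of $r$ at a time, each difference is affine in the variable being swapped and so divisible by $(X_k - r_k)$.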

Comparison and practical relevance

Per-scheme properties (HyperKZG / Zeromorph / Dory / Hyrax):

  • Setup: trusted (SRS) / trusted (SRS) / transparent / transparent
  • Commitment size: $O(1)$ / $O(1)$ / $O(1)$ / $O(\sqrt{n})$
  • Proof size: $O(\log n)$ / $O(\log n)$ / $O(\log n)$ / $O(\log n)$
  • Verification: $O(\log n)$ + 3 pairings / $O(\log n)$ + 3 pairings / $O(\log n)$ pairings / $O(\sqrt{n})$
  • Prover: $O(n)$ / $O(n)$ / $O(n)$ / $O(n)$
  • ZK overhead: moderate / low ($O(\log n)$ ops) / low / low

HyperKZG and Zeromorph occupy the same niche: constant-size commitments with logarithmic proofs, leveraging existing KZG infrastructure. Their main advantage over Dory and Hyrax is compatibility with the powers-of-tau ceremonies already deployed for Groth16 and PLONK. Systems that already have a trusted setup (Ethereum's KZG ceremony, for example) can adopt HyperKZG or Zeromorph with no additional setup cost.

In practice, Jolt uses HyperKZG as its default PCS (with Zeromorph as an alternative). Nova's implementation also supports HyperKZG. For systems that require transparency, Dory or hash-based schemes (Basefold, FRI) are preferred.

Recent work (Mercury, Samaritan) pushes further toward constant proof size while maintaining linear prover time, representing the current frontier of KZG-based multilinear commitments.