
feat: connect c4 and c6 [skip-line-limit] #1473

Draft
ctrlc03 wants to merge 5 commits into main from feat/c4-c6

Conversation


ctrlc03 (Collaborator) commented Mar 23, 2026

First, merge #1460.

Summary by CodeRabbit

  • New Features

    • Added end-to-end commitment cross-checks during proof aggregation and generation.
    • Exposed proof output extraction to enable named-field validation across circuits.
    • Cached intermediate commitments to enable stronger pre-verification and fault attribution.
  • Bug Fixes

    • Detects and handles commitment mismatches, marks offending parties, retries aggregation with remaining honest contributions.
    • Improves error reporting for mismatched proof commitments.


vercel bot commented Mar 23, 2026

The latest updates on your projects.

Project        Status  Actions           Updated (UTC)
crisp          Ready   Preview, Comment  Mar 24, 2026 5:18pm
enclave-docs   Ready   Preview, Comment  Mar 24, 2026 5:18pm



coderabbitai bot commented Mar 23, 2026

📝 Walkthrough

Implements C1 commitment extraction and verification across PK aggregation and ZK proof pipelines, adds output-extraction helpers, detects C1 commitment mismatches as a ZkError, handles mismatches by marking dishonest parties and re-aggregating, and adds C4→C6 cross-circuit commitment caching/checks.

Changes

  • ZK output extraction (crates/zk-helpers/src/circuits/output_layout.rs, crates/zk-helpers/src/circuits/mod.rs, crates/events/src/enclave_event/proof.rs)
    Add CircuitOutputLayout, per-circuit output descriptors, and Proof::extract_output() / CircuitName::output_layout() for extracting named public outputs from proof public_signals.
  • Proof request / error surface (crates/events/src/enclave_event/compute_request/zk.rs)
    Add PkAggregationProofRequest.c1_commitments, ThresholdShareDecryptionProofRequest.c5_pk_commitment, and ZkError::C1CommitmentMismatch { mismatched_indices }.
  • Multithread ZK pre-checks (crates/multithread/src/multithread.rs)
    Before generating the PK-aggregation proof, compute expected C1 commitments from the inputs and compare them against req.c1_commitments; on mismatch, return C1CommitmentMismatch with the offending indices.
  • Aggregator mismatch handling (crates/aggregator/src/publickey_aggregator.rs)
    Thread c1_signed_proofs and nodes_aligned through the GeneratingC5Proof state; add handle_c1_commitment_mismatch() to mark dishonest parties, emit SignedProofFailed, filter keyshares, re-aggregate the PK, recompute commitments, reset fold state, and restart C5 generation.
  • Share verification caching & cross-check (crates/zk-prover/src/actors/share_verification.rs)
    Add c4_cache and the methods cache_c4_commitments() and check_c6_party_against_c4(); perform C4→C6 pre-checks for ThresholdDecryptionProofs and emit SignedProofFailed for mismatches.
  • Minor callsite / computation changes (crates/keyshare/src/threshold_keyshare.rs, crates/zk-helpers/src/circuits/threshold/pk_generation/computation.rs)
    Include c5_pk_commitment: None when creating ThresholdShareDecryptionProofRequest; derive Bits::pk_bit from the preset/moduli instead of precomputed bounds.
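The output-extraction idea above can be sketched as a name-to-byte-range lookup over the flat public_signals buffer. This is a hypothetical reconstruction: the struct shape, field names, and the example layout are illustrative, not the real crates/zk-helpers definitions.

```rust
use std::collections::HashMap;

/// Illustrative stand-in for CircuitOutputLayout: each named public output
/// occupies a fixed byte range inside the circuit's public_signals buffer.
pub struct CircuitOutputLayout {
    // name -> (byte offset, byte length) within public_signals
    fields: HashMap<&'static str, (usize, usize)>,
}

impl CircuitOutputLayout {
    pub fn new(fields: &[(&'static str, usize, usize)]) -> Self {
        Self {
            fields: fields.iter().map(|&(n, off, len)| (n, (off, len))).collect(),
        }
    }

    /// Return the bytes of the named output, or None if the name is unknown
    /// or the buffer is too short to contain the declared range.
    pub fn extract<'a>(&self, name: &str, public_signals: &'a [u8]) -> Option<&'a [u8]> {
        let &(off, len) = self.fields.get(name)?;
        public_signals.get(off..off + len)
    }
}
```

A Proof::extract_output("pk_commitment") call would then resolve the circuit's layout and delegate to something like extract above.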

Sequence Diagram(s)

sequenceDiagram
    actor Node as Prover/Node
    participant Multi as Multithread (ZK)
    participant PkAgg as PublicKeyAggregator
    participant Verif as ShareVerificationActor

    Node->>Multi: PkAggregationProofRequest (includes c1_commitments)
    Multi->>Multi: Validate req.c1_commitments.len()
    Multi->>Multi: Compute expected c1 commitments from inputs
    alt All commitments match
        Multi->>Multi: Generate C5 proof
        Multi->>Node: Return C5 proof result
    else Some mismatches
        Multi->>Node: Return ZkError::C1CommitmentMismatch { mismatched_indices }
        Node->>PkAgg: ComputeRequestError(C1CommitmentMismatch)
        PkAgg->>PkAgg: handle_c1_commitment_mismatch(mismatched_indices)
        PkAgg->>PkAgg: Mark mismatched parties dishonest
        PkAgg->>Verif: Emit SignedProofFailed for mismatches
        PkAgg->>PkAgg: Filter dishonest keyshares, re-aggregate PK
        PkAgg->>PkAgg: Recompute c1_commitments and reset fold
        PkAgg->>Multi: Restart C5 proof generation with filtered set
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

  • #1453: Adds per-e3_id commitment caching and cross-circuit verification in ShareVerificationActor—matches this PR's C4→C6 caching/check changes.
  • #1450: Similar per-e3_id caching and pre-verification checks design—related to the cache/check pattern introduced here.

Possibly related PRs

  • #1443: Directly touches PK aggregation flow and C1-commitment handling; strongly related to the mismatch/error wiring added here.
  • #1372: Modifies ZK proof handling and verification pipeline components touched by these changes (verification actors, proof requests).
  • #1296: Introduced/expanded SignedProof types and SignedProofFailed events that this PR emits and consumes.

Suggested labels

ciphernode

Suggested reviewers

  • hmzakhalid
  • cedoor
  • 0xjei

Poem

🐰 I hopped through commits, sniffed each byte,

Pulled C1s from tails in the moonlight,
If some proofs stray and do not align,
I thump, I mark, then recompute just fine —
Honest hops win; mismatches we consign. ✨

🚥 Pre-merge checks | ✅ 3 passed
  • Description check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title 'feat: connect c4 and c6' directly describes the main objective of the pull request: establishing cross-check validation between C4 and C6 proofs, the central focus of changes across multiple files.
  • Docstring coverage: ✅ Passed. Docstring coverage is 95.83%, above the required threshold of 80.00%.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
crates/keyshare/src/threshold_keyshare.rs (1)

2135-2144: ⚠️ Potential issue | 🟠 Major

This still leaves the C5→C6 binding disabled.

ThresholdKeyshare is the main producer of ThresholdShareDecryptionProofRequest, but it always sends c5_pk_commitment: None. So the new field never carries the C5-certified commitment on the normal path, and C6 cannot actually be bound to what C5 attested. Please persist the C5 commitment when the aggregated key/proof arrives and thread it through here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/keyshare/src/threshold_keyshare.rs` around lines 2135 - 2144, The code
always sets c5_pk_commitment: None when building a
ThresholdShareDecryptionProofRequest in ThresholdKeyshare, so the C5→C6 binding
is never carried; persist the C5 commitment when the aggregated key/proof is
received (the place that processes the aggregated_pk_bytes / incoming aggregated
proof) and store it on the ThresholdKeyshare instance or state, then replace
c5_pk_commitment: None with that stored commitment when constructing
ThresholdShareDecryptionProofRequest (alongside aggregated_pk_bytes and
proof_aggregation_enabled) so the C5-certified commitment is threaded through to
C6.
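The suggested fix above can be sketched as follows, assuming the keyshare keeps the C5 commitment in its state once the aggregated key arrives. KeyshareState, on_aggregated_key, and build_request are hypothetical stand-ins for the real ThresholdKeyshare internals, not the actual API.

```rust
/// Simplified stand-in for the keyshare's persistent state.
#[derive(Default)]
struct KeyshareState {
    aggregated_pk_bytes: Option<Vec<u8>>,
    c5_pk_commitment: Option<Vec<u8>>,
}

/// Simplified stand-in for the real request type.
struct ThresholdShareDecryptionProofRequest {
    aggregated_pk_bytes: Vec<u8>,
    c5_pk_commitment: Option<Vec<u8>>,
}

impl KeyshareState {
    /// Called where the aggregated key/proof is processed: persist both the
    /// key bytes and the C5-certified commitment.
    fn on_aggregated_key(&mut self, pk_bytes: Vec<u8>, c5_commitment: Vec<u8>) {
        self.aggregated_pk_bytes = Some(pk_bytes);
        self.c5_pk_commitment = Some(c5_commitment);
    }

    /// Called where the C6 request is built: thread the stored commitment
    /// through instead of hard-coding None.
    fn build_request(&self) -> Option<ThresholdShareDecryptionProofRequest> {
        Some(ThresholdShareDecryptionProofRequest {
            aggregated_pk_bytes: self.aggregated_pk_bytes.clone()?,
            c5_pk_commitment: self.c5_pk_commitment.clone(),
        })
    }
}
```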
crates/aggregator/src/publickey_aggregator.rs (1)

394-407: ⚠️ Potential issue | 🔴 Critical

Critical: keyshare_bytes and c1_signed_proofs are misaligned in state.

At line 397, keyshare_bytes: honest_keyshares stores the original insertion-order vector, but c1_signed_proofs (line 398) was re-aligned at lines 357-369 to match the OrderedSet-ordered keyshare_bytes (line 356).

This creates an index mismatch: when handle_c1_commitment_mismatch receives a mismatched index, it will look up the wrong keyshare or proof.

The PkAggregationProofPending event published at line 380 uses keyshare_bytes.clone() (OrderedSet order), so the prover operates on different indices than what's stored in state.

🐛 Proposed fix
         self.state.try_mutate(&ec, |_| {
             Ok(PublicKeyAggregatorState::GeneratingC5Proof {
                 public_key: pubkey.clone(),
-                keyshare_bytes: honest_keyshares,
+                keyshare_bytes: keyshare_bytes.clone(),
                 c1_signed_proofs,
                 nodes: honest_nodes_set,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/aggregator/src/publickey_aggregator.rs` around lines 394 - 407, The
state stores keyshare_bytes and c1_signed_proofs out of sync: when you re-order
keyshares to match OrderedSet earlier (the vector used for the
PkAggregationProofPending event and c1_signed_proofs), you must save that same
ordered vector into PublicKeyAggregatorState::GeneratingC5Proof so indices
align; update the construction so keyshare_bytes is the re-ordered/aligned
vector (the same variable used to build c1_signed_proofs and the
PkAggregationProofPending payload) and ensure c1_signed_proofs is created from
that same aligned vector (so handle_c1_commitment_mismatch looks up matching
entries).
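The alignment fix described above boils down to filtering nodes and keyshares as pairs, by a shared index, so neither vector can drift out of step with the other. A minimal sketch with simplified placeholder types:

```rust
use std::collections::HashSet;

/// Filter out mismatched parties while keeping (node, keyshare) pairings
/// aligned: both output vectors share the same surviving order.
fn filter_mismatched(
    nodes: &[String],
    keyshares: &[Vec<u8>],
    mismatched_indices: &HashSet<usize>,
) -> (Vec<String>, Vec<Vec<u8>>) {
    nodes
        .iter()
        .zip(keyshares.iter())
        .enumerate()
        .filter(|(i, _)| !mismatched_indices.contains(i))
        .map(|(_, (n, k))| (n.clone(), k.clone()))
        .unzip()
}
```

Filtering the zipped pairs, rather than each collection independently (or through an order-changing OrderedSet), is what keeps index i meaning the same party everywhere.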
🧹 Nitpick comments (3)
crates/aggregator/src/publickey_aggregator.rs (3)

872-879: Consider adding threshold check before re-aggregation.

The code only verifies remaining_count > 0 before re-aggregating. If remaining_count <= threshold_m, the subsequent C5 proof will fail. While the comment notes the circuit enforces this, adding an early check would avoid wasted computation and provide a clearer error message.

Note: threshold_m is not currently available in GeneratingC5Proof state, so this would require carrying it forward from VerifyingC1.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/aggregator/src/publickey_aggregator.rs` around lines 872 - 879, The
code only checks remaining_count == 0 before re-aggregation; add an explicit
threshold check so re-aggregation is skipped if too few honest shares remain:
propagate threshold_m from VerifyingC1 into the GeneratingC5Proof state (carry
it through the state transition that creates GeneratingC5Proof), then before
using remaining_keyshares compare remaining_count against threshold_m (require
remaining_count > threshold_m) and return an Err with a clear message like
"insufficient honest parties after C1 filtering: remaining_count X <=
threshold_m Y" to avoid wasted C5 computation and give a precise failure reason.
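The early check proposed above is small; a minimal sketch, assuming threshold_m has been carried into the retry state (a threshold scheme needs more than threshold_m honest parties to decrypt):

```rust
/// Abort the retry path early when too few honest parties remain,
/// instead of letting the C5 proof fail later inside the circuit.
fn check_remaining(remaining_count: usize, threshold_m: usize) -> Result<(), String> {
    if remaining_count <= threshold_m {
        return Err(format!(
            "insufficient honest parties after C1 filtering: remaining {} <= threshold_m {}",
            remaining_count, threshold_m
        ));
    }
    Ok(())
}
```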

28-59: Verbose debug logging should use debug! or trace! level.

The tracing::info! calls on lines 36-45 and 48-52 emit potentially large hex dumps of public signals for every C1 proof processed. This is useful during development but may flood production logs.

Consider using debug! or trace! level, or gating behind a feature flag.

🔧 Suggested change
-            tracing::info!(
+            tracing::debug!(
                 "C1 proof[{}]: circuit={:?}, public_signals_len={}, signals_hex={}",
                 i,
                 proof.circuit,
                 proof.public_signals.len(),
                 proof.public_signals[..std::cmp::min(128, proof.public_signals.len())]
                     .iter()
                     .map(|b| format!("{:02x}", b))
                     .collect::<String>()
             );
             let commitment = proof.extract_output("pk_commitment");
             if let Some(ref c) = commitment {
-                tracing::info!(
+                tracing::debug!(
                     "C1 proof[{}]: extracted pk_commitment={}",
                     i,
                     c.iter().map(|b| format!("{:02x}", b)).collect::<String>()
                 );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/aggregator/src/publickey_aggregator.rs` around lines 28 - 59, The
current verbose tracing inside derive_c1_commitments uses tracing::info! for
large hex dumps and should be demoted to tracing::debug! or tracing::trace! (or
wrapped behind a feature flag) to avoid flooding production logs; update the two
tracing::info! calls and the tracing::warn!/info! that print extracted
pk_commitment to use debug!/trace! (or conditionally log when a debug feature or
env var is enabled) so the same diagnostic data is preserved for development but
not emitted at info level in production, locating the changes in the
derive_c1_commitments function around proof.public_signals logging and the
pk_commitment extraction log.

357-369: Keyshare-to-proof mapping assumes unique keyshares.

The HashMap keyed by ks.to_vec() will overwrite entries if two honest parties submitted identical keyshare bytes. While this would be a protocol violation (and OrderedSet already deduplicates keyshares), consider adding a debug assertion or log if duplicates are detected during the map build.

This is a defensive measure; the current logic is correct under normal operation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/aggregator/src/publickey_aggregator.rs` around lines 357 - 369, The
code builds ks_to_proof from honest_keyshares and c1_signed_proofs using
ks.to_vec() as key which will silently overwrite if duplicate keyshare bytes
exist; add a defensive check while constructing ks_to_proof (e.g., iterate over
honest_keyshares.zip(c1_signed_proofs) and on each insert check the previous
value) and emit a debug_assert or a log/error if an insert would replace an
existing entry to catch duplicate keyshares; reference the honest_keyshares,
c1_signed_proofs, keyshare_bytes, and ks_to_proof identifiers when implementing
this check so the later map lookup logic remains unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/aggregator/src/publickey_aggregator.rs`:
- Around line 859-867: The current use of OrderedSet (backed by BTreeSet)
reorders elements so filtering `nodes` and `keyshare_bytes` by the same
`mismatched_indices` misaligns entries; fix by preserving (node, keyshare)
pairings and filtering those pairs before creating any OrderedSet. Concretely,
change the logic around `nodes`, `keyshare_bytes`, `mismatched_indices` and
`honest_entries` to iterate `honest_entries` as (node, keyshare) tuples, filter
out entries whose index is in `mismatched_indices`, then collect the remaining
nodes and keyshares from the filtered pairs (or construct OrderedSet from the
paired sequence) so indices remain aligned; update code that references `nodes`
or `keyshare_bytes` to use the filtered paired results instead of independently
sorted OrderedSets.

In `@crates/events/src/enclave_event/compute_request/zk.rs`:
- Around line 64-65: The new required field c1_commitments in
PkAggregationProofRequest (which is nested in PkAggregationProofPending) changes
the on‑wire shape and will break mixed-version deserialization; make this a
versioned addition instead: add a version tag/enum for PkAggregationProofRequest
or wrap PkAggregationProofPending in a versioned envelope, keep c1_commitments
optional or behind a new version variant, and update
serialization/deserialization logic to accept old versions (treat missing
c1_commitments as empty/None) while emitting the new version when serializing;
ensure references to PkAggregationProofRequest, PkAggregationProofPending, and
c1_commitments are updated so older nodes can still decode requests.

In `@crates/multithread/src/multithread.rs`:
- Around line 348-398: Check and enforce that all three arrays have the expected
length before zipping: validate req.c1_commitments.len(),
req.keyshare_bytes.len() (or the actual keyshare field name), and expected.len()
(from PkAggInputs::compute) are equal to req.committee_h (or all equal to each
other) and return a Zk error if not; then replace the zip over
expected.iter().zip(req.c1_commitments.iter()) in the C1 commitment loop with an
index-based loop (e.g., for i in 0..expected.len()) that indexes into
expected[i], req.c1_commitments[i], and req.keyshare_bytes[i] so no element is
silently skipped and mismatched_indices (and the ComputeRequestError::new path)
correctly capture all discrepancies.

In `@crates/zk-prover/src/actors/share_verification.rs`:
- Around line 166-198: The cross-check loop marks some parties as failed
(pre_dishonest via check_c6_party_against_c4 and emit_signed_proof_failed) but
verify_proofs is still called with the original msg.share_proofs, allowing
already-failed parties to be dispatched to ZK; before calling verify_proofs for
VerificationKind::ThresholdDecryptionProofs, filter msg.share_proofs to exclude
any party whose sender_party_id is present in pre_dishonest (e.g. build a new
Vec filtered_share_proofs from msg.share_proofs where party.sender_party_id ∉
pre_dishonest) and pass that filtered_share_proofs (and the merged
pre_dishonest) into verify_proofs so only non-failed parties are validated and
sent to ZK (refer to check_c6_party_against_c4, emit_signed_proof_failed,
pre_dishonest, verify_proofs, and VerifyShareProofs).
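The last inline comment's filtering step can be sketched as a small pure function: drop every proof whose sender is already in pre_dishonest before dispatching verification. ShareProof and its field name are simplified placeholders for the real types.

```rust
use std::collections::HashSet;

#[derive(Clone, Debug, PartialEq)]
struct ShareProof {
    sender_party_id: u64,
}

/// Keep only proofs from parties not already marked dishonest, so failed
/// parties are never re-dispatched to ZK verification.
fn filter_share_proofs(proofs: &[ShareProof], pre_dishonest: &HashSet<u64>) -> Vec<ShareProof> {
    proofs
        .iter()
        .filter(|p| !pre_dishonest.contains(&p.sender_party_id))
        .cloned()
        .collect()
}
```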


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 5c4c1c5d-3afc-411d-bd29-fb23b92f99ae

📥 Commits

Reviewing files that changed from the base of the PR and between 12e2752 and bb073ae.

📒 Files selected for processing (9)
  • crates/aggregator/src/publickey_aggregator.rs
  • crates/events/src/enclave_event/compute_request/zk.rs
  • crates/events/src/enclave_event/proof.rs
  • crates/keyshare/src/threshold_keyshare.rs
  • crates/multithread/src/multithread.rs
  • crates/zk-helpers/src/circuits/mod.rs
  • crates/zk-helpers/src/circuits/output_layout.rs
  • crates/zk-helpers/src/circuits/threshold/pk_generation/computation.rs
  • crates/zk-prover/src/actors/share_verification.rs

@ctrlc03 ctrlc03 changed the title from "feat: connect c4 and c6" to "feat: connect c4 and c6 [skip-line-limit]" on Mar 23, 2026
@vercel vercel bot temporarily deployed to Preview – enclave-docs March 23, 2026 21:28 Inactive
@vercel vercel bot temporarily deployed to Preview – crisp March 23, 2026 21:28 Inactive
@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (2)
crates/multithread/src/multithread.rs (1)

348-361: ⚠️ Potential issue | 🔴 Critical

Validate keyshare_bytes before PkAggInputs::compute.

Only c1_commitments and expected are length-checked here. PkAggInputs::compute walks 0..committee_h and indexes pk0_shares, so a short or extra req.keyshare_bytes list still turns this path into a panic instead of a ZK InvalidParams error.

♻️ Proposed fix
     if req.c1_commitments.len() != req.committee_h {
         return Err(make_zk_error(
             &request,
             format!(
                 "c1_commitments length {} != committee_h {}",
                 req.c1_commitments.len(),
                 req.committee_h
             ),
         ));
     }
+    if req.keyshare_bytes.len() != req.committee_h {
+        return Err(make_zk_error(
+            &request,
+            format!(
+                "keyshare_bytes length {} != committee_h {}",
+                req.keyshare_bytes.len(),
+                req.committee_h
+            ),
+        ));
+    }

     let pk_agg_inputs = PkAggInputs::compute(req.params_preset.clone(), &circuit_data)
         .map_err(|e| make_zk_error(&request, format!("PkAggInputs::compute: {}", e)))?;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/multithread/src/multithread.rs` around lines 348 - 361, Validate
req.keyshare_bytes length against req.committee_h before calling
PkAggInputs::compute: check that req.keyshare_bytes.len() == req.committee_h and
if not return Err(make_zk_error(&request, format!("InvalidParams: keyshare_bytes
length {} != committee_h {}", req.keyshare_bytes.len(), req.committee_h))). This
prevents PkAggInputs::compute from indexing pk0_shares (and similar) and
panicking; ensure the validation occurs prior to invoking PkAggInputs::compute.
crates/aggregator/src/publickey_aggregator.rs (1)

353-380: ⚠️ Potential issue | 🔴 Critical

Persist C5 retry state in the same order you send to multithread.

PkAggregationProofPending.keyshare_bytes is built in OrderedSet order, but GeneratingC5Proof.keyshare_bytes stores honest_keyshares (submission order) and handle_c1_commitment_mismatch resolves returned indices against honest_party_ids (party-id order). If those orders differ, a mismatched_indices response removes/blames the wrong party and re-aggregates the wrong share set.

Also applies to: 405-410, 813-823

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/aggregator/src/publickey_aggregator.rs` around lines 353 - 380, The
retry state is being persisted using submission/party-id order (honest_keyshares
/ honest_nodes) while the multithreaded C5 work and
PkAggregationProofPending.keyshare_bytes use OrderedSet ordering, causing
mismatched blame/aggregation when orders differ; fix by normalizing everything
to the OrderedSet order: build and persist GeneratingC5Proof.keyshare_bytes and
any retry state from the OrderedSet-derived keyshare_bytes (use the same
keyshare_bytes, ks_to_proof and ks_to_node alignment logic shown) and modify
handle_c1_commitment_mismatch to interpret incoming indices against the
OrderedSet order (honest_keyshares_set / honest_nodes_set) so that
c1_signed_proofs, nodes_aligned, and persisted keyshare_bytes all share the same
ordering.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/aggregator/src/publickey_aggregator.rs`:
- Around line 891-898: After filtering out C1-mismatched parties
(remaining_keyshares), re-check the threshold using threshold_m: current code
only checks remaining_count == 0, which allows continuing with <= threshold_m
parties; change the check to ensure remaining_count > threshold_m (i.e., return
an error if remaining_count <= threshold_m) so we abort the retry path when
there are not enough honest parties to meet the decryption threshold; update the
error message in the branch that inspects remaining_keyshares to reference this
threshold and include context (e.g., mention
honest_keyshares/remaining_keyshares and threshold_m) so the failure is clear.
- Around line 28-59: derive_c1_commitments currently uses filter_map and drops
None results from proof.extract_output("pk_commitment"), shortening
c1_commitments and breaking the 1:1 contract with keyshare_bytes; change
derive_c1_commitments to preserve indices by returning Vec<Option<ArcBytes>>
(update its signature) and replace the filter_map with a plain map that returns
the extracted Option (still logging success/failure as before), and then update
any callers (e.g., handle_pk_aggregation_proof) to accept and handle the
Vec<Option<ArcBytes>> so missing pk_commitment values are detected and handled
rather than silently removed.

In `@crates/zk-prover/src/actors/share_verification.rs`:
- Around line 112-115: The c4_cache (HashMap<E3id, HashMap<u64, C4Commitments>>)
is never evicted, leaking per-E3 data for the lifetime of the process; add an
explicit eviction path that removes the c4_cache entry when an E3 completes or
fails. Specifically, in the actor handlers that finalize an E3 (e.g., the
ThresholdDecryptionProofs completion/error handler—look for
handle_threshold_decryption_proofs or similar) add code to call
c4_cache.remove(&e3id) (or clear the inner map) and also implement/handle an
E3RequestComplete message/hook (e.g., handle_e3_request_complete) to remove the
same entry; ensure the removal uses the E3id used to populate the cache and is
done in the same actor/context to avoid concurrency issues.
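The eviction path described above amounts to removing the per-E3 map inside the same actor that populated it. A hedged sketch — E3id, C4Commitments, and handle_e3_request_complete are placeholders for the real ShareVerificationActor types and hooks:

```rust
use std::collections::HashMap;

type E3id = u64;
type C4Commitments = Vec<u8>;

/// Simplified stand-in for the actor's cached state.
struct ShareVerification {
    /// Cached C4 commitments per party, keyed by E3 ID.
    c4_cache: HashMap<E3id, HashMap<u64, C4Commitments>>,
}

impl ShareVerification {
    /// On E3 completion or failure, drop the whole per-E3 inner map so
    /// cached commitments do not outlive the request.
    fn handle_e3_request_complete(&mut self, e3_id: &E3id) {
        self.c4_cache.remove(e3_id);
    }
}
```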


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: d3748817-14fe-464f-8b79-31e629fae1cf

📥 Commits

Reviewing files that changed from the base of the PR and between bb073ae and 3671c3f.

📒 Files selected for processing (3)
  • crates/aggregator/src/publickey_aggregator.rs
  • crates/multithread/src/multithread.rs
  • crates/zk-prover/src/actors/share_verification.rs

Comment on lines +28 to +59
/// Derive c1_commitments from signed proofs by extracting pk_commitment from each.
fn derive_c1_commitments(signed_proofs: &[Option<SignedProofPayload>]) -> Vec<ArcBytes> {
    signed_proofs
        .iter()
        .enumerate()
        .filter_map(|(i, opt)| {
            let sp = opt.as_ref()?;
            let proof = &sp.payload.proof;
            tracing::info!(
                "C1 proof[{}]: circuit={:?}, public_signals_len={}, signals_hex={}",
                i,
                proof.circuit,
                proof.public_signals.len(),
                proof.public_signals[..std::cmp::min(128, proof.public_signals.len())]
                    .iter()
                    .map(|b| format!("{:02x}", b))
                    .collect::<String>()
            );
            let commitment = proof.extract_output("pk_commitment");
            if let Some(ref c) = commitment {
                tracing::info!(
                    "C1 proof[{}]: extracted pk_commitment={}",
                    i,
                    c.iter().map(|b| format!("{:02x}", b)).collect::<String>()
                );
            } else {
                tracing::warn!("C1 proof[{}]: failed to extract pk_commitment", i);
            }
            commitment
        })
        .collect()
}

⚠️ Potential issue | 🟠 Major

Don't filter_map away missing pk_commitments.

If extract_output("pk_commitment") returns None, this shortens c1_commitments and breaks the 1:1 contract with keyshare_bytes. handle_pk_aggregation_proof rejects that as InvalidParams, and this actor only logs non-mismatch request errors, so one bad extraction can wedge C5 generation instead of attributing the party.
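The index-preserving variant suggested here swaps filter_map for a plain map, so a missing pk_commitment stays visible as None at its original index. SignedProof and its field are simplified placeholders for the real payload types.

```rust
/// Simplified stand-in for the signed-proof payload.
struct SignedProof {
    pk_commitment: Option<Vec<u8>>,
}

/// Return one entry per input slot: None marks a missing proof or a failed
/// extraction, preserving the 1:1 index contract with keyshare_bytes.
fn derive_c1_commitments(signed_proofs: &[Option<SignedProof>]) -> Vec<Option<Vec<u8>>> {
    signed_proofs
        .iter()
        .map(|opt| opt.as_ref().and_then(|sp| sp.pk_commitment.clone()))
        .collect()
}
```

Callers can then attribute the party at each None index instead of receiving a silently shortened list.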

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/aggregator/src/publickey_aggregator.rs` around lines 28 - 59,
derive_c1_commitments currently uses filter_map and drops None results from
proof.extract_output("pk_commitment"), shortening c1_commitments and breaking
the 1:1 contract with keyshare_bytes; change derive_c1_commitments to preserve
indices by returning Vec<Option<ArcBytes>> (update its signature) and replace
the filter_map with a plain map that returns the extracted Option (still logging
success/failure as before), and then update any callers (e.g.,
handle_pk_aggregation_proof) to accept and handle the Vec<Option<ArcBytes>> so
missing pk_commitment values are detected and handled rather than silently
removed.
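The fix described above can be sketched as follows. This is a minimal, self-contained illustration with stand-in types (`ArcBytes`, `Proof`, `SignedProofPayload` here are simplified placeholders, not the real repo definitions): a plain `map` preserves one slot per input, returning `None` where extraction fails, so the 1:1 correspondence with `keyshare_bytes` survives and callers can attribute the failing party.

```rust
// Stand-ins for the repo types; names are illustrative only.
type ArcBytes = Vec<u8>;

struct Proof {
    pk_commitment: Option<ArcBytes>,
}

impl Proof {
    // Stand-in for the real `extract_output("pk_commitment")`.
    fn extract_output(&self, _name: &str) -> Option<ArcBytes> {
        self.pk_commitment.clone()
    }
}

struct SignedProofPayload {
    proof: Proof,
}

/// Index-preserving variant: one entry per input slot, `None` where the
/// commitment could not be extracted, keeping the 1:1 contract with
/// `keyshare_bytes` intact.
fn derive_c1_commitments(
    signed_proofs: &[Option<SignedProofPayload>],
) -> Vec<Option<ArcBytes>> {
    signed_proofs
        .iter()
        .map(|opt| {
            let sp = opt.as_ref()?;
            sp.proof.extract_output("pk_commitment")
        })
        .collect()
}
```

Callers then see a `None` in the expected slot and can mark that party, rather than silently receiving a shorter vector.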

Comment on lines +891 to +898
// Check if enough honest parties remain
let remaining_count = remaining_keyshares.len();
// We need > 0 honest parties; the circuit enforces the threshold check
if remaining_count == 0 {
    return Err(anyhow::anyhow!(
        "No honest parties remaining after C1 commitment mismatch filtering"
    ));
}
⚠️ Potential issue | 🟠 Major

Re-check the threshold after removing mismatched C1 parties.

The initial path aborts when honest_keyshares.len() <= threshold_m, but the retry path only checks remaining_count == 0. After a C1 mismatch this can continue with fewer than threshold_m + 1 parties and publish a PK for a committee that can no longer satisfy threshold decryption.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/aggregator/src/publickey_aggregator.rs` around lines 891 - 898, after
filtering out C1-mismatched parties (remaining_keyshares), re-check the
threshold using threshold_m: current code only checks remaining_count == 0,
which allows continuing with <= threshold_m parties; change the check to ensure
remaining_count > threshold_m (i.e., return an error if remaining_count <=
threshold_m) so we abort the retry path when there are not enough honest parties
to meet the decryption threshold; update the error message in the branch that
inspects remaining_keyshares to reference this threshold and include context
(e.g., mention honest_keyshares/remaining_keyshares and threshold_m) so the
failure is clear.
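A minimal sketch of the guard the prompt describes, under the assumption that `threshold_m` carries the same meaning as in the initial aggregation path (the function name and error text here are illustrative, not the repo's actual code): the retry path must require strictly more than `threshold_m` honest parties, not merely a nonzero count.

```rust
/// Hypothetical retry-path guard: after dropping C1-mismatched parties,
/// require strictly more than `threshold_m` honest keyshares, mirroring the
/// initial aggregation path's `honest_keyshares.len() <= threshold_m` abort.
fn check_remaining_threshold(
    remaining_count: usize,
    threshold_m: usize,
) -> Result<(), String> {
    if remaining_count <= threshold_m {
        return Err(format!(
            "Only {remaining_count} honest parties remain after C1 commitment \
             mismatch filtering; need more than threshold_m = {threshold_m} \
             to satisfy threshold decryption"
        ));
    }
    Ok(())
}
```

With this check, a committee reduced to `threshold_m` or fewer parties aborts instead of publishing a PK it can no longer decrypt under.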

Comment on lines +112 to 115
    /// Cached C4 return commitments per party, keyed by E3 ID.
    /// Populated after C4 verification passes; consumed during C6 verification.
    c4_cache: HashMap<E3id, HashMap<u64, C4Commitments>>,
}
⚠️ Potential issue | 🟠 Major

Add an eviction path for c4_cache.

This actor is process-wide, but the new cache is only ever populated. Without cleanup on ThresholdDecryptionProofs completion/error (or an E3RequestComplete hook), every E3 leaves per-party commitments resident indefinitely.

Also applies to: 122-123, 419-447

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/zk-prover/src/actors/share_verification.rs` around lines 112 - 115,
the c4_cache (HashMap<E3id, HashMap<u64, C4Commitments>>) is never evicted,
leaking per-E3 data for the lifetime of the process; add an explicit eviction
path that removes the c4_cache entry when an E3 completes or fails.
Specifically, in the actor handlers that finalize an E3 (e.g., the
ThresholdDecryptionProofs completion/error handler—look for
handle_threshold_decryption_proofs or similar) add code to call
c4_cache.remove(&e3id) (or clear the inner map) and also implement/handle an
E3RequestComplete message/hook (e.g., handle_e3_request_complete) to remove the
same entry; ensure the removal uses the E3id used to populate the cache and is
done in the same actor/context to avoid concurrency issues.
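The eviction path can be sketched like this. The shapes below are simplified stand-ins (`E3id` modeled as a `String`, `C4Commitments` as an empty struct, `ShareVerifier` as a plain struct rather than the actual actor); the point is only that finalization handlers remove the per-E3 entry so it does not outlive the E3.

```rust
use std::collections::HashMap;

// Illustrative stand-ins for the actor's types.
type E3id = String;

#[derive(Debug, PartialEq)]
struct C4Commitments;

struct ShareVerifier {
    c4_cache: HashMap<E3id, HashMap<u64, C4Commitments>>,
}

impl ShareVerifier {
    /// Evict all cached C4 commitments for an E3 once it completes or fails,
    /// so per-party data does not stay resident for the process lifetime.
    /// Call this from the ThresholdDecryptionProofs completion/error handler
    /// and from an E3RequestComplete hook.
    fn evict_c4_cache(&mut self, e3_id: &E3id) {
        self.c4_cache.remove(e3_id);
    }
}
```

Because the removal happens inside the same actor that populates the cache, no extra synchronization is needed.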

Comment on lines +419 to +447
/// Cache C4 return commitments from decryption proofs for later C6 cross-check.
fn cache_c4_commitments(
    &mut self,
    e3_id: &E3id,
    parties: &[PartyShareDecryptionProofsToVerify],
) {
    let cache = self.c4_cache.entry(e3_id.clone()).or_default();
    for party in parties {
        let sk = party
            .signed_sk_decryption_proof
            .payload
            .proof
            .extract_output("commitment");
        let esm: Vec<ArcBytes> = party
            .signed_e_sm_decryption_proofs
            .iter()
            .filter_map(|p| p.payload.proof.extract_output("commitment"))
            .collect();

        if let Some(sk_commitment) = sk {
            cache.insert(
                party.sender_party_id,
                C4Commitments {
                    sk_commitment,
                    e_sm_commitments: esm,
                },
            );
        }
    }
⚠️ Potential issue | 🟠 Major

Don't let missing cached commitments pass the C4→C6 check.

cache_c4_commitments drops missing outputs, and check_c6_party_against_c4 returns None on cache miss, empty ESM cache, or short public_signals. In the caller, None means “keep verifying”, so the new cross-circuit binding is skipped precisely when the cached data is absent or malformed.

Also applies to: 457-493
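One way to make the check fail closed can be sketched as follows. The types and the single-field comparison are simplified stand-ins (the real check also covers the ESM commitments and public_signals layout); the key change is returning a `Result` whose `Err` names the missing data, instead of an `Option` where `None` silently skips the cross-check.

```rust
// Illustrative stand-in types; the real C4Commitments also carries
// e_sm_commitments.
type ArcBytes = Vec<u8>;

struct C4Commitments {
    sk_commitment: ArcBytes,
}

/// Fail-closed variant of the C4→C6 cross-check: a cache miss or a
/// mismatched commitment is an error attributed to the party, never a
/// silent "keep verifying".
fn check_c6_party_against_c4(
    cached: Option<&C4Commitments>,
    c6_expected_sk: &ArcBytes,
) -> Result<(), String> {
    let c4 = cached
        .ok_or_else(|| "no cached C4 commitments for this party".to_string())?;
    if &c4.sk_commitment != c6_expected_sk {
        return Err("C6 expected_sk does not match C4 return commitment".to_string());
    }
    Ok(())
}
```

Callers then treat any `Err` as a verification failure for that party, so absent or malformed cached data can no longer bypass the cross-circuit binding.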


Development

Successfully merging this pull request may close these issues.

Cross-circuit check: C4→C6 — verify C6 expected_sk/e_sm_commitment match C4 return values
