📝 Walkthrough

Implements C1 commitment extraction and verification across PK aggregation and ZK proof pipelines, adds output-extraction helpers, detects C1 commitment mismatches as a ZkError, handles mismatches by marking dishonest parties and re-aggregating, and adds C4→C6 cross-circuit commitment caching/checks.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Node as Prover/Node
    participant Multi as Multithread (ZK)
    participant PkAgg as PublicKeyAggregator
    participant Verif as ShareVerificationActor
    Node->>Multi: PkAggregationProofRequest (includes c1_commitments)
    Multi->>Multi: Validate req.c1_commitments.len()
    Multi->>Multi: Compute expected c1 commitments from inputs
    alt All commitments match
        Multi->>Multi: Generate C5 proof
        Multi->>Node: Return C5 proof result
    else Some mismatches
        Multi->>Node: Return ZkError::C1CommitmentMismatch { mismatched_indices }
        Node->>PkAgg: ComputeRequestError(C1CommitmentMismatch)
        PkAgg->>PkAgg: handle_c1_commitment_mismatch(mismatched_indices)
        PkAgg->>PkAgg: Mark mismatched parties dishonest
        PkAgg->>Verif: Emit SignedProofFailed for mismatches
        PkAgg->>PkAgg: Filter dishonest keyshares, re-aggregate PK
        PkAgg->>PkAgg: Recompute c1_commitments and reset fold
        PkAgg->>Multi: Restart C5 proof generation with filtered set
    end
```
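The commitment-validation step in the diagram can be sketched as a small pure helper. This is illustrative only — `find_c1_mismatches` and the plain `Vec<u8>` commitment type are assumptions, not the crate's actual API: the length check runs first, then an index-based comparison so no element is silently skipped.

```rust
// Illustrative sketch only: `find_c1_mismatches` and the plain `Vec<u8>`
// commitment type are assumptions, not the crate's actual API.
fn find_c1_mismatches(expected: &[Vec<u8>], received: &[Vec<u8>]) -> Result<Vec<usize>, String> {
    // Reject length mismatches up front instead of letting `zip` truncate.
    if expected.len() != received.len() {
        return Err(format!(
            "c1_commitments length {} != expected {}",
            received.len(),
            expected.len()
        ));
    }
    // Index-based loop so every discrepancy is captured.
    Ok((0..expected.len())
        .filter(|&i| expected[i] != received[i])
        .collect())
}

fn main() {
    let expected = vec![vec![1u8, 2], vec![3, 4], vec![5, 6]];
    let received = vec![vec![1u8, 2], vec![9, 9], vec![5, 6]];
    // Party 1's commitment differs, so only index 1 is reported.
    assert_eq!(find_c1_mismatches(&expected, &received).unwrap(), vec![1]);
}
```

Returning the full list of mismatched indices (rather than failing on the first) matches the diagram's flow, where the aggregator marks every mismatched party dishonest in one pass.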
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues
Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ Passed checks (3 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

📝 Generate docstrings
🧪 Generate unit tests (beta)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
crates/keyshare/src/threshold_keyshare.rs (1)
2135-2144: ⚠️ Potential issue | 🟠 Major

This still leaves the C5→C6 binding disabled.

`ThresholdKeyshare` is the main producer of `ThresholdShareDecryptionProofRequest`, but it always sends `c5_pk_commitment: None`. So the new field never carries the C5-certified commitment on the normal path, and C6 cannot actually be bound to what C5 attested. Please persist the C5 commitment when the aggregated key/proof arrives and thread it through here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/keyshare/src/threshold_keyshare.rs` around lines 2135 - 2144, The code always sets c5_pk_commitment: None when building a ThresholdShareDecryptionProofRequest in ThresholdKeyshare, so the C5→C6 binding is never carried; persist the C5 commitment when the aggregated key/proof is received (the place that processes the aggregated_pk_bytes / incoming aggregated proof) and store it on the ThresholdKeyshare instance or state, then replace c5_pk_commitment: None with that stored commitment when constructing ThresholdShareDecryptionProofRequest (alongside aggregated_pk_bytes and proof_aggregation_enabled) so the C5-certified commitment is threaded through to C6.

crates/aggregator/src/publickey_aggregator.rs (1)
394-407: ⚠️ Potential issue | 🔴 Critical

Critical: `keyshare_bytes` and `c1_signed_proofs` are misaligned in state.

At line 397, `keyshare_bytes: honest_keyshares` stores the original insertion-order vector, but `c1_signed_proofs` (line 398) was re-aligned at lines 357-369 to match the `OrderedSet`-ordered `keyshare_bytes` (line 356).

This creates an index mismatch: when `handle_c1_commitment_mismatch` receives a mismatched index, it will look up the wrong keyshare or proof.

The `PkAggregationProofPending` event published at line 380 uses `keyshare_bytes.clone()` (`OrderedSet` order), so the prover operates on different indices than what's stored in state.

🐛 Proposed fix

```diff
 self.state.try_mutate(&ec, |_| {
     Ok(PublicKeyAggregatorState::GeneratingC5Proof {
         public_key: pubkey.clone(),
-        keyshare_bytes: honest_keyshares,
+        keyshare_bytes: keyshare_bytes.clone(),
         c1_signed_proofs,
         nodes: honest_nodes_set,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/aggregator/src/publickey_aggregator.rs` around lines 394 - 407, The state stores keyshare_bytes and c1_signed_proofs out of sync: when you re-order keyshares to match OrderedSet earlier (the vector used for the PkAggregationProofPending event and c1_signed_proofs), you must save that same ordered vector into PublicKeyAggregatorState::GeneratingC5Proof so indices align; update the construction so keyshare_bytes is the re-ordered/aligned vector (the same variable used to build c1_signed_proofs and the PkAggregationProofPending payload) and ensure c1_signed_proofs is created from that same aligned vector (so handle_c1_commitment_mismatch looks up matching entries).
🧹 Nitpick comments (3)
crates/aggregator/src/publickey_aggregator.rs (3)
872-879: Consider adding threshold check before re-aggregation.

The code only verifies `remaining_count > 0` before re-aggregating. If `remaining_count <= threshold_m`, the subsequent C5 proof will fail. While the comment notes the circuit enforces this, adding an early check would avoid wasted computation and provide a clearer error message.

Note: `threshold_m` is not currently available in the `GeneratingC5Proof` state, so this would require carrying it forward from `VerifyingC1`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/aggregator/src/publickey_aggregator.rs` around lines 872 - 879, The code only checks remaining_count == 0 before re-aggregation; add an explicit threshold check so re-aggregation is skipped if too few honest shares remain: propagate threshold_m from VerifyingC1 into the GeneratingC5Proof state (carry it through the state transition that creates GeneratingC5Proof), then before using remaining_keyshares compare remaining_count against threshold_m (require remaining_count > threshold_m) and return an Err with a clear message like "insufficient honest parties after C1 filtering: remaining_count X <= threshold_m Y" to avoid wasted C5 computation and give a precise failure reason.
28-59: Verbose debug logging should use `debug!` or `trace!` level.

The `tracing::info!` calls on lines 36-45 and 48-52 emit potentially large hex dumps of public signals for every C1 proof processed. This is useful during development but may flood production logs.

Consider using `debug!` or `trace!` level, or gating behind a feature flag.

🔧 Suggested change

```diff
-            tracing::info!(
+            tracing::debug!(
                 "C1 proof[{}]: circuit={:?}, public_signals_len={}, signals_hex={}",
                 i,
                 proof.circuit,
                 proof.public_signals.len(),
                 proof.public_signals[..std::cmp::min(128, proof.public_signals.len())]
                     .iter()
                     .map(|b| format!("{:02x}", b))
                     .collect::<String>()
             );
             let commitment = proof.extract_output("pk_commitment");
             if let Some(ref c) = commitment {
-                tracing::info!(
+                tracing::debug!(
                     "C1 proof[{}]: extracted pk_commitment={}",
                     i,
                     c.iter().map(|b| format!("{:02x}", b)).collect::<String>()
                 );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/aggregator/src/publickey_aggregator.rs` around lines 28 - 59, The current verbose tracing inside derive_c1_commitments uses tracing::info! for large hex dumps and should be demoted to tracing::debug! or tracing::trace! (or wrapped behind a feature flag) to avoid flooding production logs; update the two tracing::info! calls and the tracing::warn!/info! that print extracted pk_commitment to use debug!/trace! (or conditionally log when a debug feature or env var is enabled) so the same diagnostic data is preserved for development but not emitted at info level in production, locating the changes in the derive_c1_commitments function around proof.public_signals logging and the pk_commitment extraction log.
357-369: Keyshare-to-proof mapping assumes unique keyshares.

The `HashMap` keyed by `ks.to_vec()` will overwrite entries if two honest parties submitted identical keyshare bytes. While this would be a protocol violation (and `OrderedSet` already deduplicates keyshares), consider adding a debug assertion or log if duplicates are detected during the map build.

This is a defensive measure; the current logic is correct under normal operation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/aggregator/src/publickey_aggregator.rs` around lines 357 - 369, The code builds ks_to_proof from honest_keyshares and c1_signed_proofs using ks.to_vec() as key which will silently overwrite if duplicate keyshare bytes exist; add a defensive check while constructing ks_to_proof (e.g., iterate over honest_keyshares.zip(c1_signed_proofs) and on each insert check the previous value) and emit a debug_assert or a log/error if an insert would replace an existing entry to catch duplicate keyshares; reference the honest_keyshares, c1_signed_proofs, keyshare_bytes, and ks_to_proof identifiers when implementing this check so the later map lookup logic remains unchanged.
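A minimal sketch of the suggested duplicate check, relying on the fact that `HashMap::insert` returns the previous value on collision. `build_ks_to_proof` and its simplified types (`Vec<u8>` keyshares, `u32` proof ids) are illustrative, not the crate's real signatures:

```rust
use std::collections::HashMap;

// Illustrative sketch, not the crate's real types: reject duplicate keyshare
// bytes while building the keyshare -> proof map, instead of silently
// overwriting. `HashMap::insert` returns the previous value on collision.
fn build_ks_to_proof(
    keyshares: &[Vec<u8>],
    proofs: &[u32],
) -> Result<HashMap<Vec<u8>, u32>, String> {
    let mut map = HashMap::new();
    for (ks, proof) in keyshares.iter().zip(proofs.iter()) {
        if map.insert(ks.clone(), *proof).is_some() {
            // Duplicate keyshares are a protocol violation; surface it loudly.
            return Err("duplicate keyshare bytes in honest set".into());
        }
    }
    Ok(map)
}

fn main() {
    let unique = build_ks_to_proof(&[vec![1u8], vec![2]], &[10, 11]);
    assert!(unique.is_ok());
    let dup = build_ks_to_proof(&[vec![1u8], vec![1]], &[10, 11]);
    assert!(dup.is_err());
}
```

A `debug_assert!` or error log in place of the hard `Err` would match the "defensive measure" framing above while leaving release behavior unchanged.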
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/aggregator/src/publickey_aggregator.rs`:
- Around line 859-867: The current use of OrderedSet (backed by BTreeSet)
reorders elements so filtering `nodes` and `keyshare_bytes` by the same
`mismatched_indices` misaligns entries; fix by preserving (node, keyshare)
pairings and filtering those pairs before creating any OrderedSet. Concretely,
change the logic around `nodes`, `keyshare_bytes`, `mismatched_indices` and
`honest_entries` to iterate `honest_entries` as (node, keyshare) tuples, filter
out entries whose index is in `mismatched_indices`, then collect the remaining
nodes and keyshares from the filtered pairs (or construct OrderedSet from the
paired sequence) so indices remain aligned; update code that references `nodes`
or `keyshare_bytes` to use the filtered paired results instead of independently
sorted OrderedSets.
In `@crates/events/src/enclave_event/compute_request/zk.rs`:
- Around line 64-65: The new required field c1_commitments in
PkAggregationProofRequest (which is nested in PkAggregationProofPending) changes
the on‑wire shape and will break mixed-version deserialization; make this a
versioned addition instead: add a version tag/enum for PkAggregationProofRequest
or wrap PkAggregationProofPending in a versioned envelope, keep c1_commitments
optional or behind a new version variant, and update
serialization/deserialization logic to accept old versions (treat missing
c1_commitments as empty/None) while emitting the new version when serializing;
ensure references to PkAggregationProofRequest, PkAggregationProofPending, and
c1_commitments are updated so older nodes can still decode requests.
In `@crates/multithread/src/multithread.rs`:
- Around line 348-398: Check and enforce that all three arrays have the expected
length before zipping: validate req.c1_commitments.len(),
req.keyshare_bytes.len() (or the actual keyshare field name), and expected.len()
(from PkAggInputs::compute) are equal to req.committee_h (or all equal to each
other) and return a Zk error if not; then replace the zip over
expected.iter().zip(req.c1_commitments.iter()) in the C1 commitment loop with an
index-based loop (e.g., for i in 0..expected.len()) that indexes into
expected[i], req.c1_commitments[i], and req.keyshare_bytes[i] so no element is
silently skipped and mismatched_indices (and the ComputeRequestError::new path)
correctly capture all discrepancies.
In `@crates/zk-prover/src/actors/share_verification.rs`:
- Around line 166-198: The cross-check loop marks some parties as failed
(pre_dishonest via check_c6_party_against_c4 and emit_signed_proof_failed) but
verify_proofs is still called with the original msg.share_proofs, allowing
already-failed parties to be dispatched to ZK; before calling verify_proofs for
VerificationKind::ThresholdDecryptionProofs, filter msg.share_proofs to exclude
any party whose sender_party_id is present in pre_dishonest (e.g. build a new
Vec filtered_share_proofs from msg.share_proofs where party.sender_party_id ∉
pre_dishonest) and pass that filtered_share_proofs (and the merged
pre_dishonest) into verify_proofs so only non-failed parties are validated and
sent to ZK (refer to check_c6_party_against_c4, emit_signed_proof_failed,
pre_dishonest, verify_proofs, and VerifyShareProofs).
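The share-proof pre-filter described in the prompt above can be sketched as follows, under the assumption that `sender_party_id` is a `u64`; the `PartyProofs` struct is a stand-in for the real proof bundle type:

```rust
use std::collections::HashSet;

// Hypothetical stand-in for the real proof bundle type; only the party id
// matters for this sketch.
#[derive(Debug, PartialEq)]
struct PartyProofs {
    sender_party_id: u64,
}

// Sketch of the suggested pre-filter: drop parties already marked dishonest
// before dispatching the rest to ZK verification.
fn filter_share_proofs(proofs: Vec<PartyProofs>, pre_dishonest: &HashSet<u64>) -> Vec<PartyProofs> {
    proofs
        .into_iter()
        .filter(|p| !pre_dishonest.contains(&p.sender_party_id))
        .collect()
}

fn main() {
    let pre_dishonest: HashSet<u64> = [2].into_iter().collect();
    let kept = filter_share_proofs(
        vec![
            PartyProofs { sender_party_id: 1 },
            PartyProofs { sender_party_id: 2 },
            PartyProofs { sender_party_id: 3 },
        ],
        &pre_dishonest,
    );
    assert_eq!(kept.len(), 2);
}
```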
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 5c4c1c5d-3afc-411d-bd29-fb23b92f99ae
📒 Files selected for processing (9)
crates/aggregator/src/publickey_aggregator.rs
crates/events/src/enclave_event/compute_request/zk.rs
crates/events/src/enclave_event/proof.rs
crates/keyshare/src/threshold_keyshare.rs
crates/multithread/src/multithread.rs
crates/zk-helpers/src/circuits/mod.rs
crates/zk-helpers/src/circuits/output_layout.rs
crates/zk-helpers/src/circuits/threshold/pk_generation/computation.rs
crates/zk-prover/src/actors/share_verification.rs
Actionable comments posted: 4
♻️ Duplicate comments (2)
crates/multithread/src/multithread.rs (1)
348-361: ⚠️ Potential issue | 🔴 Critical

Validate `keyshare_bytes` before `PkAggInputs::compute`.

Only `c1_commitments` and `expected` are length-checked here. `PkAggInputs::compute` walks `0..committee_h` and indexes `pk0_shares`, so a short or extra `req.keyshare_bytes` list still turns this path into a panic instead of a ZK `InvalidParams` error.

♻️ Proposed fix
```diff
 if req.c1_commitments.len() != req.committee_h {
     return Err(make_zk_error(
         &request,
         format!(
             "c1_commitments length {} != committee_h {}",
             req.c1_commitments.len(),
             req.committee_h
         ),
     ));
 }
+if req.keyshare_bytes.len() != req.committee_h {
+    return Err(make_zk_error(
+        &request,
+        format!(
+            "keyshare_bytes length {} != committee_h {}",
+            req.keyshare_bytes.len(),
+            req.committee_h
+        ),
+    ));
+}
 let pk_agg_inputs = PkAggInputs::compute(req.params_preset.clone(), &circuit_data)
     .map_err(|e| make_zk_error(&request, format!("PkAggInputs::compute: {}", e)))?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/multithread/src/multithread.rs` around lines 348 - 361, Validate req.keyshare_bytes length against req.committee_h before calling PkAggInputs::compute: check that req.keyshare_bytes.len() == req.committee_h and if not return Err(make_zk_error(&request, format!("InvalidParams: keyshare_bytes length {} != committee_h {}", req.keyshare_bytes.len(), req.committee_h))). This prevents PkAggInputs::compute from indexing pk0_shares (and similar) and panicking; ensure the validation occurs prior to invoking PkAggInputs::compute.

crates/aggregator/src/publickey_aggregator.rs (1)
353-380: ⚠️ Potential issue | 🔴 Critical

Persist C5 retry state in the same order you send to multithread.

`PkAggregationProofPending.keyshare_bytes` is built in `OrderedSet` order, but `GeneratingC5Proof.keyshare_bytes` stores `honest_keyshares` (submission order) and `handle_c1_commitment_mismatch` resolves returned indices against `honest_party_ids` (party-id order). If those orders differ, a `mismatched_indices` response removes/blames the wrong party and re-aggregates the wrong share set.

Also applies to: 405-410, 813-823
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/aggregator/src/publickey_aggregator.rs` around lines 353 - 380, The retry state is being persisted using submission/party-id order (honest_keyshares / honest_nodes) while the multithreaded C5 work and PkAggregationProofPending.keyshare_bytes use OrderedSet ordering, causing mismatched blame/aggregation when orders differ; fix by normalizing everything to the OrderedSet order: build and persist GeneratingC5Proof.keyshare_bytes and any retry state from the OrderedSet-derived keyshare_bytes (use the same keyshare_bytes, ks_to_proof and ks_to_node alignment logic shown) and modify handle_c1_commitment_mismatch to interpret incoming indices against the OrderedSet order (honest_keyshares_set / honest_nodes_set) so that c1_signed_proofs, nodes_aligned, and persisted keyshare_bytes all share the same ordering.
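The pair-preserving filtering this comment asks for can be sketched like so (types are illustrative; the real code also carries signed proofs alongside each keyshare): by filtering `(node, keyshare)` tuples together, removing mismatched indices cannot desynchronize the two collections.

```rust
// Illustrative sketch: filter (node, keyshare) pairs together so removing
// mismatched indices cannot desynchronize the two collections.
fn filter_honest(
    entries: Vec<(u64, Vec<u8>)>, // (node_id, keyshare_bytes), already index-aligned
    mismatched_indices: &[usize],
) -> (Vec<u64>, Vec<Vec<u8>>) {
    entries
        .into_iter()
        .enumerate()
        .filter(|(i, _)| !mismatched_indices.contains(i))
        .map(|(_, pair)| pair)
        .unzip()
}

fn main() {
    let entries = vec![(10, vec![1u8]), (11, vec![2]), (12, vec![3])];
    let (nodes, shares) = filter_honest(entries, &[1]);
    // Index 1 (node 11) is removed; nodes and shares stay aligned.
    assert_eq!(nodes, vec![10, 12]);
    assert_eq!(shares, vec![vec![1u8], vec![3]]);
}
```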
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d3748817-14fe-464f-8b79-31e629fae1cf
📒 Files selected for processing (3)
crates/aggregator/src/publickey_aggregator.rs
crates/multithread/src/multithread.rs
crates/zk-prover/src/actors/share_verification.rs
```rust
/// Derive c1_commitments from signed proofs by extracting pk_commitment from each.
fn derive_c1_commitments(signed_proofs: &[Option<SignedProofPayload>]) -> Vec<ArcBytes> {
    signed_proofs
        .iter()
        .enumerate()
        .filter_map(|(i, opt)| {
            let sp = opt.as_ref()?;
            let proof = &sp.payload.proof;
            tracing::info!(
                "C1 proof[{}]: circuit={:?}, public_signals_len={}, signals_hex={}",
                i,
                proof.circuit,
                proof.public_signals.len(),
                proof.public_signals[..std::cmp::min(128, proof.public_signals.len())]
                    .iter()
                    .map(|b| format!("{:02x}", b))
                    .collect::<String>()
            );
            let commitment = proof.extract_output("pk_commitment");
            if let Some(ref c) = commitment {
                tracing::info!(
                    "C1 proof[{}]: extracted pk_commitment={}",
                    i,
                    c.iter().map(|b| format!("{:02x}", b)).collect::<String>()
                );
            } else {
                tracing::warn!("C1 proof[{}]: failed to extract pk_commitment", i);
            }
            commitment
        })
        .collect()
}
```
Don't filter_map away missing pk_commitments.
If extract_output("pk_commitment") returns None, this shortens c1_commitments and breaks the 1:1 contract with keyshare_bytes. handle_pk_aggregation_proof rejects that as InvalidParams, and this actor only logs non-mismatch request errors, so one bad extraction can wedge C5 generation instead of attributing the party.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/aggregator/src/publickey_aggregator.rs` around lines 28 - 59,
derive_c1_commitments currently uses filter_map and drops None results from
proof.extract_output("pk_commitment"), shortening c1_commitments and breaking
the 1:1 contract with keyshare_bytes; change derive_c1_commitments to preserve
indices by returning Vec<Option<ArcBytes>> (update its signature) and replace
the filter_map with a plain map that returns the extracted Option (still logging
success/failure as before), and then update any callers (e.g.,
handle_pk_aggregation_proof) to accept and handle the Vec<Option<ArcBytes>> so
missing pk_commitment values are detected and handled rather than silently
removed.
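The index-preserving variant the prompt asks for can be sketched as below. Types are simplified: the `extract` closure stands in for `proof.extract_output("pk_commitment")`, and the input is reduced to optional byte blobs. The point is `map` instead of `filter_map`, so the output stays 1:1 with `keyshare_bytes` and a missing commitment surfaces as `None` at the right index.

```rust
// Sketch of the index-preserving variant (simplified types): `map` instead of
// `filter_map`, so the output stays 1:1 with the input and a missing
// commitment surfaces as `None` at the right index. `extract` stands in for
// `proof.extract_output("pk_commitment")`.
fn derive_c1_commitments<F>(proofs: &[Option<Vec<u8>>], extract: F) -> Vec<Option<Vec<u8>>>
where
    F: Fn(&[u8]) -> Option<Vec<u8>>,
{
    proofs
        .iter()
        .map(|opt| opt.as_ref().and_then(|p| extract(p.as_slice())))
        .collect()
}

fn main() {
    // Treat an empty proof as "extraction failed".
    let extract = |p: &[u8]| if p.is_empty() { None } else { Some(p.to_vec()) };
    let out = derive_c1_commitments(&[Some(vec![1u8]), None, Some(vec![])], extract);
    // Length is preserved even though two extractions failed.
    assert_eq!(out, vec![Some(vec![1u8]), None, None]);
}
```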
```rust
        // Check if enough honest parties remain
        let remaining_count = remaining_keyshares.len();
        // We need > 0 honest parties; the circuit enforces the threshold check
        if remaining_count == 0 {
            return Err(anyhow::anyhow!(
                "No honest parties remaining after C1 commitment mismatch filtering"
            ));
        }
```
Re-check the threshold after removing mismatched C1 parties.
The initial path aborts when honest_keyshares.len() <= threshold_m, but the retry path only checks remaining_count == 0. After a C1 mismatch this can continue with fewer than threshold_m + 1 parties and publish a PK for a committee that can no longer satisfy threshold decryption.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/aggregator/src/publickey_aggregator.rs` around lines 891 - 898, After
filtering out C1-mismatched parties (remaining_keyshares), re-check the
threshold using threshold_m: current code only checks remaining_count == 0,
which allows continuing with <= threshold_m parties; change the check to ensure
remaining_count > threshold_m (i.e., return an error if remaining_count <=
threshold_m) so we abort the retry path when there are not enough honest parties
to meet the decryption threshold; update the error message in the branch that
inspects remaining_keyshares to reference this threshold and include context
(e.g., mention honest_keyshares/remaining_keyshares and threshold_m) so the
failure is clear.
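The suggested guard can be sketched as a small helper, assuming `threshold_m` has been carried into the retry state (per the comment, it is not there today); the function name is illustrative:

```rust
// Sketch of the suggested retry-path guard; assumes `threshold_m` has been
// carried into the retry state. Name and signature are illustrative.
fn check_retry_threshold(remaining_count: usize, threshold_m: usize) -> Result<(), String> {
    // Threshold decryption needs strictly more than threshold_m honest parties.
    if remaining_count <= threshold_m {
        return Err(format!(
            "insufficient honest parties after C1 filtering: remaining_count {} <= threshold_m {}",
            remaining_count, threshold_m
        ));
    }
    Ok(())
}

fn main() {
    assert!(check_retry_threshold(3, 2).is_ok());
    assert!(check_retry_threshold(2, 2).is_err());
    assert!(check_retry_threshold(0, 2).is_err());
}
```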
```rust
    /// Cached C4 return commitments per party, keyed by E3 ID.
    /// Populated after C4 verification passes; consumed during C6 verification.
    c4_cache: HashMap<E3id, HashMap<u64, C4Commitments>>,
}
```
Add an eviction path for c4_cache.
This actor is process-wide, but the new cache is only ever populated. Without cleanup on ThresholdDecryptionProofs completion/error (or an E3RequestComplete hook), every E3 leaves per-party commitments resident indefinitely.
Also applies to: 122-123, 419-447
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/zk-prover/src/actors/share_verification.rs` around lines 112 - 115,
The c4_cache (HashMap<E3id, HashMap<u64, C4Commitments>>) is never evicted,
leaking per-E3 data for the lifetime of the process; add an explicit eviction
path that removes the c4_cache entry when an E3 completes or fails.
Specifically, in the actor handlers that finalize an E3 (e.g., the
ThresholdDecryptionProofs completion/error handler—look for
handle_threshold_decryption_proofs or similar) add code to call
c4_cache.remove(&e3id) (or clear the inner map) and also implement/handle an
E3RequestComplete message/hook (e.g., handle_e3_request_complete) to remove the
same entry; ensure the removal uses the E3id used to populate the cache and is
done in the same actor/context to avoid concurrency issues.
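The eviction hook described above can be sketched as follows; the `String` key stands in for the real `E3id` type, `Vec<u8>` for `C4Commitments`, and `ShareVerifier`/`handle_e3_request_complete` are illustrative names:

```rust
use std::collections::HashMap;

// Sketch of the suggested eviction hook; the `String` key stands in for the
// real `E3id` type and `Vec<u8>` for `C4Commitments`.
struct ShareVerifier {
    c4_cache: HashMap<String, HashMap<u64, Vec<u8>>>,
}

impl ShareVerifier {
    // Called when an E3 completes or fails: drop all cached C4 commitments
    // for that E3 in one step so the cache cannot grow without bound.
    fn handle_e3_request_complete(&mut self, e3_id: &str) {
        self.c4_cache.remove(e3_id);
    }
}

fn main() {
    let mut actor = ShareVerifier { c4_cache: HashMap::new() };
    actor
        .c4_cache
        .entry("e3-1".to_string())
        .or_default()
        .insert(1, vec![0xab]);
    actor.handle_e3_request_complete("e3-1");
    assert!(actor.c4_cache.is_empty());
}
```

Removing the whole per-E3 entry (rather than clearing the inner map) also frees the outer key, so long-running processes see no growth from completed E3s.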
```rust
    /// Cache C4 return commitments from decryption proofs for later C6 cross-check.
    fn cache_c4_commitments(
        &mut self,
        e3_id: &E3id,
        parties: &[PartyShareDecryptionProofsToVerify],
    ) {
        let cache = self.c4_cache.entry(e3_id.clone()).or_default();
        for party in parties {
            let sk = party
                .signed_sk_decryption_proof
                .payload
                .proof
                .extract_output("commitment");
            let esm: Vec<ArcBytes> = party
                .signed_e_sm_decryption_proofs
                .iter()
                .filter_map(|p| p.payload.proof.extract_output("commitment"))
                .collect();

            if let Some(sk_commitment) = sk {
                cache.insert(
                    party.sender_party_id,
                    C4Commitments {
                        sk_commitment,
                        e_sm_commitments: esm,
                    },
                );
            }
        }
```
Don't let missing cached commitments pass the C4→C6 check.
`cache_c4_commitments` drops missing outputs, and `check_c6_party_against_c4` returns `None` on cache miss, empty ESM cache, or short `public_signals`. In the caller, `None` means “keep verifying”, so the new cross-circuit binding is skipped precisely when the cached data is absent or malformed.
Also applies to: 457-493
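A fail-closed variant can be sketched like this (simplified to one commitment per party): returning `Result` instead of `Option` makes "no cached data" an explicit verification failure for that party rather than a skipped cross-check. The function name mirrors the review's `check_c6_party_against_c4` but the signature is an assumption:

```rust
// Sketch of a fail-closed C4->C6 cross-check (simplified to one commitment
// per party): `Result` instead of `Option` makes a cache miss an explicit
// verification failure rather than a skipped check.
fn check_c6_against_c4(cached: Option<&[u8]>, c6_commitment: &[u8]) -> Result<(), String> {
    match cached {
        None => Err("no cached C4 commitment for party".into()),
        Some(c4) if c4 != c6_commitment => Err("C4/C6 commitment mismatch".into()),
        Some(_) => Ok(()),
    }
}

fn main() {
    // A cache miss now fails the party instead of silently passing.
    assert!(check_c6_against_c4(None, &[1]).is_err());
    assert!(check_c6_against_c4(Some(&[1]), &[1]).is_ok());
    assert!(check_c6_against_c4(Some(&[2]), &[1]).is_err());
}
```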
first merge #1460
Summary by CodeRabbit
New Features
Bug Fixes