Refactor store mappings from per-validator to per-attestation-data keying #253
pablodeymo wants to merge 2 commits into main from
Conversation
…ying

Aligns the storage architecture with the leanSpec Python reference implementation. Three fundamental shifts:

1. Key inversion: `SignatureKey = (validator_id, data_root)` removed. `PayloadBuffer` now groups proofs by attestation data (via `data_root`), storing one proof entry that covers all participants instead of N duplicated entries per validator.
2. `AttestationDataByRoot` table eliminated. `AttestationData` is stored directly alongside proofs in the in-memory `PayloadBuffer` and gossip signature map.
3. Proof selection uses participation bits (`proof.participants[validator_id]`) instead of `HashMap` key membership.

Additional changes:

- Gossip signatures moved from RocksDB to in-memory (transient data consumed at interval 2 aggregation)
- RocksDB tables reduced from 8 to 6 (removed `GossipSignatures` and `AttestationDataByRoot`)
- `StoredSignature`, `StoredAggregatedPayload`, and `SignatureKey` types removed
- Added `Hash` derive to `AttestationData` and `Checkpoint` for use as map keys
🤖 Kimi Code Review

Overall Assessment: This is a well-structured refactoring that moves from per-validator key storage to content-addressed attestation data grouping, reducing storage overhead and simplifying the fork choice data flow. The in-memory gossip signature buffer is appropriate for transient data.

Issues Found:

1. Missing validation of

Automated review by Kimi (Moonshot AI) · kimi-k2.5 · custom prompt
🤖 Claude Code Review

PR Review: Refactor store mappings from per-validator to per-attestation-data keying

The motivation is sound and the alignment with leanSpec is a real improvement. The core changes are well-structured. A few findings below.

Correctness
```rust
let mut result = self.known_payloads.lock().unwrap().extract_latest_attestations();
// lock released here ← window opens
for (vid, data) in self.new_payloads.lock().unwrap().extract_latest_attestations() {
```

Between the two acquisitions,
```rust
fn drain(&mut self) -> Vec<(H256, AttestationData, AggregatedSignatureProof)> {
    self.order.clear();
    self.data.drain()
        .flat_map(|(root, entry)| {
        // HashMap::drain() is unordered
```

```rust
fn drain(&mut self) -> Vec<...> {
    let ordered: Vec<_> = self.order.drain(..)
        .filter_map(|root| self.data.remove(&root).map(|entry| (root, entry)))
        .flat_map(|(root, entry)| entry.proofs.into_iter().map(move |proof| (root, entry.data.clone(), proof)))
        .collect();
    ordered
}
```
```rust
for &(vid, data_root) in keys {
    if let Some(entry) = gossip.get_mut(&data_root) {
        entry.signatures.retain(|e| e.validator_id != vid);
        if entry.signatures.is_empty() {
            gossip.remove(&data_root); // ← safe under NLL, but only if 'entry' borrow ends
        }
    }
}
```

This compiles under NLL because

Performance
```rust
buf.data
    .iter()
    .map(|(root, entry)| (*root, (entry.data.clone(), entry.proofs.clone())))
    .collect()
```

Minor issues

Redundant bounds check in `vid < p.participants.len() && p.participants.get(vid).unwrap_or(false)`

The explicit

Metric semantic change not reflected in name

The name

What's good
Automated review by Claude (Anthropic) · sonnet · custom prompt
🤖 Codex Code Review

I did not run the test suite successfully here:

Automated review by OpenAI Codex · gpt-5.4 · custom prompt
Greptile Summary

This PR is a well-scoped storage refactor that eliminates the N-per-validator duplication of aggregated proofs in favour of a per-attestation-data model, aligning the codebase with the leanSpec reference. Two RocksDB tables (

Key changes:
Confidence Score: 5/5

Safe to merge — no P0/P1 issues found; all remaining findings are P2 style/improvement suggestions.

The refactor is logically correct: FIFO eviction still bounds memory, the greedy set-cover cannot select the same proof twice (proven by the

No files require special attention; crates/storage/src/store.rs contains the three P2 suggestions but no blocking issues.
| Filename | Overview |
|---|---|
| crates/storage/src/store.rs | Core refactor: PayloadBuffer redesigned to HashMap+VecDeque, gossip signatures moved in-memory, all insertion/extraction APIs updated to data_root-keyed model |
| crates/blockchain/src/store.rs | Consumer updates: build_block now checks proof.participants bitfields, select_aggregated_proofs uses proper greedy set-cover over all candidates, aggregate_committee_signatures stores one proof per data not per validator |
| crates/storage/src/types.rs | Deleted entirely: StoredAggregatedPayload and StoredSignature types removed as proofs are now stored directly and gossip is in-memory |
| crates/storage/src/api/tables.rs | Removed GossipSignatures and AttestationDataByRoot table variants; ALL_TABLES array reduced from 8 to 6 |
| crates/common/types/src/attestation.rs | Added PartialEq, Eq, Hash derives to AttestationData to support use as HashMap key; derives are consistent with each other |
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[Gossip Attestation\non_gossip_attestation] -->|insert_gossip_signature\ndata_root + att_data + sig| G[(GossipSignatureMap\nin-memory\nHashMap H256 → GossipDataEntry)]
    B[Gossip Aggregated\non_gossip_aggregated_attestation] -->|insert_new_aggregated_payload\ndata_root + att_data + proof| NP[(new_payloads\nPayloadBuffer\nHashMap H256 → PayloadEntry)]
    C[Block Received\non_block_core] -->|insert_known_aggregated_payloads_batch\ndata_root + att_data + proof| KP[(known_payloads\nPayloadBuffer\nHashMap H256 → PayloadEntry)]
    G -->|iter_gossip_signatures snapshot| AGG[aggregate_committee_signatures\nInterval 2]
    AGG -->|1 proof per data_root| NP
    AGG -->|delete processed keys| G
    NP -->|promote_new_aggregated_payloads\nInterval 4| KP
    KP -->|known_aggregated_payloads| PB[produce_block_with_signatures]
    KP -->|extract_latest_known_attestations| PB
    NP -->|extract_latest_new_attestations| US[update_safe_target\nInterval 3]
    KP -->|extract_latest_known_attestations| US
    PB -->|build_block\ncheck proof.participants bitfield| BB[Block with Attestations\n+ AggregatedSignatureProofs]
```
Comments Outside Diff (2)
crates/storage/src/store.rs, line 688-699 (link)

**`drain()` loses insertion order on promotion**

`HashMap::drain()` iterates in an arbitrary (non-deterministic) order. When `promote_new_aggregated_payloads` drains `new_payloads` and then pushes each entry into `known_payloads` via `push_batch`, the order in which entries land in `known_payloads.order` (the FIFO eviction deque) is determined by the HashMap's internal iteration rather than the original insertion sequence. This means eviction from `known_payloads` might remove entries that were actually inserted *after* some of the newly promoted ones.

For correctness this is acceptable — the cap still bounds memory — but it is a silent deviation from the true FIFO guarantee described in the doc-comment. If strict FIFO is required across promotions, the `order` deque could be drained and iterated in order before clearing `data`:

```rust
fn drain(&mut self) -> Vec<(H256, AttestationData, AggregatedSignatureProof)> {
    // Drain in insertion order by following `order`
    self.order
        .drain(..)
        .filter_map(|root| self.data.remove(&root).map(|entry| (root, entry)))
        .flat_map(|(root, entry)| {
            let data = entry.data;
            entry.proofs.into_iter().map(move |proof| (root, data.clone(), proof))
        })
        .collect()
}
```

(The above sketch captures the intent; adapt as needed.)
crates/storage/src/store.rs, line 643-661 (link)

**No deduplication of identical proofs on `push`**

`PayloadBuffer::push` for an already-known `data_root` unconditionally appends the proof without checking whether the same `AggregatedSignatureProof` (same participants + proof bytes) was already stored. If the same gossip aggregated attestation is relayed by multiple peers (a common P2P scenario), each call to `on_gossip_aggregated_attestation` → `insert_new_aggregated_payload` pushes an identical proof object, growing the `proofs` vector with duplicates.

The greedy set-cover in `select_aggregated_proofs` and the `has_proof` check in `build_block` are both correct in the presence of duplicates, but unnecessary work is done iterating redundant entries on each block production cycle.

Consider a lightweight guard — checking that no existing proof already covers the same participant set before pushing:

```rust
if let Some(entry) = self.data.get_mut(&data_root) {
    // Skip if an identical proof is already stored
    if !entry.proofs.iter().any(|p| p.participants == proof.participants) {
        entry.proofs.push(proof);
    }
} else {
    ...
}
```
```rust
/// Stores a gossip signature for later aggregation.
pub fn insert_gossip_signature(
    &mut self,
    data_root: H256,
    slot: u64,
    att_data: AttestationData,
    validator_id: u64,
    signature: ValidatorSignature,
) {
    let key = (validator_id, data_root);
    let encoded_key = encode_signature_key(&key);

    // Check if key already exists to avoid inflating the counter on upsert
    let is_new = self
        .backend
        .begin_read()
        .expect("read view")
        .get(Table::GossipSignatures, &encoded_key)
        .expect("get")
        .is_none();

    let stored = StoredSignature::new(slot, signature);
    let mut batch = self.backend.begin_write().expect("write batch");
    let entries = vec![(encoded_key, stored.as_ssz_bytes())];
    batch
        .put_batch(Table::GossipSignatures, entries)
        .expect("put signature");
    batch.commit().expect("commit");

    if is_new {
        self.gossip_signatures_count.fetch_add(1, Ordering::Relaxed);
        let mut gossip = self.gossip_signatures.lock().unwrap();
        let entry = gossip.entry(data_root).or_insert_with(|| GossipDataEntry {
            data: att_data,
            signatures: Vec::new(),
        });
        // Avoid duplicates for same validator
        if !entry
            .signatures
            .iter()
            .any(|e| e.validator_id == validator_id)
        {
            entry.signatures.push(GossipSignatureEntry {
                validator_id,
                signature,
            });
        }
    }
}
```
**Redundant `data_root` parameter — silent invariant required by callers**

`insert_gossip_signature` (and the analogous `insert_new_aggregated_payload` / `insert_known_aggregated_payload`) accept both `data_root: H256` and `att_data: AttestationData` as separate parameters, implicitly requiring callers to ensure `data_root == att_data.tree_hash_root()`. All current call sites correctly pre-compute `data_root` for reuse, but the API itself does not enforce this invariant.

A caller passing a mismatched pair would silently store `att_data` under the wrong key, corrupting later lookups. Since the `data_root` is always computable from `att_data`, accepting only `att_data` (and computing the root internally) would eliminate the risk:

```rust
pub fn insert_gossip_signature(
    &mut self,
    att_data: AttestationData,
    validator_id: u64,
    signature: ValidatorSignature,
) {
    let data_root = att_data.tree_hash_root();
    ...
}
```

The same pattern applies to `insert_new_aggregated_payload` and `insert_known_aggregated_payload`. Callers that already hold a pre-computed root could pass it through `PayloadBuffer::push` if the root computation is a measured hot path, but given that `tree_hash_root` is deterministic and cheap for `AttestationData`, the simplification is likely worth it.
**RocksDB column families must all be opened on startup**

RocksDB requires all existing column families to be listed when opening a database. Since this branch removed the GossipSignatures and AttestationDataByRoot tables, nodes upgrading from a previous version would crash on startup with "You have to open all column families". Open with the deprecated CFs included in the descriptors, then drop them immediately after opening.
Motivation

The storage layer used `SignatureKey = (validator_id, data_root)` as the primary key for both gossip signatures and aggregated payloads. This meant that a single aggregated proof covering 9 validators produced 9 duplicated entries (one per validator) — each storing a clone of the same ~3KB proof. Additionally, reconstructing per-validator attestation maps required a reverse-lookup table (AttestationDataByRoot) to go from `data_root → AttestationData`.

The leanSpec Python reference uses `AttestationData` directly as dict keys, groups proofs per attestation message, and uses `proof.participants` bitfields to determine which validators are covered. This PR aligns ethlambda with the leanSpec architecture.

Description
Three fundamental shifts

1. Key inversion: per-validator → per-attestation-data

The `PayloadBuffer` is redesigned from a flat `VecDeque<(SignatureKey, StoredAggregatedPayload)>` to a `HashMap<H256, PayloadEntry>` + `VecDeque<H256>` structure that groups proofs by attestation data while preserving FIFO eviction order. Each distinct attestation message stores the full `AttestationData` plus all `AggregatedSignatureProof`s covering that message.

2. AttestationDataByRoot table eliminated

The old scheme needed `AttestationDataByRoot` (RocksDB table mapping `H256 → AttestationData`) because `SignatureKey` only stored the hash. The new scheme stores `AttestationData` directly alongside proofs, so the reverse-lookup is no longer needed.

3. Proof selection uses participation bits

`extract_latest_attestations` now iterates proofs' participation bits directly instead of relying on key membership, matching the leanSpec `extract_attestations_from_aggregated_payloads` method.

Gossip signatures moved to in-memory

Gossip signatures are transient — collected via P2P and consumed at interval 2 aggregation (~every 4 seconds). Persisting them to RocksDB was unnecessary overhead. The new in-memory structure matches the leanSpec:
RocksDB tables reduced from 8 to 6
- GossipSignatures
- AttestationDataByRoot

Types removed
- `SignatureKey = (u64, H256)` — replaced by plain `H256` (data_root) keying
- `StoredAggregatedPayload` — replaced by `AggregatedSignatureProof` stored directly
- `StoredSignature` — replaced by `GossipSignatureEntry` (in-memory)
- `crates/storage/src/types.rs` — deleted

Trait derives added
- `AttestationData`: added `PartialEq`, `Eq`, `Hash` (needed for use as map key in leanSpec-aligned API)
- `Checkpoint`: added `Hash` (nested in `AttestationData`)

Changes by file
crates/storage/src/store.rs (core of the refactor)

New types:
- `PayloadEntry { data: AttestationData, proofs: Vec<AggregatedSignatureProof> }` — grouped proof storage
- `GossipSignatureEntry { validator_id: u64, signature: ValidatorSignature }` — individual gossip sig
- `GossipDataEntry { data: AttestationData, signatures: Vec<GossipSignatureEntry> }` — grouped gossip storage
- `GossipSignatureMap = HashMap<H256, GossipDataEntry>` — type alias for the gossip map
- `GossipSignatureSnapshot` — public type alias for the snapshot returned by `iter_gossip_signatures`

PayloadBuffer redesign:
- `data: HashMap<H256, PayloadEntry>` — grouped proofs by data_root
- `order: VecDeque<H256>` — insertion order for FIFO eviction
- `push()` — appends proof to existing entry or creates new entry (with FIFO eviction)
- `drain()` — flattens entries for promotion to known buffer
- `extract_latest_attestations()` — derives per-validator attestation map from participation bits (replaces the old key-membership + AttestationDataByRoot lookup)

Store API changes:
- Removed: `insert_attestation_data_by_root`, `insert_attestation_data_by_root_batch`, `get_attestation_data_by_root`
- Removed: `extract_latest_attestations(keys: Iterator<Item = SignatureKey>)`
- Removed: `iter_known_aggregated_payloads`, `iter_known_aggregated_payload_keys`, `iter_new_aggregated_payloads`, `iter_new_aggregated_payload_keys`
- Removed: `encode_signature_key`, `decode_signature_key`, `count_gossip_signatures`, `prune_by_slot`, `prune_attestation_data_by_root`
- Added: `extract_latest_known_attestations()`, `extract_latest_new_attestations()`, `extract_latest_all_attestations()`
- `known_aggregated_payloads()` — returns `HashMap<H256, (AttestationData, Vec<AggregatedSignatureProof>)>`
- `insert_*_aggregated_payload*` methods now take `(data_root, att_data, proof)` instead of `(SignatureKey, StoredAggregatedPayload)`
- `insert_gossip_signature` now takes `AttestationData` directly instead of `slot`
- `delete_gossip_signatures` now takes `&[(u64, H256)]` instead of `&[SignatureKey]`
- `iter_gossip_signatures` returns `GossipSignatureSnapshot` (grouped by attestation data)
- `prune_gossip_signatures` uses `HashMap::retain` (in-memory) instead of RocksDB iteration
- `gossip_signatures_count` counts in-memory entries instead of using `AtomicUsize`

Capacity constants adjusted:
- `AGGREGATED_PAYLOAD_CAP`: 4096 → 512 (now counts distinct messages, not per-validator entries)
- `NEW_PAYLOAD_CAP`: 512 → 64 (same reason)

crates/blockchain/src/store.rs (consumer updates)

- `update_safe_target`: replaced key-set merge + `extract_latest_attestations` with single `extract_latest_all_attestations()` call
- `aggregate_committee_signatures`: iterates pre-grouped gossip signatures (no more manual grouping by data_root + AttestationDataByRoot lookup); stores one proof per attestation data instead of N per validator
- `on_gossip_attestation`: passes `AttestationData` to `insert_gossip_signature` (no more `insert_attestation_data_by_root`)
- `on_gossip_aggregated_attestation`: stores one proof entry instead of N per-validator entries
- `on_block_core`: stores one proof per attestation in known buffer; counts participating validators for metrics
- `produce_block_with_signatures`: derives attestation map from already-cloned payloads (avoids redundant lock + iteration); uses new `known_aggregated_payloads()` API
- `build_block`: pre-computes `tree_hash_root` per attestation outside fixed-point loop; checks `proof.participants[validator_id]` instead of `HashMap::contains_key`
- `select_aggregated_proofs`: looks up proofs by `data_root` with greedy set-cover over all candidates (instead of per-validator lookup)
- `aggregate_attestations_by_data`: now reuses `aggregation_bits_from_validator_indices` instead of duplicating bitfield construction

crates/storage/src/api/tables.rs

- Removed the `GossipSignatures` and `AttestationDataByRoot` variants
- `ALL_TABLES` array: 8 → 6 entries

crates/storage/src/backend/rocksdb.rs

crates/storage/src/lib.rs

- Removed `SignatureKey`, `StoredAggregatedPayload`, `StoredSignature` and the `mod types` declaration

crates/common/types/src/attestation.rs

- Added `PartialEq, Eq, Hash` derives to `AttestationData`

crates/common/types/src/checkpoint.rs

- Added `Hash` derive to `Checkpoint`

crates/blockchain/tests/forkchoice_spectests.rs

- Updated to the `extract_latest_known_attestations()` / `extract_latest_new_attestations()` API

Storage efficiency improvement
With 9 validators all attesting to the same data:
How to Test
Note on existing RocksDB data
Nodes upgrading from a previous version will have orphaned `gossip_signatures` and `attestation_data_by_root` column families in their RocksDB database. These are inert (never read or written) and can be cleaned up in a future migration, or will be ignored by RocksDB if the column families are not opened. No data loss or startup failure is expected.