Extend performance contract with custom measurements #6282
Conversation
Thank you for making this first PR
| "common/nym-common", | ||
| "common/config", | ||
| "common/cosmwasm-smart-contracts/coconut-dkg", | ||
| "contracts/performance", |
don't add it to the main workspace - all contracts have their own sub-workspace
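For illustration, a minimal sketch of what that looks like, assuming the usual layout where `contracts/` carries its own workspace manifest (the member list here is illustrative, not the repo's actual one):

```toml
# contracts/Cargo.toml - the contracts' own sub-workspace (illustrative)
[workspace]
members = [
    # ...existing contract crates...
    "performance",
]
```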
```rust
let mut tester = init_contract_tester();
let admin = tester.admin_msg();
let new_measurement = MeasurementKind::from("new-measurement");
```
nit-ish: to actually validate that it requires admin privileges, you also have to check that it errors out when called from any other account.
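A sketch of that extra negative test, mirroring the `init_contract_tester` harness from the snippet above; the helper names (`generate_account`, `execute_msg`) and the execute message variant are assumptions, not the PR's actual API:

```rust
// Sketch only: helper names and the message variant are assumptions,
// not the exact API introduced by this PR.
#[test]
fn adding_measurement_kind_requires_admin() {
    let mut tester = init_contract_tester();

    // any account other than the admin (hypothetical helper)
    let random_account = tester.generate_account();

    let res = tester.execute_msg(
        random_account,
        &ExecuteMsg::AddMeasurementKind {
            kind: MeasurementKind::from("new-measurement"),
        },
    );

    // the call must be rejected for every non-admin sender
    assert!(res.is_err());
}
```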
```rust
let mut non_existent_measurement_kind = Vec::new();
```

```rust
// 3. submit it
if self.node_bonded(deps.as_ref(), first.node_id)? {
```
this is just me thinking aloud, but I wonder: would it perhaps simplify things if a batch submission was always guaranteed to be for one kind at a time?
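Purely to make the idea concrete, one possible shape such a message could take; every name here is hypothetical, not the PR's actual API:

```rust
use cosmwasm_schema::cw_serde;

// Placeholder types standing in for the contract's own definitions.
pub type NodeId = u32;
pub type MeasurementKind = String;

#[cw_serde]
pub struct NodeResult {
    pub node_id: NodeId,
    pub value: String, // placeholder for whatever a single measurement carries
}

// One measurement kind per batch: every entry implicitly shares `kind`,
// so a submission can never mix kinds and the handler only ever touches
// one kind of storage key per call.
#[cw_serde]
pub enum ExecuteMsg {
    SubmitBatch {
        kind: MeasurementKind,
        results: Vec<NodeResult>,
    },
}
```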
```diff
-let key = (epoch_id, data.node_id);
+let key = (epoch_id, data.node_id, data.measurement_kind.clone());
 let updated = match self.results.may_load(storage, key)? {
```
you could remove all the kind cloning all over the place if you changed your map key to `pub(crate) results: Map<(EpochId, NodeId, &'static str), NodeResults>`
I don't think you can do that: this string is user-defined at runtime, you can't store it in a `&'static str`.
you totally can, thanks to how cosmwasm contracts work :)

Look at, for example, the NetworkMonitorsStorage. Internally it uses `pub(crate) authorised: Map<&'static Addr, NetworkMonitorDetails>`; those keys are also defined at runtime as new entries are being added.
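A minimal sketch of that pattern, assuming `cw-storage-plus` and placeholder key/value types (`EpochId`, `NodeId`, `u64`); the point is that `&'static str` only appears in the const declaration, while call sites can still pass a runtime string slice without cloning:

```rust
use cosmwasm_std::{StdResult, Storage};
use cw_storage_plus::Map;

// placeholder aliases; the real contract defines its own types
type EpochId = u32;
type NodeId = u32;

// `&'static str` is just how the const is declared; borrowed runtime keys
// still work at the call sites below.
const RESULTS: Map<(EpochId, NodeId, &'static str), u64> = Map::new("results");

fn store_result(
    storage: &mut dyn Storage,
    epoch_id: EpochId,
    node_id: NodeId,
    kind: &str, // user-defined at runtime, no clone required
    value: u64,
) -> StdResult<()> {
    RESULTS.save(storage, (epoch_id, node_id, kind), &value)
}
```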
```diff
     .min(retrieval_limits::NODE_EPOCH_PERFORMANCE_MAX_LIMIT) as usize;

-    let start = start_after.map(Bound::exclusive);
+    let start = start_after.map(|node_id| Bound::exclusive((node_id + 1, String::new())));
```
- wait, why are we skipping one node?
- if you have to provide a rather opaque value to the bound, i.e. `String::new()`, perhaps the whole query signature should be changed? Look at queries in other contracts: for pagination, the caller is responsible for providing all elements of the subkey.
`String::new()` is intended to start at `""` onwards for MeasurementKind, i.e. it's supposed to capture all measurement kinds under a certain node & epoch.

In other pagination cases in the contract you can "cut" anywhere across nodes & epochs. This signature guarantees that you cannot cut across the measurement kinds of a node.
I don't see how this would be more ergonomic to paginate through:

page 1
```
(epoch_1, node_1, measurement_1)
(epoch_1, node_1, measurement_2)
(epoch_1, node_1, measurement_3)
(epoch_1, node_2, measurement_1)
(epoch_1, node_2, measurement_2)
```

page 2
```
(epoch_1, node_2, measurement_3)
(epoch_1, node_3, measurement_1)
(epoch_1, node_3, measurement_2)
(epoch_1, node_3, measurement_3)
(epoch_1, node_4, measurement_1)
```

but I'm open to it if you think it's more valuable
> In other pagination cases in the contract, you can "cut" anywhere across nodes & epochs. This signature guarantees that you cannot cut across measurement kinds of a node.

I think it's perfectly fine to allow cutting across the measurement kinds of a node; after all, this is a paged query to get all measurements of a node and multiple calls are expected.
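To make the suggested signature concrete, a sketch under the assumption that storage is keyed by `(EpochId, NodeId, MeasurementKind)` with a placeholder value type: the caller supplies every element of the key it wants to resume after, so no opaque `String::new()` bound (and no `node_id + 1` skip) is needed.

```rust
use cosmwasm_std::{Deps, Order, StdResult};
use cw_storage_plus::{Bound, Map};

// placeholder types; the contract defines the real ones
type EpochId = u32;
type NodeId = u32;
type MeasurementKind = String;

const RESULTS: Map<(EpochId, NodeId, &'static str), u64> = Map::new("results");

// Raw rows straight out of storage, paginated over the full composite key.
// Pages are allowed to cut across the measurement kinds of a node; the caller
// just feeds the last returned key back in as `start_after`.
pub fn query_results_paged(
    deps: Deps,
    start_after: Option<(EpochId, NodeId, MeasurementKind)>,
    limit: usize,
) -> StdResult<Vec<((EpochId, NodeId, MeasurementKind), u64)>> {
    let start = start_after
        .as_ref()
        .map(|(epoch, node, kind)| Bound::exclusive((*epoch, *node, kind.as_str())));

    RESULTS
        .range(deps.storage, start, None, Order::Ascending)
        .take(limit)
        .collect()
}
```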
```rust
// because API aggregates per NodeId, and the storage doesn't, we have to
// first collect all different measurements for a node and use an
// intermediary struct to map from storage to the object returned on the API
let mut measurements_per_node: HashMap<NodeId, Vec<(MeasurementKind, NodeResults)>> =
```
I think we're just overcomplicating it here. We should just read the data as it's laid out in the storage and let the caller sort it themselves.
The storage layout is a bit clunky due to cw-storage constraints. Don't we care about the API being nicer?
What I meant (I think, it's been 3 weeks :D) is that we shouldn't bother pre-sorting values in the response (i.e. assigning them per node). Instead we should just return raw rows as they are in the storage and leave it to the caller to deal with them, like we do with all other queries.
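On the response side, "raw rows" could look something like this; field and type names are illustrative, not the PR's actual response types, mirroring how other paged queries return flat lists rather than pre-grouped maps:

```rust
use cosmwasm_schema::cw_serde;

pub type EpochId = u32;
pub type NodeId = u32;
pub type MeasurementKind = String;

// One response row per storage row: no intermediary HashMap, no per-node
// grouping; the caller aggregates by node_id itself if it needs to.
#[cw_serde]
pub struct MeasurementRow {
    pub epoch_id: EpochId,
    pub node_id: NodeId,
    pub kind: MeasurementKind,
    pub results: Vec<String>, // placeholder for the contract's NodeResults
}

#[cw_serde]
pub struct PagedMeasurementsResponse {
    pub measurements: Vec<MeasurementRow>,
    pub start_next_after: Option<(EpochId, NodeId, MeasurementKind)>,
}
```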
```rust
.results
.range(deps.storage, start, None, Order::Ascending)
.take(limit)
// we can't cut a pagination limit here because we don't want to
```
Why not? Just change the query parameters and let the caller resume from where it's meant to resume from.
try_load_performance returns per kind
load_measurement_kind & unit test
Naming stale submission unit test & note
force-pushed from 7ba3f59 to 5102df3