Add m014 Authority Validator Governance mechanism spec #24

CShear wants to merge 2 commits into regen-network:main
Conversation
Adds full mechanism spec for m014 following the established pattern:

- SPEC.md with 13 sections covering PoS-to-PoA transition, curated validator set with 3 composition categories (infrastructure builders, trusted ReFi partners, ecological data stewards), and fee-based compensation
- 3-factor performance scoring: uptime (0.4), governance participation (0.3), ecosystem contribution (0.3) with probation flags
- Validator lifecycle: CANDIDATE → APPROVED → ACTIVE → PROBATION → REMOVED
- Compensation model: equal base share + 10% performance bonus pool from M013
- JSON schemas for validator lifecycle, performance score, and KPI metrics
- Reference implementation with self-test (5 test vectors across categories)
- Replay datasets including PoS→PoA transition scenarios
- Updates MECHANISM_CONSUMERS.md with m014 consumer mappings

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary of Changes

Hello @CShear, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the foundational specification and supporting artifacts for mechanism m014, which outlines a strategic shift in the network's validator model. The change aims to foster a more robust and mission-aligned validator set by moving away from capital-weighted security to a system that rewards active contribution and performance. This new framework defines how validators will be selected, evaluated, compensated, and managed throughout their lifecycle, ensuring network stability and incentivizing ecological impact.

Highlights
Changelog
Activity
Code Review
This pull request introduces the "m014 Authority Validator Governance" mechanism, with comprehensive changes including the specification, JSON schemas, replay datasets, and a reference implementation. The review identified a critical security concern in the reference implementation: an unconditionally called `selfTest()` function that could lead to denial of service. It also found ambiguity in the specification's compensation formulas, a validator schema structure that needs alignment with the project's data standards, a bug in the KPI calculation, and redundant logic in the scoring script.
```javascript
validators_by_category.ecological_data_stewards >= 5;

// Byzantine tolerance: active_count > 3f + 1
const active_count = validators_by_status.active;
```
The `active_count` is calculated using only validators with `status: "active"`, excluding those on probation. This is inconsistent with `activeVals`, which correctly includes validators on probation for performance scoring. Validators on probation are still part of the active set and should be included in calculations for Byzantine tolerance and compensation. This bug can lead to incorrect KPI reporting and compensation logic; `active_count` should be based on the length of `activeVals`.
```diff
- const active_count = validators_by_status.active;
+ const active_count = activeVals.length;
```
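A minimal sketch of the fix the review is asking for, assuming the status values and the `activeVals` naming from the comments above (the actual field names in `m014_kpi.js` may differ):

```javascript
// Sketch: probation validators remain part of the active set.
// Status strings ("active", "probation") are assumed from the review discussion.
function getActiveVals(validators) {
  return validators.filter((v) => v.status === "active" || v.status === "probation");
}

// BFT bound n >= 3f + 1 implies the set of n validators tolerates
// at most f = floor((n - 1) / 3) faulty members.
function maxFaultyTolerated(activeCount) {
  return Math.max(0, Math.floor((activeCount - 1) / 3));
}
```

With `active_count = getActiveVals(validators).length`, both the Byzantine-tolerance KPI and the compensation split see the same set.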
```javascript
  if (fail > 0) process.exit(1);
}

selfTest();
```
The `selfTest()` function is called unconditionally at the top level of the module. This is a security concern: it can lead to denial of service (DoS) if test dependencies are missing or if tests fail, crashing the application or terminating the Node.js process. It also introduces unexpected behavior, output, and unnecessary performance overhead on every module import.
```diff
- selfTest();
+ // selfTest(); // Remove or guard this call
```
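The follow-up commit says the call was guarded with a `process.argv` check rather than removed. A sketch of that guard, with the file name assumed to be `m014_score.js` as in the test plan:

```javascript
// Sketch: guard the self-test so importing the module has no side effects.
// The guard mirrors the follow-up commit's "process.argv check"; the exact
// entry-point file name is assumed.
function selfTest() {
  // ... run test vectors here, returning the number of failures ...
  return 0;
}

// Only run when this file is executed directly (node m014_score.js),
// never when it is require()'d by another module.
if (process.argv[1] && process.argv[1].endsWith("m014_score.js")) {
  if (selfTest() > 0) process.exit(1);
}
```

Importing the module now never calls `process.exit()`, so a failing test vector can no longer take down a consumer process.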
```
base_compensation_per_validator = (validator_fund_balance * 0.90) / active_validator_count / period
performance_bonus_per_validator = (validator_fund_balance * 0.10) * (validator_score / total_scores) / period
total_compensation = base_compensation + performance_bonus
```
The compensation allocation formulas include a `/ period` term at the end. This is ambiguous and inconsistent with the reference implementation in `m014_kpi.js`. If `validator_fund_balance` represents the total funds available for the period, dividing by `period` again is incorrect. Suggest removing `/ period` for clarity and to align with the implementation.
```diff
- base_compensation_per_validator = (validator_fund_balance * 0.90) / active_validator_count / period
- performance_bonus_per_validator = (validator_fund_balance * 0.10) * (validator_score / total_scores) / period
+ base_compensation_per_validator = (validator_fund_balance * 0.90) / active_validator_count
+ performance_bonus_per_validator = (validator_fund_balance * 0.10) * (validator_score / total_scores)
  total_compensation = base_compensation + performance_bonus
```
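A small sketch of the corrected allocation, assuming `validator_fund_balance` already covers exactly one period (the reading the suggestion argues for); validator objects and field names are illustrative, not taken from `m014_kpi.js`:

```javascript
// Sketch: 90% equal base share + 10% score-weighted bonus, no "/ period" term.
// Assumes the fund balance is the total available for this single period.
function allocate(validatorFundBalance, validators) {
  const basePool = validatorFundBalance * 0.90;
  const bonusPool = validatorFundBalance * 0.10;
  const totalScores = validators.reduce((sum, v) => sum + v.score, 0);
  const baseShare = basePool / validators.length;
  return validators.map((v) => ({
    id: v.id,
    base: baseShare,
    bonus: totalScores > 0 ? bonusPool * (v.score / totalScores) : 0,
  }));
}
```

Note the base and bonus pools always sum back to `validatorFundBalance`, which is the invariant the spec's `total_compensation` line implies.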
References
- When specifying transition logic between different mechanisms (e.g., multipliers), explicitly define the function (e.g., max()) and the state-based conditions (phase-gating) that govern the transition to prevent implementation ambiguity.
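The reference bullet above can be made concrete with a toy sketch. Everything here is hypothetical (function and phase names are invented for illustration, not taken from the m014 spec); it only shows what "explicit `max()` plus phase-gating" means:

```javascript
// Hypothetical sketch of a phase-gated transition multiplier.
// Phase names and multiplier semantics are assumed, not from SPEC.md.
function transitionMultiplier(phase, posMultiplier, poaMultiplier) {
  if (phase === "pos") return posMultiplier;          // pre-transition: PoS rules only
  if (phase === "transition") {
    // Explicit function choice: take the more favorable multiplier
    // while both mechanisms are live, rather than leaving it implicit.
    return Math.max(posMultiplier, poaMultiplier);
  }
  return poaMultiplier;                               // "poa": post-transition rules only
}
```

Spelling out both the combining function and the phase condition removes the ambiguity the review flags.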
```javascript
const performance_score = totalWeight > 0
  ? Number((weightedSum / totalWeight * 1.0).toFixed(4))
  : 0.0;

// Normalize to original weight basis when all factors present
const normalizedScore = availableCount === FACTOR_KEYS.length
  ? Number(weightedSum.toFixed(4))
  : performance_score;
```
The logic to calculate `performance_score` and then `normalizedScore` is redundant and confusing. When all factors are present, `totalWeight` is 1, so `performance_score` already equals `weightedSum`. The conditional assignment to `normalizedScore` is unnecessary and can be simplified for better readability and maintainability.
```javascript
const performance_score = totalWeight > 0
  ? weightedSum / totalWeight
  : 0.0;
// When all factors are present, this is the direct weighted sum.
// When some are missing, it's re-normalized based on available weights.
const normalizedScore = Number(performance_score.toFixed(4));
```

```json
"performance": {
  "type": "object",
  "additionalProperties": false,
  "description": "Most recent performance evaluation",
  "properties": {
    "uptime": {
      "type": "number",
      "minimum": 0.0,
      "maximum": 1.0,
      "description": "blocks_signed / blocks_expected"
    },
    "governance_participation": {
      "type": "number",
      "minimum": 0.0,
      "maximum": 1.0,
      "description": "votes_cast / proposals_available"
    },
    "ecosystem_contribution": {
      "type": "number",
      "minimum": 0.0,
      "maximum": 1.0,
      "description": "AGENT-004 assessed contribution score"
    }
  }
}
```
This schema defines a validator structure that is inconsistent with `datasets/schema.json` and the fixture data.

- It places performance factors under an optional `performance` object, whereas other schemas use a `factors` object at the top level.
- It defines the factor values (e.g., `uptime`) as `type: "number"`, not allowing `null`. This contradicts the replay dataset schema and the reference implementation, which correctly handle `null` values for missing factors.

Aligning these schemas would improve consistency and reduce confusion.
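A sketch tying the two review points together: null-tolerant scoring with re-normalization over the available weights. The weights come from the PR description (uptime 0.4, governance 0.3, ecosystem 0.3); the function and constant names are assumptions, not the actual `m014_score.js` API:

```javascript
// Sketch: weighted performance score that tolerates null/missing factors.
// Weights follow the spec; helper names are illustrative only.
const WEIGHTS = {
  uptime: 0.4,
  governance_participation: 0.3,
  ecosystem_contribution: 0.3,
};

function performanceScore(factors) {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const [key, weight] of Object.entries(WEIGHTS)) {
    const value = factors[key];
    if (value === null || value === undefined) continue; // missing factor: skip
    weightedSum += value * weight;
    totalWeight += weight;
  }
  // Re-normalize over the weights that were actually available; when every
  // factor is present, totalWeight is 1 and this is the plain weighted sum.
  return totalWeight > 0 ? Number((weightedSum / totalWeight).toFixed(4)) : 0;
}
```

This is why the schema should permit `null` for factor values: the scoring path already has defined semantics for a missing factor.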
References
- When defining data structures or type definitions, ensure they are aligned with the project's central data standards repository (regen-data-standards) to maintain coherence.
…tion formula, schema nulls

- KPI active_count now includes probation validators (they're in active set)
- Guard selfTest() with process.argv check to prevent DoS on module import
- Remove ambiguous `/ period` from compensation formula in SPEC.md
- Allow null for performance factor types in validator schema
- Simplify redundant score normalization logic

Fixes Gemini review on PR regen-network#24.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary

Test plan

- `node mechanisms/m014-authority-validator-governance/reference-impl/m014_score.js` — self-test PASS (5 vectors, probation flags verified)
- `npm run verify` — PASS

🤖 Generated with Claude Code