Add m014 Authority Validator Governance mechanism spec#24

Open
CShear wants to merge 2 commits into regen-network:main from CShear:feat/m014-poa-governance

Conversation


CShear (Contributor) commented on Feb 18, 2026

Summary

  • Adds full mechanism spec for m014 (Authority Validator Governance) — transition from PoS to PoA with curated, compensated validator set
  • 3 composition categories: infrastructure builders, trusted ReFi partners, ecological data stewards (min 5 each)
  • 3-factor performance scoring: uptime (0.4), governance participation (0.3), ecosystem contribution (0.3)
  • Validator lifecycle: CANDIDATE → APPROVED → ACTIVE → PROBATION → REMOVED / TERM_EXPIRED
  • Fee-based compensation via M013 validator fund: equal base share + 10% performance bonus pool
  • Includes SPEC.md (13 sections), JSON schemas, reference implementation with self-test, replay datasets with PoS→PoA transition scenarios
  • Updates MECHANISM_CONSUMERS.md with m014 consumer mappings
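The 3-factor scoring rule summarized above can be sketched as follows. The weights come from the bullet (uptime 0.4, governance participation 0.3, ecosystem contribution 0.3); the function name and the convention of re-normalizing over available weights when a factor is null are assumptions for illustration, not the actual m014_score.js code.

```javascript
// Weights per the m014 spec summary: uptime 0.4, governance 0.3, ecosystem 0.3.
const WEIGHTS = {
  uptime: 0.4,
  governance_participation: 0.3,
  ecosystem_contribution: 0.3,
};

// Hypothetical scoring sketch: missing (null/undefined) factors are skipped
// and the score is re-normalized over the weights that remain.
function scoreValidator(factors) {
  let weightedSum = 0;
  let totalWeight = 0;
  for (const [key, weight] of Object.entries(WEIGHTS)) {
    const value = factors[key];
    if (value === null || value === undefined) continue;
    weightedSum += value * weight;
    totalWeight += weight;
  }
  return totalWeight > 0 ? Number((weightedSum / totalWeight).toFixed(4)) : 0;
}
```

With all three factors present the denominator is 1 and the result is the plain weighted sum; with factors missing, the score stays on a 0-to-1 scale instead of silently shrinking.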

Test plan

  • node mechanisms/m014-authority-validator-governance/reference-impl/m014_score.js — self-test PASS (5 vectors, probation flags verified)
  • npm run verify — PASS
  • m010 regression — PASS
  • Review composition criteria against WG consensus
  • Validate compensation model against M013 fee projections

🤖 Generated with Claude Code

Adds full mechanism spec for m014 following the established pattern:
- SPEC.md with 13 sections covering PoS-to-PoA transition, curated validator
  set with 3 composition categories (infrastructure builders, trusted ReFi
  partners, ecological data stewards), and fee-based compensation
- 3-factor performance scoring: uptime (0.4), governance participation (0.3),
  ecosystem contribution (0.3) with probation flags
- Validator lifecycle: CANDIDATE → APPROVED → ACTIVE → PROBATION → REMOVED
- Compensation model: equal base share + 10% performance bonus pool from M013
- JSON schemas for validator lifecycle, performance score, and KPI metrics
- Reference implementation with self-test (5 test vectors across categories)
- Replay datasets including PoS→PoA transition scenarios
- Updates MECHANISM_CONSUMERS.md with m014 consumer mappings

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@gemini-code-assist

Summary of Changes

Hello @CShear, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces the foundational specification and supporting artifacts for mechanism m014, which outlines a strategic shift in the network's validator model. The change aims to foster a more robust and mission-aligned validator set by moving away from capital-weighted security to a system that rewards active contribution and performance. This new framework defines how validators will be selected, evaluated, compensated, and managed throughout their lifecycle, ensuring network stability and incentivizing ecological impact.

Highlights

  • Transition to Proof of Authority (PoA): Introduced a comprehensive mechanism specification (m014) for transitioning the network's validator governance from Proof of Stake (PoS) to a curated, compensated Proof of Authority (PoA) model, aiming to address security vulnerabilities, misaligned incentives, and value disconnection.
  • Validator Performance Scoring: Defined a 3-factor weighted performance scoring system for validators, based on uptime (0.4), governance participation (0.3), and ecosystem contribution (0.3), with thresholds for review and probation.
  • Curated Validator Set & Lifecycle: Established a validator composition requirement with minimums across three categories (infrastructure builders, trusted ReFi partners, ecological data stewards) and a detailed lifecycle state machine (CANDIDATE, APPROVED, ACTIVE, PROBATION, REMOVED, TERM_EXPIRED).
  • Fee-Based Compensation Model: Implemented a compensation model sourced from the M013 validator fund, comprising an equal base share for all active validators and a 10% performance bonus pool distributed proportionally to performance scores.
  • Reference Implementation and Data: Provided a canonical reference implementation in JavaScript for computing validator scores and Key Performance Indicator (KPI) metrics, along with JSON schemas for outputs and deterministic replay datasets for testing PoS-to-PoA transition scenarios.
Changelog
  • README.md
    • Added a link to the new m014 Authority Validator Governance mechanism in the main README index.
  • docs/MECHANISM_CONSUMERS.md
    • Appended a new section detailing m014, including its canonical spec, output schemas, replay datasets, and known consumers like AGENT-004 and M013 integration.
  • mechanisms/m014-authority-validator-governance/README.md
    • Added a new README file providing an overview of m014, its outputs, what it doesn't cover in v0, how to reference its components, and details on replay datasets and schemas.
  • mechanisms/m014-authority-validator-governance/SPEC.md
    • Added the core specification document for m014, detailing the problem statement, target actors, signal definition, evidence inputs, scoring function, validator lifecycle, economic linkage, on-chain vs. off-chain boundaries, attack model, governance parameters, security invariants, and open questions.
  • mechanisms/m014-authority-validator-governance/datasets/README.md
    • Added a new README file explaining the purpose and usage of the deterministic replay datasets for m014, including how they are used to compute KPIs without the full MCP.
  • mechanisms/m014-authority-validator-governance/datasets/fixtures/v0_sample.json
    • Added a new JSON fixture file containing a sample of 5 active authority validators with varied performance profiles for m014 KPI computation.
  • mechanisms/m014-authority-validator-governance/datasets/fixtures/v0_transition_sample.json
    • Added a new JSON fixture file simulating a PoS-to-PoA transition snapshot, featuring validators in various lifecycle states (active, probation, candidate, removed, term_expired).
  • mechanisms/m014-authority-validator-governance/datasets/schema.json
    • Added a new JSON schema defining the structure for m014 replay datasets, including validator details and performance factors.
  • mechanisms/m014-authority-validator-governance/reference-impl/README.md
    • Added a new README file for the m014 reference implementation, outlining the inputs and outputs for score and KPI computations, and instructions for self-testing.
  • mechanisms/m014-authority-validator-governance/reference-impl/m014_kpi.js
    • Added a new JavaScript file containing the computeM014KPI function, which calculates aggregate KPI metrics for the authority validator set based on provided validator data.
  • mechanisms/m014-authority-validator-governance/reference-impl/m014_score.js
    • Added a new JavaScript file containing the computeM014Score function, which calculates a composite performance score for individual validators, along with a self-test harness.
  • mechanisms/m014-authority-validator-governance/reference-impl/test_vectors/vector_v0_sample.expected.json
    • Added a new JSON file containing the expected output for the m014 score self-test, based on the sample input.
  • mechanisms/m014-authority-validator-governance/reference-impl/test_vectors/vector_v0_sample.input.json
    • Added a new JSON file containing sample input data for the m014 score self-test, featuring various validator performance profiles.
  • mechanisms/m014-authority-validator-governance/schemas/README.md
    • Added a new README file describing the canonical JSON schemas for m014 outputs, including validator, performance, and KPI schemas.
  • mechanisms/m014-authority-validator-governance/schemas/m014_kpi.schema.json
    • Added a new JSON schema defining the structure for m014 KPI output blocks, including validator counts, performance averages, and compensation details.
  • mechanisms/m014-authority-validator-governance/schemas/m014_performance.schema.json
    • Added a new JSON schema defining the structure for m014 performance score outputs, including composite score, confidence, factor breakdown, and flags.
  • mechanisms/m014-authority-validator-governance/schemas/m014_validator.schema.json
    • Added a new JSON schema defining the structure for an individual m014 authority validator item, including address, category, status, and term details.
  • package.json
    • Updated the package configuration to specify 'type': 'module', enabling ES module syntax in JavaScript files.
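The reference-implementation entries above (m014_kpi.js, m014_score.js) can be pictured with a small aggregate sketch. Field names mirror the fixtures described in the changelog, but `summarizeValidators` and its output shape are assumptions here, not the actual computeM014KPI internals.

```javascript
// Illustrative KPI aggregation over a validator snapshot:
// counts validators by lifecycle status and averages available scores.
function summarizeValidators(validators) {
  const byStatus = {};
  for (const v of validators) {
    byStatus[v.status] = (byStatus[v.status] || 0) + 1;
  }
  const scored = validators.filter((v) => typeof v.score === "number");
  const avgScore = scored.length
    ? scored.reduce((sum, v) => sum + v.score, 0) / scored.length
    : 0;
  return { total: validators.length, by_status: byStatus, avg_score: avgScore };
}
```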
Activity
  • The pull request author, CShear, has provided a detailed summary and test plan in the pull request description.
  • The author has confirmed that the m014_score.js self-test passes with 5 vectors and verified probation flags.
  • The author has confirmed that npm run verify passes.
  • The author has confirmed that m010 regression passes.
  • The author has noted remaining items for review: composition criteria against WG consensus and compensation model validation against M013 fee projections.

@gemini-code-assist bot left a comment

Code Review

This pull request introduces the "m014 Authority Validator Governance" mechanism, with comprehensive changes including specification, JSON schemas, replay datasets, and a reference implementation. The review identified a critical security concern in the reference implementation regarding an unconditionally called selfTest() function, which could lead to Denial of Service. Furthermore, inconsistencies were found in the specification's compensation formulas, requiring explicit definition to prevent ambiguity, and the validator schema's structure, which needs alignment with project data standards. A bug in KPI calculation and redundancy in the scoring script were also noted.

validators_by_category.ecological_data_stewards >= 5;

// Byzantine tolerance: active_count > 3f + 1
const active_count = validators_by_status.active;

high

The active_count is calculated using only validators with status: "active", excluding those on probation. This is inconsistent with activeVals which correctly includes validators on probation for performance scoring. Validators on probation are still part of the active set and should be included in calculations for Byzantine tolerance and compensation. This bug can lead to incorrect KPI reporting and compensation logic. active_count should be based on the length of activeVals.

Suggested change
const active_count = validators_by_status.active;
const active_count = activeVals.length;
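The reviewer's point can be sketched as follows; `activeSet` and `toleratedFaults` are hypothetical names, assuming the usual BFT bound n ≥ 3f + 1 rather than the repository's exact code.

```javascript
// Validators on probation remain part of the active set for Byzantine-
// tolerance and compensation math; probation only flags their performance.
function activeSet(validators) {
  return validators.filter(
    (v) => v.status === "active" || v.status === "probation"
  );
}

// With n validators in the active set, the set tolerates
// f = floor((n - 1) / 3) Byzantine faults under n >= 3f + 1.
function toleratedFaults(n) {
  return Math.floor((n - 1) / 3);
}
```

Counting only status "active" would understate n whenever anyone is on probation, which is exactly the inconsistency flagged above.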

if (fail > 0) process.exit(1);
}

selfTest();

security-medium medium

The "selfTest()" function is called unconditionally at the top level of the module. This is a security concern as it can lead to Denial of Service (DoS) if test dependencies are missing or if tests fail, causing the application to crash or terminate the Node.js process. This practice also introduces unexpected behavior, output, and unnecessary performance overhead on every module import.

Suggested change
selfTest();
// selfTest(); // Remove or guard this call
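One way to guard the call: the follow-up commit mentions a process.argv check, and this sketch assumes that approach. The selfTest body is a stub standing in for the real five-vector harness.

```javascript
// Stub standing in for the real test-vector harness in m014_score.js.
function selfTest() {
  // ...the five test vectors would run here, exiting non-zero on failure...
  return true;
}

// Run the self-test only when this file is executed directly
// (e.g. `node m014_score.js`), never as a side effect of `import`.
const runAsScript = Boolean(
  process.argv[1] && process.argv[1].endsWith("m014_score.js")
);
if (runAsScript) selfTest();
```

Importers then get the exported functions without the process-exiting test path ever running, which addresses the DoS concern above.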

Comment on lines +71 to +73
base_compensation_per_validator = (validator_fund_balance * 0.90) / active_validator_count / period
performance_bonus_per_validator = (validator_fund_balance * 0.10) * (validator_score / total_scores) / period
total_compensation = base_compensation + performance_bonus

medium

The compensation allocation formulas include a / period term at the end. This is ambiguous and inconsistent with the reference implementation in m014_kpi.js. If validator_fund_balance represents the total funds available for the period, dividing by period again is incorrect. Suggest removing / period for clarity and to align with the implementation.

Suggested change
base_compensation_per_validator = (validator_fund_balance * 0.90) / active_validator_count / period
performance_bonus_per_validator = (validator_fund_balance * 0.10) * (validator_score / total_scores) / period
total_compensation = base_compensation + performance_bonus
base_compensation_per_validator = (validator_fund_balance * 0.90) / active_validator_count
performance_bonus_per_validator = (validator_fund_balance * 0.10) * (validator_score / total_scores)
total_compensation = base_compensation + performance_bonus
References
  1. When specifying transition logic between different mechanisms (e.g., multipliers), explicitly define the function (e.g., max()) and the state-based conditions (phase-gating) that govern the transition to prevent implementation ambiguity.
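The corrected split (90% equal base share, 10% bonus pool proportional to score) can be sketched like this; the function and field names are illustrative, not the m014_kpi.js API.

```javascript
// Hedged sketch of the compensation model after removing `/ period`:
// the fund balance is treated as the total available for the period.
function computeCompensation(fundBalance, validators) {
  const base = (fundBalance * 0.9) / validators.length;
  const totalScore = validators.reduce((sum, v) => sum + v.score, 0);
  return validators.map((v) => ({
    address: v.address,
    base,
    bonus: totalScore > 0 ? fundBalance * 0.1 * (v.score / totalScore) : 0,
  }));
}
```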

Comment on lines +64 to +71
const performance_score = totalWeight > 0
? Number((weightedSum / totalWeight * 1.0).toFixed(4))
: 0.0;

// Normalize to original weight basis when all factors present
const normalizedScore = availableCount === FACTOR_KEYS.length
? Number(weightedSum.toFixed(4))
: performance_score;

medium

The logic to calculate performance_score and then normalizedScore is redundant and confusing. When all factors are present, totalWeight is 1, and performance_score is already the same as weightedSum. The conditional assignment to normalizedScore is unnecessary. This can be simplified for better readability and maintainability.

  const performance_score = totalWeight > 0
    ? weightedSum / totalWeight
    : 0.0;

  // When all factors are present, this is the direct weighted sum.
  // When some are missing, it's re-normalized based on available weights.
  const normalizedScore = Number(performance_score.toFixed(4));

Comment on lines +55 to +78
"performance": {
"type": "object",
"additionalProperties": false,
"description": "Most recent performance evaluation",
"properties": {
"uptime": {
"type": "number",
"minimum": 0.0,
"maximum": 1.0,
"description": "blocks_signed / blocks_expected"
},
"governance_participation": {
"type": "number",
"minimum": 0.0,
"maximum": 1.0,
"description": "votes_cast / proposals_available"
},
"ecosystem_contribution": {
"type": "number",
"minimum": 0.0,
"maximum": 1.0,
"description": "AGENT-004 assessed contribution score"
}
}

medium

This schema defines a validator structure that is inconsistent with datasets/schema.json and the fixture data.

  1. It places performance factors under an optional performance object, whereas other schemas use a factors object at the top level.
  2. It defines the factor values (e.g., uptime) as type: "number", not allowing null. This contradicts the replay dataset schema and the reference implementation, which correctly handle null values for missing factors.

Aligning these schemas would improve consistency and reduce confusion.

References
  1. When defining data structures or type definitions, ensure they are aligned with the project's central data standards repository (regen-data-standards) to maintain coherence.
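A minimal illustration of the alignment the reviewer asks for, using the JSON Schema union-type form so factor values may be null (a sketch of one property, not the merged schema):

```json
{
  "uptime": {
    "type": ["number", "null"],
    "minimum": 0.0,
    "maximum": 1.0,
    "description": "blocks_signed / blocks_expected; null when no data is available for the period"
  }
}
```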

…tion formula, schema nulls

- KPI active_count now includes probation validators (they're in active set)
- Guard selfTest() with process.argv check to prevent DoS on module import
- Remove ambiguous `/ period` from compensation formula in SPEC.md
- Allow null for performance factor types in validator schema
- Simplify redundant score normalization logic

Fixes Gemini review on PR regen-network#24.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>