feat: Add M015 Contribution-Weighted Rewards mechanism#26
feat: Add M015 Contribution-Weighted Rewards mechanism#26CShear wants to merge 2 commits intoregen-network:mainfrom
Conversation
…e impl, schemas, and fixtures Full standalone mechanism directory for M015 including: - SPEC.md: 13-section specification (scoring, stability tier, state machine, attack model) - reference-impl/m015_score.js: canonical scoring, stability allocation, and distribution - reference-impl/test_vectors/: deterministic input/expected fixtures (4 participants) - schemas/: JSON schemas for activity scores, stability commitments, and KPI output - datasets/fixtures/: v0_sample (single epoch) and v0_stability_sample (5 scenarios) - README.md: mechanism overview with cross-references - package.json: add "type": "module" for ESM imports - README.md mechanism index regenerated Self-test and repo verify both pass. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary of ChangesHello @CShear, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces the M015 Contribution-Weighted Rewards mechanism, fundamentally shifting the network's reward system from passive staking to an activity-based model. It incentivizes participants for their ecological and governance contributions, ensuring rewards align with value creation within the ecosystem. Additionally, it establishes a stability tier to offer predictable returns for long-term token holders, balancing active participation with holder stability. Highlights
Changelog
Activity
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
There was a problem hiding this comment.
Code Review
This pull request introduces the M015 Contribution-Weighted Rewards mechanism, a significant new feature. The submission is comprehensive, including a detailed specification, a reference implementation, JSON schemas, and test fixtures. The code is well-structured and includes a self-test harness. However, I have identified a critical design flaw in the core scoring formula which disproportionately favors monetary activities over governance participation. Additionally, there is a schema mismatch in a test fixture, which should align with project data standards, and a minor code improvement opportunity. Addressing these points will be crucial for the mechanism's fairness and the overall quality of the codebase.
| activity_score[p] = ( | ||
| credit_purchase_value[p] * 0.30 + | ||
| credit_retirement_value[p] * 0.30 + | ||
| platform_facilitation_value[p] * 0.20 + | ||
| governance_votes_cast[p] * 0.10 + | ||
| proposal_credit * 0.10 | ||
| ) |
There was a problem hiding this comment.
The current scoring formula directly combines monetary values (e.g., credit_purchase_value in uregen) with simple counts (e.g., governance_votes_cast). This creates a significant imbalance where monetary activities can completely dominate the score, rendering governance participation almost meaningless. For example, a single credit purchase of 1,000,000 uregen contributes 300,000 to the score, while casting 10 votes only contributes 1. To ensure a more balanced reward system that properly incentivizes all desired activities, consider normalizing these different types of inputs before applying weights. For instance, monetary values could be log-scaled, or all activity values could be normalized to a common range (e.g., 0-1) based on their distribution within the epoch.
| @@ -0,0 +1,113 @@ | |||
| { | |||
| "$schema": "../../schemas/m015_kpi.schema.json", | |||
There was a problem hiding this comment.
The JSON schema referenced here, m015_kpi.schema.json, does not match the structure of this fixture file. For example, this file contains a participants array which is not defined in the schema, and it lacks required schema properties like mechanism_id, scope, and as_of. To ensure data integrity and clarity, please either update this fixture to match the schema, or create and reference a new schema that accurately describes this data structure.
References
- When defining data structures or type definitions, ensure they are aligned with the project's central data standards repository (
regen-data-standards) to maintain coherence.
| * @param {Array<{passed: boolean, reached_quorum: boolean}>} opts.activities.proposals - proposals submitted | ||
| * @returns {{ total_score: number, breakdown: Object }} | ||
| */ | ||
| export function computeActivityScore({ participant, activities }) { |
… param - SPEC.md: add section 5.4 documenting deliberate scoring unit imbalance (monetary values dominate governance counts by design; v1 may normalize) - v0_sample.json: remove incorrect $schema ref to m015_kpi.schema.json (this is a replay dataset, not a KPI output) - m015_score.js: remove unused participant param from computeActivityScore Fixes Gemini review on PR regen-network#26. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
mechanisms/m015-contribution-weighted-rewards/directory with full mechanism specification, reference implementation, JSON schemas, and deterministic test fixturesWhat's included
SPEC.mdREADME.mdreference-impl/m015_score.jscomputeActivityScore,computeStabilityAllocation,computeDistribution+ self-test harnessreference-impl/test_vectors/schemas/datasets/fixtures/v0_sample.jsondatasets/fixtures/v0_stability_sample.jsonpackage.json"type": "module"for ESM importsREADME.md(root)Verification
node mechanisms/m015-contribution-weighted-rewards/reference-impl/m015_score.js→ PASSnode scripts/build-mechanism-index.mjs --check→ OKnode scripts/verify.mjs→ PASSTest plan
node mechanisms/m015-contribution-weighted-rewards/reference-impl/m015_score.jsnpm run verifynpm run check:indexphase-2/2.6-economic-reboot-mechanisms.mdM015 section for consistencyv0_stability_sample.jsonscenario 3 (cap hit)🤖 Generated with Claude Code