[Enhancement]: replace functionalities and added minimal features #53
anuragShingare30 wants to merge 2 commits into StabilityNexus:main
Conversation
Walkthrough

Adds detailed local/LAN testing documentation; implements automatic background mining and CLI controls; introduces chain exchange and atomic replacement with full validation; enhances mempool inspection/restoration APIs; adds per-peer ready-handshake and chain messaging in P2P; and adds coinbase handling in state.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Node as Local Node
    participant Mempool as Mempool
    participant Chain as Blockchain
    participant Network as P2P Network
    participant Peers as Connected Peers
    loop Auto-Mining Loop (enabled)
        Node->>Node: Create coinbase tx
        Node->>Mempool: Request pending transactions
        Mempool-->>Node: Return pending txs
        Node->>Chain: Mine block (txs + coinbase)
        Chain-->>Node: Mined block
        Node->>Network: Broadcast block
        Network->>Peers: Send block to peers
    end
```
```mermaid
sequenceDiagram
    participant Initiator as Initiator Node
    participant Network as P2P Network
    participant Peer as Remote Peer
    participant Chain as Blockchain
    participant State as State
    Initiator->>Network: request_chain()
    Network->>Peer: send get_chain
    Peer->>Network: send_chain(chain_data)
    Network-->>Initiator: deliver chain_data
    Initiator->>Chain: validate_chain(chain_data)
    Chain->>State: validate/apply txs on temp state
    State-->>Chain: validation result
    alt Chain valid and longer
        Initiator->>Chain: replace_chain(chain_data)
        Chain->>State: rebuild state
        State-->>Chain: state rebuilt
    else invalid or shorter
        Initiator->>Initiator: retain local chain
    end
```
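The sync flow in the second diagram reduces to a longest-valid-chain rule. A minimal Python sketch of that decision (function name and parameters are illustrative, not the project's API; `validate` stands in for `Blockchain.validate_chain`):

```python
# Sketch of the sync decision: adopt the remote chain only when it
# validates AND is strictly longer than the local one.
def should_replace(local_height: int, remote_chain: list, validate) -> bool:
    if len(remote_chain) <= local_height:
        return False  # equal or shorter: retain local chain
    return validate(remote_chain)
```

Equal-length chains are deliberately rejected before validation runs, which matches the "invalid or shorter" branch above.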
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 1 failed (inconclusive)
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
main.py (1)
141-142: ⚠️ Potential issue | 🟠 Major

Unconditional mempool drain loses unrelated pending transactions.

When a valid block is received from the network, Line 142 calls `mempool.get_transactions_for_block()`, which drains ALL pending transactions, not just the ones included in the received block. This means legitimate transactions that weren't in the received block are silently discarded. The mempool should only remove transactions that were actually included in the received block.
🛠️ Suggested fix: Remove only matching transactions
Consider adding a method to `Mempool` that removes specific transactions by their IDs:

```diff
- # Drain matching txs from mempool so they aren't re-mined
- mempool.get_transactions_for_block()
+ # Remove only transactions that were included in the received block
+ # TODO: Add mempool.remove_transactions(block.transactions) method
+ # For now, this drains all - needs proper fix
+ mempool.get_transactions_for_block()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main.py` around lines 141 - 142, The current call to mempool.get_transactions_for_block() unconditionally drains all pending transactions; instead remove only the transactions present in the received block: update the code that handles an incoming valid block to collect the tx IDs from the block and call a new Mempool method (e.g., Mempool.remove_transactions_by_ids or Mempool.remove_txns) that accepts a list/set of transaction IDs, and implement that method in the Mempool class to find and delete only those matching transactions rather than clearing the entire mempool; ensure the handler uses the block's transaction identifiers (not full objects) and that Mempool.remove_transactions_by_ids is referenced where mempool.get_transactions_for_block() was called.
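For illustration, a minimal sketch of such a helper. The storage layout (`_pending` as a dict) and the method signatures are assumptions, not the actual `Mempool` in `minichain/mempool.py`:

```python
# Hypothetical Mempool with targeted removal: only the txs that appeared
# in the received block are dropped; unrelated pending txs survive.
class Mempool:
    def __init__(self):
        self._pending = {}        # tx_id -> transaction (assumed layout)
        self._seen_tx_ids = set()

    def add_transaction(self, tx_id, tx) -> bool:
        if tx_id in self._seen_tx_ids:
            return False
        self._pending[tx_id] = tx
        self._seen_tx_ids.add(tx_id)
        return True

    def remove_transactions_by_ids(self, tx_ids) -> int:
        """Drop only the given txs, leaving unrelated ones pending."""
        removed = 0
        for tx_id in set(tx_ids):
            if self._pending.pop(tx_id, None) is not None:
                removed += 1
        return removed
```

The block handler would then pass the received block's transaction IDs instead of draining everything.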
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@COMMAND.md`:
- Around line 9-12: Update the documented entry point and flag names to match
the actual code: replace references to "cli.py" with "main.py" and replace the
"--peers" flag with "--connect" (e.g., the example commands and all other
occurrences noted around lines 19, 22, 42, 75, 81); ensure every example and
usage string in COMMAND.md consistently uses main.py and --connect so they match
the implemented CLI entry point and flag names.
- Around line 40-42: Replace the hardcoded developer-specific path and the
outdated script name: change the `cd ~/web2/minichain` line to a
project-agnostic placeholder or relative invocation (e.g. `cd /path/to/minichain
# adjust to your project directory` or use a git-root helper like `cd $(git
rev-parse --show-toplevel)`), keep `source .venv/bin/activate` as-is, and update
the command `python3 cli.py --port 9000 --mine` to run the current entrypoint
`python3 main.py --port 9000 --mine` so the instructions work across
environments and reference the correct script names.
- Around line 1-3: Add a top-level heading to the file and normalize spacing
around headings and code blocks: insert a single H1 (e.g., "# Commands for
testing") at the very top, ensure there is a blank line before and after each
heading (including "### Test 1: Same Machine, Two Terminals"), and add blank
lines surrounding any fenced code blocks so markdown renders consistently and
complies with common markdown parsers.
In `@main.py`:
- Around line 392-396: The fixed code should stop relying on an arbitrary sleep
and instead ensure the peer is ready before sending the chain sync: modify or
extend network.connect_to_peer to only resolve when the connection
handshake/ready-ack is received (or add a new awaitable like
network.wait_for_peer_ready(peer_id)), and then call network.request_chain once
that readiness promise completes; if changing connect_to_peer is not possible,
implement a short retry/backoff loop that calls network.request_chain and
retries on a transient failure or "not ready" response until success or a
configured timeout. Reference the functions network.connect_to_peer and
network.request_chain when making these changes.
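If changing `connect_to_peer` is not feasible, the fallback retry loop could be sketched like this. The exception types, attempt count, and delays are illustrative assumptions, as is the `network.request_chain` call shape:

```python
import asyncio

# Hypothetical retry/backoff wrapper around network.request_chain: retries
# transient failures with exponential backoff instead of a fixed sleep.
async def request_chain_with_retry(network, attempts: int = 5,
                                   base_delay: float = 0.2):
    delay = base_delay
    for attempt in range(attempts):
        try:
            return await network.request_chain()
        except (ConnectionError, TimeoutError):
            if attempt == attempts - 1:
                raise  # exhausted: surface the failure to the caller
            await asyncio.sleep(delay)
            delay *= 2  # exponential backoff
```

A ready-handshake awaitable remains the cleaner fix; this loop only papers over the race.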
- Around line 196-213: The mining failure path currently re-adds drained
transactions with mempool.add_transaction after get_transactions_for_block
removed their IDs from _seen_tx_ids, which can silently drop txs if
add_transaction fails or the pool fills; change this by (a) avoiding draining
until mining is likely by calling mempool.get_pending_transactions() to verify
enough/valid txs before creating the Block, and (b) implement and call a
dedicated mempool.restore_transactions(transactions) method that re-inserts txs
while bypassing duplicate/seen-ID checks but still enforces capacity limits (or
returns failures so the caller can retry/log), replacing the current loop that
uses add_transaction in the except block and referencing
get_transactions_for_block, add_transaction, _seen_tx_ids,
get_pending_transactions, and restore_transactions.
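A minimal sketch of the suggested `restore_transactions` helper. The class shape, `max_size` limit, and the `{"restored": n, "dropped": n}` result dict are assumptions about `minichain/mempool.py`, not its actual API:

```python
# Hypothetical restore path: re-inserts drained txs while bypassing the
# seen-ID duplicate check, but still enforcing the capacity limit and
# reporting drops so the caller can log or retry.
class Mempool:
    def __init__(self, max_size: int = 1000):
        self.max_size = max_size
        self._pending = []
        self._seen_tx_ids = set()

    def restore_transactions(self, transactions) -> dict:
        restored = dropped = 0
        for tx in transactions:
            if len(self._pending) >= self.max_size:
                dropped += 1  # reported instead of silently lost
                continue
            self._pending.append(tx)          # no duplicate/seen-ID check
            self._seen_tx_ids.add(tx["id"])
            restored += 1
        return {"restored": restored, "dropped": dropped}
```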
In `@minichain/chain.py`:
- Around line 209-248: The code redundantly re-validates transactions in
replace_chain after validate_chain already verified them; update validate_chain
(or create a new helper) to return the validated State (e.g., return the State
instance used during validation or a tuple like (True, validated_state)) and
then in replace_chain use that returned validated_state instead of re-applying
validate_and_apply for each Transaction when reconstructing new_chain (remove or
skip the for tx in transactions validate_and_apply loop), keeping genesis_block
creation and block reconstruction but using the precomputed state to avoid
double work; ensure calls and signatures for validate_chain and replace_chain
are updated consistently.
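One way to sketch the return-the-state refactor. The dict-based state and `apply_tx` callback are illustrative stand-ins for the project's `State` and `validate_and_apply`:

```python
# Hypothetical validate_chain that returns (ok, state) so replace_chain
# can reuse the validated state instead of re-applying every transaction.
def validate_chain(chain_data, apply_tx):
    """Return (True, validated_state) on success, (False, None) on failure."""
    state = {}  # stand-in for a fresh State instance
    for block in chain_data:
        for tx in block.get("transactions", []):
            if not apply_tx(state, tx):
                return False, None
    return True, state
```

Returning a uniform two-value tuple from every path also avoids the bare-`False` inconsistency flagged in the second review pass.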
- Around line 117-125: The genesis validation currently checks
genesis.get("hash") and genesis.get("index") but misses verifying
genesis.get("previous_hash"); update the validation in the block that handles
genesis (variable genesis in validate_chain or similar) to also assert
genesis.get("previous_hash") == "0" (consistent with _create_genesis_block) and
if not, call logger.warning("Chain validation failed: Genesis previous_hash not
'0'") and return False; ensure you reference GENESIS_HASH, genesis, and
_create_genesis_block to keep the checks consistent.
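The three genesis checks combined could look like this hedged sketch; the dict accessors mirror the `genesis.get(...)` calls quoted above, while the function name and log wording are assumptions:

```python
import logging

logger = logging.getLogger(__name__)

# Hypothetical full genesis validation: hash, index, AND previous_hash,
# consistent with a _create_genesis_block that sets previous_hash == "0".
def validate_genesis(genesis: dict, expected_hash: str) -> bool:
    if genesis.get("hash") != expected_hash:
        logger.warning("Chain validation failed: genesis hash mismatch")
        return False
    if genesis.get("index") != 0:
        logger.warning("Chain validation failed: genesis index not 0")
        return False
    if genesis.get("previous_hash") != "0":
        logger.warning("Chain validation failed: Genesis previous_hash not '0'")
        return False
    return True
```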
- Around line 201-207: The method currently returns True when an incoming chain
(chain_data) is equal in length and valid but not actually replacing self.chain,
which breaks the docstring "True if chain was replaced"; change the behavior so
that after validating equal-length chains via validate_chain(chain_data) it
returns False (or another explicit non-replaced sentinel) instead of True, and
update the method's docstring to document the new return semantics ("True if
chain was replaced, False if not replaced; valid-but-not-replaced chains return
False"). Ensure references to self.chain and chain_data remain unchanged and
only the return and docstring are adjusted.
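The corrected return semantics can be sketched as follows (standalone function with illustrative parameters, not the actual method signature in `minichain/chain.py`):

```python
# Hypothetical replace_chain honoring "True if chain was replaced": a
# valid chain of equal (or shorter) length is NOT adopted and returns False.
def replace_chain(local_chain: list, incoming_chain: list, is_valid) -> bool:
    """Return True only if the local chain was actually replaced."""
    if len(incoming_chain) <= len(local_chain):
        return False  # valid-but-not-longer: not replaced
    if not is_valid(incoming_chain):
        return False
    local_chain[:] = incoming_chain  # atomic in-place replacement
    return True
```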
In `@minichain/p2p.py`:
- Around line 171-190: In send_chain and send_to_peer replace logger.error(...)
in the except blocks with logger.exception(...) so the exception stack trace is
logged; update the two except clauses inside the async methods send_chain(self,
writer, chain_data) and send_to_peer(self, writer, payload) to call
logger.exception with the same descriptive message (e.g., "Network: Failed to
send chain: %s" or "Network: Failed to send to peer: %s") so you preserve
context while capturing the traceback for better diagnostics.
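For illustration, a sketch of the `logger.exception` pattern in one of the send paths; the function body is an assumption modeled on asyncio's `StreamWriter` interface, not the project's actual `send_to_peer`:

```python
import asyncio
import logging

logger = logging.getLogger("minichain.p2p")

# logger.exception behaves like logger.error but also records the
# traceback of the active exception, so failures stay diagnosable.
async def send_to_peer(writer, payload: bytes) -> bool:
    try:
        writer.write(payload)
        await writer.drain()
        return True
    except Exception:
        logger.exception("Network: Failed to send to peer")
        return False
```

`logger.exception(...)` is equivalent to `logger.error(..., exc_info=True)` and is only valid inside an exception handler.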
In `@minichain/state.py`:
- Around line 176-187: Remove the duplicated method definitions by deleting the
second occurrences of update_contract_storage_partial and credit_mining_reward
so the class only contains the original implementations; locate the duplicate
blocks named update_contract_storage_partial and credit_mining_reward (the
redefined methods) and remove them, leaving the first definitions intact, then
run tests/lint to ensure no references depended on the duplicated copies.
---
Outside diff comments:
In `@main.py`:
- Around line 141-142: The current call to mempool.get_transactions_for_block()
unconditionally drains all pending transactions; instead remove only the
transactions present in the received block: update the code that handles an
incoming valid block to collect the tx IDs from the block and call a new Mempool
method (e.g., Mempool.remove_transactions_by_ids or Mempool.remove_txns) that
accepts a list/set of transaction IDs, and implement that method in the Mempool
class to find and delete only those matching transactions rather than clearing
the entire mempool; ensure the handler uses the block's transaction identifiers
(not full objects) and that Mempool.remove_transactions_by_ids is referenced
where mempool.get_transactions_for_block() was called.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 167c0146-0fef-40d9-b19c-65271d5912fa
📒 Files selected for processing (6)
COMMAND.md, main.py, minichain/chain.py, minichain/mempool.py, minichain/p2p.py, minichain/state.py
Actionable comments posted: 3
♻️ Duplicate comments (1)
main.py (1)
209-220: ⚠️ Potential issue | 🟠 Major

Restore drained transactions when `chain.add_block(...)` returns `False`.

At Line 211, failed block acceptance currently skips mempool restoration unless an exception is thrown. That still loses drained txs on non-exception rejection paths.
♻️ Proposed fix
```diff
 try:
     mined_block = mine_block(block, difficulty=4, timeout_seconds=5)
     if chain.add_block(mined_block):
         logger.info("⛏️ Mined block #%d! Hash: %s...", mined_block.index, mined_block.hash[:16])
         await network.broadcast_block(mined_block)
+    else:
+        logger.warning("Auto-mine produced a block rejected by local chain; restoring drained txs")
+        if pending_txs:
+            result = mempool.restore_transactions(pending_txs)
+            if result.get("dropped"):
+                logger.warning("Mempool restore dropped %d txs", result["dropped"])
 except Exception:
     # Mining timeout or error - return transactions to mempool
     if pending_txs:
         result = mempool.restore_transactions(pending_txs)
         if result.get("dropped"):
             logger.warning("Mempool restore dropped %d txs", result["dropped"])
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@main.py` around lines 209 - 220, The current flow only restores drained transactions on exceptions, so when chain.add_block(mined_block) returns False those pending_txs are lost; change the logic after calling chain.add_block(mined_block) in the try block to handle the False case by calling mempool.restore_transactions(pending_txs) and logging any dropped count (use logger.warning like the exception path), and only broadcast the block via network.broadcast_block(mined_block) when add_block returns True; ensure you don't double-restore in the except handler by keeping restore in the add_block False branch and leaving the except handler for true exceptions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@COMMAND.md`:
- Around line 1-4: Remove the duplicated section heading "## Commands for
testing" so the document only contains a single top-level title and one section
heading; specifically delete the repeated "## Commands for testing" line (the
duplicate heading in the provided diff) to eliminate redundancy and leave just
the original "# Commands for testing" and any unique section content beneath it.
In `@minichain/chain.py`:
- Around line 139-146: The validate_chain function currently returns a bare
False on some early failure paths which is incompatible with how replace_chain
unpacks its result; update the early returns inside validate_chain (e.g., the
index check and previous_hash check branches shown) to return a consistent tuple
like (False, None) so every failure path returns the same two-value structure
expected by replace_chain.
---
Duplicate comments:
In `@main.py`:
- Around line 209-220: The current flow only restores drained transactions on
exceptions, so when chain.add_block(mined_block) returns False those pending_txs
are lost; change the logic after calling chain.add_block(mined_block) in the try
block to handle the False case by calling
mempool.restore_transactions(pending_txs) and logging any dropped count (use
logger.warning like the exception path), and only broadcast the block via
network.broadcast_block(mined_block) when add_block returns True; ensure you
don't double-restore in the except handler by keeping restore in the add_block
False branch and leaving the except handler for true exceptions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 901eabd1-3d36-4c44-bd66-5612756de5df
📒 Files selected for processing (6)
COMMAND.md, main.py, minichain/chain.py, minichain/mempool.py, minichain/p2p.py, minichain/state.py
Summary
This PR introduces minor helper additions and replacements that were missing from the current implementation:
- `GENESIS_HASH` constant, `height` property, and `add_block()`
- coinbase tx bypass logic
- `get_pending_transactions()` and `size` methods
- Chain Synchronization Protocol and Chain Sync Messaging
- updated the user interface and `COMMAND.md`
AI Usage Disclosure
Check one of the checkboxes below:
I have used the following AI models and tools: TODO
We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact. AI slop is strongly discouraged and may lead to banning and blocking. Do not spam our repos with AI slop.