⚠️ Warning: Rate limit exceeded
Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 55 minutes and 9 seconds.

⌛ How to resolve this issue?
After the wait time has elapsed, a review can be triggered again. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?
CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
⛔ Files ignored due to path filters (1)
📒 Files selected for processing (42)
📝 Walkthrough
Adds extensive test infrastructure (unit, component, integration), many fixtures (topology, keys, configs), pytest/poe configuration, and a new GitHub Actions workflow to run the test suite; also updates CI to install the tests dependency group.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Suggested labels
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Code Review
This pull request introduces a comprehensive testing suite for the HexRift application, including unit, component, and integration tests, along with necessary test fixtures and configuration updates in pyproject.toml. The feedback focuses on improving test code quality by refactoring CLI test helpers to reduce duplication and ensuring consistent singleton resets, as well as cleaning up redundant inline imports and moving the yaml import to the module level for better maintainability.
Actionable comments posted: 10
🧹 Nitpick comments (8)
tests/integration/conftest.py (1)
15-21: Avoid duplicating the top-level singleton reset fixture.
`tests/conftest.py` already defines the same autouse reset for all tests. Keeping another fixture with the same name here shadows it for integration tests and creates maintenance drift unless integration needs different behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/conftest.py` around lines 15 - 21, The autouse fixture reset_singleton duplicates the top-level reset already defined elsewhere and shadows it for integration tests; remove this duplicate fixture (the function reset_singleton that sets BaseApplication._instance and HexRiftApp._instance to None before and after yield) from tests/integration/conftest.py, or if integration tests require different behavior, rename the fixture and adjust its scope/behavior accordingly so it no longer unintentionally overrides the global reset_singleton that clears BaseApplication._instance and HexRiftApp._instance.

.github/workflows/ci-tests.yaml (1)
11-24: Constrain the workflow token permissions.
This test job only needs to read the repository. Add explicit least-privilege permissions so org/repo defaults can't grant unnecessary write scopes.

Suggested workflow hardening
```diff
 on:
   push:
     branches:
       - main
   pull_request:
     branches:
       - main

+permissions:
+  contents: read
+
 jobs:
   tests:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-tests.yaml around lines 11 - 24, The workflow grants default token scopes; restrict it by adding an explicit permissions block so the tests job only has repository read access — update the workflow (for the tests job identified by jobs.tests / name: 🧪 Tests) to include permissions: contents: read (either at workflow root or job level) so the GITHUB_TOKEN cannot obtain write scopes from org/repo defaults.

tests/fixtures/keys/mskA00.yaml (1)
1-4: These test fixtures are confirmed as synthetic — no rotation needed.
The keys are located in `tests/fixtures/keys/` and referenced in test topology files (`tests/fixtures/topology.yaml`), following the documented key format from the architecture guide. They are test-only material.

The optional improvement remains valid: consider adding a `.gitleaksallow` entry for this fixture path to suppress recurrent scanner findings on test keys.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/fixtures/keys/mskA00.yaml` around lines 1 - 4, These keys (entries decryption, encryption, reality_private_key, reality_public_key) are synthetic test fixtures and require no rotation; to suppress noisy scanner findings add a .gitleaksallow rule that whitelists this specific test fixture (the YAML that contains those keys) so CI scanners ignore it while keeping real keys blocked.

tests/unit/keys/test_decryption.py (1)
37-41: Consider asserting segment identity, not just count.
`test_without_padding_not_in_decryption` only checks that the decryption string has 4 dot-separated parts. A regression that replaced the `session_time` segment with an unintended `padding`-like token would still produce 4 parts. Consider also asserting `parts[:3] == ["mlkem768x25519plus", "native", "600s"]` to pin the prefix layout.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/keys/test_decryption.py` around lines 37 - 41, Update the test_without_padding_not_in_decryption to assert the identity of the prefix segments as well as the count: after calling generate_auth_keypair(AuthMethod.MLKEM768, "native", "600s") and splitting the dec string, keep the existing len(parts) == 4 check and add an assertion that parts[:3] == ["mlkem768x25519plus", "native", "600s"] so the test verifies the method/mode/session_time fields (referencing the test function name and generate_auth_keypair/AuthMethod.MLKEM768 used to produce the value).

tests/unit/schema/test_users.py (1)
49-58: Nit: use `AccessType.XHTTP.value` instead of hardcoded `"xhttp"`.
Using the string literal `"xhttp"` here (and elsewhere when calling `model_validate`) couples the test to the raw enum value. If the enum literal ever changes, this test would pass a stale value. Prefer `AccessType.XHTTP.value` for consistency with the rest of the suite, which uses the enum member.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/schema/test_users.py` around lines 49 - 58, Replace the hardcoded string "xhttp" in the test_extra_field_forbidden test with the enum-backed value to avoid coupling to a raw literal; update the call to User.model_validate to pass AccessType.XHTTP.value for the "access" field (import or reference the AccessType enum used elsewhere in tests) so the test uses the canonical enum value instead of a magic string.

tests/unit/render/test_serialize_config.py (1)
43-48: Assertion is weaker than intended.
`test_long_array_not_collapsed` asserts only `b"[\n" in result`, but `orjson` with `OPT_INDENT_2` on a dict always emits `{\n` followed by indented keys — so the serialized top-level dict itself does not produce `[\n`; only the array does here, which makes the test valid by accident for this specific input. Consider tightening the intent: assert the collapsed form is absent (e.g., `b'"very-long-string-value-0", "very-long-string-value-1"' not in result`) so a regression that silently collapses long arrays despite the 80-char guard would fail.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/render/test_serialize_config.py` around lines 43 - 48, The test test_long_array_not_collapsed is too weak because it only checks for b"[\n" which can appear from the outer dict; instead assert that the array was not collapsed by verifying the collapsed inline form is absent: change the assertion in test_long_array_not_collapsed to assert that the collapsed substring (for example b'"very-long-string-value-0", "very-long-string-value-1"') is not in the result, or otherwise assert that the array opening bracket is directly followed by a newline inside the array (e.g., ensure b'["very-long-string-value-0", "very-long-string-value-1"' is not present), referencing the serialize_config call and the test_long_array_not_collapsed function to locate the change.

tests/component/test_cli.py (1)
14-35: Duplicated singleton reset — conftest autouse fixture should already cover this.
Per the AI summary, `tests/conftest.py` provides an autouse fixture that resets `BaseApplication._instance`/`HexRiftApp._instance` around each test. Manually resetting them here is redundant and risks drift if the central fixture's behavior changes. Consider removing the in-function resets and relying solely on the autouse fixture.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/component/test_cli.py` around lines 14 - 35, The invoke and invoke_catching helpers duplicate singleton resets that the autouse fixture in conftest already performs; remove the lines that set BaseApplication._instance = None and HexRiftApp._instance = None from both functions so they only create a CliRunner and call runner.invoke(cli, ...), relying on the conftest autouse fixture for singleton isolation (keep function names invoke and invoke_catching and their usage of CliRunner and catch_exceptions unchanged).

tests/unit/derive/test_topology.py (1)
382-434: Unused `_ns()` helper methods.
`TestGetHubCdnClients._ns` (lines 383-384) and `TestGetHubUserShortIds._ns` (lines 409-410) are defined but never called — each test instead constructs `Namespace("t.ns")` inline. Either use the helpers for DRY or drop them.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/derive/test_topology.py` around lines 382 - 434, The _ns helper methods defined on TestGetHubCdnClients._ns and TestGetHubUserShortIds._ns are unused; remove those unused _ns methods (or alternatively update the tests to call self._ns() instead of constructing Namespace("t.ns") inline) so the helper is either used or deleted—look for the methods named _ns inside class TestGetHubCdnClients and class TestGetHubUserShortIds and either replace inline Namespace("t.ns") usages with self._ns() or delete the unused _ns definitions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@pyproject.toml`:
- Around line 29-32: The typecheck run is missing test deps, so update the Poe
"typecheck" task in pyproject.toml to ensure the "tests" dependency group is
available when running ty (e.g., change the task to install/enable the tests
group or run ty through the environment that includes the tests group) and also
change the CI step that currently runs "uv run ty check" in
.github/workflows/ci-code-quality.yaml to invoke ty with the tests dependencies
enabled (so test-only imports like pytest are present when checking
tests/conftest.py and tests/integration/conftest.py). Ensure you reference and
modify the Poe task named typecheck and the CI step that runs "uv run ty check".
In `@tests/component/test_cli.py`:
- Around line 57-61: The current test_invalid_topology_shows_error uses a weak
assertion with "or result.exit_code != 0" which lets a missing "Validation
error" message pass; update the test to make two explicit assertions: first
assert that "Validation error" is in result.output (using the result returned
from invoke_catching) and then assert that result.exit_code != 0 so both the
message content and non‑zero exit status are validated; modify the assertion in
test_invalid_topology_shows_error (around the invoke_catching call and the
`result` variable) accordingly.
In `@tests/fixtures/keys/nlA00.yaml`:
- Around line 1-4: This fixture file nlA00.yaml contains synthetic hardcoded
keys (fields: decryption, encryption, reality_private_key, reality_public_key)
that will trigger secret scanners; add an allowlist entry to the repository
scanner ignore file (e.g., .betterleaksignore) with the line
"tests/fixtures/keys/nlA00.yaml: Synthetic test fixture for regression testing;
not real credentials" and a brief comment so the scanners skip this specific
file while keeping the fixture in place for tests.
In `@tests/integration/test_full_pipeline.py`:
- Around line 154-158: The test_hub_config_structure currently calls
_build_for_node("mskA00") twice and assigns an unused generated_haproxy from the
first call; remove the duplicate call and unused binding by calling
_build_for_node("mskA00") only once and assigning its returned values to the
variables you actually use (e.g., generated, _ or _, generated_haproxy as
needed) so that generated is used to produce cfg via orjson.loads; update
variable names in test_hub_config_structure accordingly to eliminate the unused
generated_haproxy and redundant work.
- Around line 110-120: The test_haproxy_cfg_matches_committed currently reads
the expected haproxy.cfg with FIXTURE_CONFIGS_DIR / node_id /
"haproxy.cfg".read_text() and compares to generated, which can flake on Windows
due to CRLF differences; change it to mirror test_config_json_matches_committed
by reading the fixture as bytes or text and normalizing CRLFs (convert "\r\n" to
"\n") before the comparison so the output from _build_for_node and the committed
fixture compare consistently across platforms.
- Line 20: The failing type-check is caused by the top-level import of pytest in
tests/integration/test_full_pipeline.py being unresolved because pytest is only
in the tests dependency group; either add pytest and any test plugins to the dev
dependency group in pyproject.toml so the type-checker can resolve the import,
or update the ty configuration (tool.ty) to exclude the tests/ directory from
type-checking scope; locate the import line in test_full_pipeline.py (the
`import pytest` at top) and then choose one fix: add pytest to the dev group in
pyproject.toml or add an exclusions entry in tool.ty to ignore the tests/ path
so `uv run ty check` succeeds.
In `@tests/unit/keys/test_store.py`:
- Around line 12-17: The NodeKeys fixture returns realistic-looking secret
strings which trigger secret scanners; replace those with obvious non-secret
sentinel values (plain, low-entropy test strings) for the fields
reality_private_key, reality_public_key, decryption, and encryption inside the
NodeKeys instance so tests keep the same shape but no real/secret-like data is
present; ensure they remain strings (e.g., "test-reality-private-key",
"test-reality-public-key", "test-decryption-token", "test-encryption-token") and
keep the NodeKeys(...) call and field names unchanged.
In `@tests/unit/schema/test_exit_route.py`:
- Line 1: Update the Ty configuration so the test dependencies are visible
during type-checking: in the project pyproject.toml modify the
[tool.ty.environment] section to include the tests dependency group (so imports
like pytest used in tests/unit/schema/test_exit_route.py resolve), e.g. add
"tests" to the environment's dependency groups list (or ensure include contains
both "main" and "tests").
In `@tests/unit/schema/test_regions.py`:
- Line 1: The CI code-quality job fails to resolve the imported pytest in tests
(see tests/unit/schema/test_regions.py with "import pytest") because the
workflow only runs "uv sync --group dev"; update the ci-code-quality workflow
step that runs dependency sync (the command string "uv sync --group dev") to
include the tests group as well by changing it to "uv sync --group dev --group
tests" so the pytest package from the tests dependency group is installed before
running the type check.
In `@tests/unit/schema/test_root.py`:
- Around line 1-7: CI type-check fails because the test suite imports pytest but
the typing tool (`ty`) isn't resolving dev/test dependencies; fix by ensuring
`pytest` is available to `ty check`—either install the test/dev dependency group
before running type checks (e.g., run the environment sync that includes dev
groups such as `uv sync --all-groups` or install the
`dependency-groups.dev`/`test` that contains `pytest`), or configure `ty` to
include those groups (e.g., adjust ty config or invocation to include the
test/dev group). Ensure this change makes imports like `pytest` in files such as
the test module importing `pytest` resolvable by `ty check`.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 95ce6920-deca-41ff-980a-ccfe1843145d
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (41)
- .github/workflows/ci-tests.yaml
- pyproject.toml
- tests/__init__.py
- tests/component/__init__.py
- tests/component/conftest.py
- tests/component/test_cli.py
- tests/component/test_derive_controller.py
- tests/component/test_keys_controller.py
- tests/component/test_render_controller.py
- tests/component/test_schema_controller.py
- tests/conftest.py
- tests/fixtures/configs/mskA00/config.json
- tests/fixtures/configs/mskA00/haproxy.cfg
- tests/fixtures/configs/nlA00/config.json
- tests/fixtures/configs/nlA00/haproxy.cfg
- tests/fixtures/keys/mskA00.yaml
- tests/fixtures/keys/nlA00.yaml
- tests/fixtures/topology.yaml
- tests/integration/__init__.py
- tests/integration/conftest.py
- tests/integration/test_full_pipeline.py
- tests/unit/__init__.py
- tests/unit/derive/__init__.py
- tests/unit/derive/test_defaults.py
- tests/unit/derive/test_identity.py
- tests/unit/derive/test_topology.py
- tests/unit/keys/__init__.py
- tests/unit/keys/test_decryption.py
- tests/unit/keys/test_reality.py
- tests/unit/keys/test_store.py
- tests/unit/render/__init__.py
- tests/unit/render/test_haproxy.py
- tests/unit/render/test_serialize_config.py
- tests/unit/schema/__init__.py
- tests/unit/schema/test_defaults.py
- tests/unit/schema/test_exit_route.py
- tests/unit/schema/test_hub_route.py
- tests/unit/schema/test_regions.py
- tests/unit/schema/test_root.py
- tests/unit/schema/test_users.py
- tests/unit/test_model.py
Actionable comments posted: 1
🧹 Nitpick comments (6)
tests/unit/keys/test_store.py (1)
69-80: Optional: parametrize invalid-node-ID cases.
The three `TestPathValidation` tests differ only by input; `@pytest.mark.parametrize` over `["../evil", "a\\b", "a/b"]` (plus any other interesting cases like empty string or `"."`) would reduce duplication and make it trivial to extend coverage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/keys/test_store.py` around lines 69 - 80, The three almost-identical tests in TestPathValidation should be combined into a single parametrized test to reduce duplication: replace the separate test_path_traversal_rejected, test_backslash_rejected, and test_slash_rejected with one test function decorated with pytest.mark.parametrize that iterates over invalid node IDs (e.g. "../evil", "a\\b", "a/b", "" , ".") and asserts node_keys_exist(tmp_path, node_id) raises ValueError with "Invalid node ID"; keep the assertion and tmp_path fixture the same and reference node_keys_exist in the new test.

tests/integration/test_full_pipeline.py (3)
147-153: Assert decryption on the tagged inbound, not index zero.
The tests already locate `direct-xhttp`; using `cfg["inbounds"][0]` makes these structural checks fail on harmless inbound reordering.

Suggested assertion cleanup
```diff
-assert cfg["inbounds"][0]["settings"]["decryption"].startswith("mlkem768x25519plus.native.600s.7hyY0")
+assert reality_inbound["settings"]["decryption"].startswith("mlkem768x25519plus.native.600s.7hyY0")
 ...
-assert cfg["inbounds"][0]["settings"]["decryption"].startswith("mlkem768x25519plus.native.600s.wLmsb")
+assert direct_ib["settings"]["decryption"].startswith("mlkem768x25519plus.native.600s.wLmsb")
```
Also applies to: 166-194
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/test_full_pipeline.py` around lines 147 - 153, The test is asserting decryption settings against cfg["inbounds"][0] which can break when inbound order changes; use the previously located reality_inbound (found by tag "direct-xhttp") instead. Replace the assertion that checks cfg["inbounds"][0]["settings"]["decryption"] with an assertion against reality_inbound["settings"]["decryption"], and apply the same change for the similar assertions later in the file (the block referenced as lines 166-194) so all structural checks reference the tagged inbound rather than index zero.
241-246: Search for the specific direct-routing rule.
This currently grabs the first direct rule with any domain, then checks for `internal.example.com`. If another direct domain rule is added earlier, this test can fail for the wrong reason.

Suggested predicate tightening
```diff
-direct_rule = next((r for r in rules if r.get("outboundTag") == "direct" and r.get("domain")), None)
+direct_rule = next(
+    (
+        r
+        for r in rules
+        if r.get("outboundTag") == "direct"
+        and "internal.example.com" in r.get("domain", [])
+    ),
+    None,
+)
 assert direct_rule is not None
-assert "internal.example.com" in direct_rule["domain"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/test_full_pipeline.py` around lines 241 - 246, The test currently selects the first direct rule with any domain which can yield false failures; update the predicate used to find direct_rule so it specifically finds a rule whose outboundTag == "direct" and whose domain list/string contains "internal.example.com" (use the existing variables _build_for_node, generated, cfg, rules, direct_rule to locate the code and check r.get("domain") safely before membership testing); then assert that direct_rule is not None and that "internal.example.com" is present in that rule’s domain.
40-43: Avoid relying on implicit schema loading here too.
If `SchemaController.config` does not lazy-load, `get_node()` will access `_config` before it exists. Load the committed topology before reading nodes/config.

Suggested helper hardening
```diff
 app = HexRiftApp(yaml_path=FIXTURE_TOPOLOGY)
+app.schema.load(FIXTURE_TOPOLOGY)
 region, node = app.schema.get_node(node_id)
```
Use the same verification as the fixture comment to confirm whether the current controller still lazy-loads.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/test_full_pipeline.py` around lines 40 - 43, Call the schema's committed-topology loader before accessing nodes or config to avoid implicit lazy-load races: invoke app.schema.load_committed_topology() (or the controller's equivalent ensure/loader method) immediately after creating HexRiftApp and before calling app.schema.get_node(node_id), reading app.schema.config, or calling app.keys.load_node_keys(...); also include the same verification used in the fixture (the check used to confirm whether the controller still lazy-loads) right after the load to assert the topology is present.

tests/unit/schema/test_root.py (1)
326-331: Tighten the expected validation message.
This test is specifically about duplicate warp `vless_route`, but `match="Duplicate"` would also pass for unrelated duplicate errors introduced by fixture drift. Prefer matching the specific validator message.

Suggested test tightening
```diff
-with pytest.raises(ValidationError, match="Duplicate"):
+with pytest.raises(ValidationError, match="Duplicate warp vless_route"):
     ConglomerateConfig.model_validate(d)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/schema/test_root.py` around lines 326 - 331, The test test_warp_vless_route_duplicate_with_exit currently uses a broad match="Duplicate"; change it to specifically assert the validator message for the warp vless_route collision: call ConglomerateConfig.model_validate(d) inside a pytest.raises(ValidationError) as excinfo, then inspect excinfo.value.errors() and assert there is an error whose 'msg' equals the validator's duplicate vless_route message and whose 'loc' points to the regions entry's warp.vless_route (e.g., contains 'regions' and 'warp'); this tightens the check to the specific duplicate warp vless_route failure instead of any "Duplicate" message.

tests/integration/conftest.py (1)
14-16: The fixture is correctly structured. The `SchemaController.config` property confirms lazy-loading: it checks `if getattr(self, "_config", None) is None` and calls `self.load(self.app.yaml_path)` on first access. The fixture will auto-load the topology when tests access `real_app.schema.config`.

Explicitly calling `app.schema.load(FIXTURE_TOPOLOGY)` remains an optional hardening measure for clarity, making the fixture's setup more explicit and independent of lazy-loading behavior.
Verify each finding against the current code and only fix it if needed. In `@tests/integration/conftest.py` around lines 14 - 16, The fixture real_app currently returns HexRiftApp(yaml_path=FIXTURE_TOPOLOGY) relying on SchemaController.config lazy-loading; to make setup explicit, after creating the app call app.schema.load(FIXTURE_TOPOLOGY) (or app.schema.load(app.yaml_path)) so the topology is eagerly loaded for tests; update the real_app fixture to instantiate HexRiftApp and then invoke SchemaController.load via app.schema.load(...) before returning the app.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/fixtures/keys/mskA00.yaml`:
- Around line 1-4: This fixture embeds real-looking private key material (fields
reality_private_key, reality_public_key, decryption, encryption) and must be
marked as test-only or replaced; update the mskA00.yaml fixture to either
replace those values with clearly synthetic deterministic test keys (e.g.,
"test-reality-private-key-..." / "test-reality-public-key-..." or values
produced by a test-only generator) and add an explicit top-line comment like "#
TEST-ONLY: synthetic keys for unit tests — do not use in production" or add the
repository's approved fixture allowlist marker so secret scanners ignore it,
ensuring all four symbols (decryption, encryption, reality_private_key,
reality_public_key) are handled.
---
Nitpick comments:
In `@tests/integration/conftest.py`:
- Around line 14-16: The fixture real_app currently returns
HexRiftApp(yaml_path=FIXTURE_TOPOLOGY) relying on SchemaController.config
lazy-loading; to make setup explicit, after creating the app call
app.schema.load(FIXTURE_TOPOLOGY) (or app.schema.load(app.yaml_path)) so the
topology is eagerly loaded for tests; update the real_app fixture to instantiate
HexRiftApp and then invoke SchemaController.load via app.schema.load(...) before
returning the app.
In `@tests/integration/test_full_pipeline.py`:
- Around line 147-153: The test is asserting decryption settings against
cfg["inbounds"][0] which can break when inbound order changes; use the
previously located reality_inbound (found by tag "direct-xhttp") instead.
Replace the assertion that checks cfg["inbounds"][0]["settings"]["decryption"]
with an assertion against reality_inbound["settings"]["decryption"], and apply
the same change for the similar assertions later in the file (the block
referenced as lines 166-194) so all structural checks reference the tagged
inbound rather than index zero.
- Around line 241-246: The test currently selects the first direct rule with any
domain which can yield false failures; update the predicate used to find
direct_rule so it specifically finds a rule whose outboundTag == "direct" and
whose domain list/string contains "internal.example.com" (use the existing
variables _build_for_node, generated, cfg, rules, direct_rule to locate the code
and check r.get("domain") safely before membership testing); then assert that
direct_rule is not None and that "internal.example.com" is present in that
rule’s domain.
- Around line 40-43: Call the schema's committed-topology loader before
accessing nodes or config to avoid implicit lazy-load races: invoke
app.schema.load_committed_topology() (or the controller's equivalent
ensure/loader method) immediately after creating HexRiftApp and before calling
app.schema.get_node(node_id), reading app.schema.config, or calling
app.keys.load_node_keys(...); also include the same verification used in the
fixture (the check used to confirm whether the controller still lazy-loads)
right after the load to assert the topology is present.
In `@tests/unit/keys/test_store.py`:
- Around line 69-80: The three almost-identical tests in TestPathValidation
should be combined into a single parametrized test to reduce duplication:
replace the separate test_path_traversal_rejected, test_backslash_rejected, and
test_slash_rejected with one test function decorated with
pytest.mark.parametrize that iterates over invalid node IDs (e.g. "../evil",
"a\\b", "a/b", "" , ".") and asserts node_keys_exist(tmp_path, node_id) raises
ValueError with "Invalid node ID"; keep the assertion and tmp_path fixture the
same and reference node_keys_exist in the new test.
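A sketch of the consolidated test follows. `node_keys_exist` here is a self-contained stand-in so the snippet runs on its own; the real helper lives in the keys store module and takes the same `(base_path, node_id)` arguments.

```python
import pytest


def node_keys_exist(base_path, node_id: str) -> bool:
    """Stand-in: reject node IDs that could escape the keys directory."""
    if not node_id or node_id in {".", ".."} or "/" in node_id or "\\" in node_id:
        raise ValueError(f"Invalid node ID: {node_id!r}")
    return False  # real implementation would check base_path / node_id


@pytest.mark.parametrize("node_id", ["../evil", "a\\b", "a/b", "", "."])
def test_invalid_node_id_rejected(tmp_path, node_id):
    with pytest.raises(ValueError, match="Invalid node ID"):
        node_keys_exist(tmp_path, node_id)
```

One parametrized function replaces three near-duplicates, and adding a new invalid-ID case is a one-element change to the parameter list.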
In `@tests/unit/schema/test_root.py`:
- Around line 326-331: The test test_warp_vless_route_duplicate_with_exit
currently uses a broad match="Duplicate"; change it to specifically assert the
validator message for the warp vless_route collision: call
ConglomerateConfig.model_validate(d) inside a pytest.raises(ValidationError) as
excinfo, then inspect excinfo.value.errors() and assert there is an error whose
'msg' equals the validator's duplicate vless_route message and whose 'loc'
points to the regions entry's warp.vless_route (e.g., contains 'regions' and
'warp'); this tightens the check to the specific duplicate warp vless_route
failure instead of any "Duplicate" message.
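An illustrative version of that tightened assertion, using a deliberately simplified model in place of the real `ConglomerateConfig` (the model, validator, and message below are stand-ins, not the repository's actual code):

```python
import pytest
from pydantic import BaseModel, ValidationError, model_validator


class RegionStandIn(BaseModel):
    """Simplified stand-in for the real region model."""

    vless_routes: list[str]
    warp_vless_route: str

    @model_validator(mode="after")
    def warp_route_not_duplicated(self):
        if self.warp_vless_route in self.vless_routes:
            raise ValueError("Duplicate vless_route: warp route collides with an existing route")
        return self


with pytest.raises(ValidationError) as excinfo:
    RegionStandIn.model_validate({"vless_routes": ["/a"], "warp_vless_route": "/a"})

# Inspect errors() for the specific message instead of a broad match="Duplicate";
# in the real test you would also assert on the 'loc' tuple (regions / warp).
errors = excinfo.value.errors()
assert any("Duplicate vless_route" in e["msg"] for e in errors)
```

A broad `match="Duplicate"` would also pass if some unrelated duplicate-ID validator fired first; inspecting `errors()` pins the test to the intended failure.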
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7ba01bb5-34c2-4a48-a606-461d7565e3fc
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (42)
- .github/workflows/ci-code-quality.yaml
- .github/workflows/ci-tests.yaml
- pyproject.toml
- tests/__init__.py
- tests/component/__init__.py
- tests/component/conftest.py
- tests/component/test_cli.py
- tests/component/test_derive_controller.py
- tests/component/test_keys_controller.py
- tests/component/test_render_controller.py
- tests/component/test_schema_controller.py
- tests/conftest.py
- tests/fixtures/configs/mskA00/config.json
- tests/fixtures/configs/mskA00/haproxy.cfg
- tests/fixtures/configs/nlA00/config.json
- tests/fixtures/configs/nlA00/haproxy.cfg
- tests/fixtures/keys/mskA00.yaml
- tests/fixtures/keys/nlA00.yaml
- tests/fixtures/topology.yaml
- tests/integration/__init__.py
- tests/integration/conftest.py
- tests/integration/test_full_pipeline.py
- tests/unit/__init__.py
- tests/unit/derive/__init__.py
- tests/unit/derive/test_defaults.py
- tests/unit/derive/test_identity.py
- tests/unit/derive/test_topology.py
- tests/unit/keys/__init__.py
- tests/unit/keys/test_decryption.py
- tests/unit/keys/test_reality.py
- tests/unit/keys/test_store.py
- tests/unit/render/__init__.py
- tests/unit/render/test_haproxy.py
- tests/unit/render/test_serialize_config.py
- tests/unit/schema/__init__.py
- tests/unit/schema/test_defaults.py
- tests/unit/schema/test_exit_route.py
- tests/unit/schema/test_hub_route.py
- tests/unit/schema/test_regions.py
- tests/unit/schema/test_root.py
- tests/unit/schema/test_users.py
- tests/unit/test_model.py
✅ Files skipped from review due to trivial changes (21)
- .github/workflows/ci-code-quality.yaml
- tests/fixtures/keys/nlA00.yaml
- tests/unit/schema/test_hub_route.py
- .github/workflows/ci-tests.yaml
- tests/unit/test_model.py
- tests/unit/schema/test_exit_route.py
- tests/unit/schema/test_users.py
- tests/unit/schema/test_defaults.py
- tests/component/test_schema_controller.py
- tests/fixtures/topology.yaml
- tests/unit/keys/test_decryption.py
- tests/fixtures/configs/mskA00/haproxy.cfg
- tests/fixtures/configs/nlA00/config.json
- tests/component/test_render_controller.py
- tests/fixtures/configs/nlA00/haproxy.cfg
- tests/unit/derive/test_identity.py
- tests/component/test_cli.py
- tests/unit/derive/test_defaults.py
- tests/unit/render/test_haproxy.py
- tests/fixtures/configs/mskA00/config.json
- tests/component/test_derive_controller.py
🚧 Files skipped from review as they are similar to previous changes (8)
- pyproject.toml
- tests/unit/keys/test_reality.py
- tests/component/test_keys_controller.py
- tests/component/conftest.py
- tests/conftest.py
- tests/unit/render/test_serialize_config.py
- tests/unit/schema/test_regions.py
- tests/unit/derive/test_topology.py
🧹 Nitpick comments (5)
tests/integration/test_full_pipeline.py (1)
37-61: Optional: cache `_build_for_node` results by `node_id`.
The helper is invoked ~12 times across the module with only two distinct `node_id` values (`nlA00`, `mskA00`). Since inputs are the committed fixtures and outputs are deterministic, a simple cache cuts redundant `HexRiftApp` init + render passes without changing semantics.
♻️ Proposed change
 from pathlib import Path
+from functools import lru_cache
 import orjson
 import pytest
@@
-def _build_for_node(node_id: str) -> tuple[bytes, str]:
+@lru_cache(maxsize=None)
+def _build_for_node(node_id: str) -> tuple[bytes, str]:
     """Build xray config bytes + haproxy cfg for a node using committed keys."""
Previous CRLF-normalization and duplicate-build concerns are correctly addressed in the current revision.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/integration/test_full_pipeline.py` around lines 37 - 61, The helper _build_for_node is called repeatedly with only two node_id values; cache its results by node_id to avoid reinitializing HexRiftApp and repeated renders — add a simple memoization (e.g., a module-level dict cache or functools.lru_cache wrapper around _build_for_node) keyed by node_id, return cached (serialize_config(xray_config), haproxy_cfg) when present, and populate the cache on first compute; ensure the cache key is node_id and the cached value matches the current return tuple from _build_for_node.
tests/component/test_derive_controller.py (2)
356-362: Minor: dedupe `pairs_all` before membership check.
`pairs_all` contains one entry per hub node; if two hubs yield identical URLs (region-default reality dedupe logic), the `in` check still works but the comparison is noisier than needed. Not a correctness issue.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/component/test_derive_controller.py` around lines 356 - 362, In test_specific_hub_node, dedupe the URLs from pairs_all before checking membership: collect the second element (URL) from each tuple in pairs_all into a set (e.g., derived from pairs_all) and then assert that pairs_specific[0][1] is in that set; this replaces the list comprehension [u for _, u in pairs_all] with a deduplicated set to make the membership check deterministic and noise-free.
16-18: Consider module-scoping the `app` fixture.
All derive tests that use this fixture are read-only (`derive_users`, `derive_groups`, `derive_nodes`, `build_share_urls`) and run against the same committed topology. Instantiating `HexRiftApp` per test re-parses the YAML 18+ times.
♻️ Proposed change
-@pytest.fixture()
-def app() -> HexRiftApp:
+@pytest.fixture(scope="module")
+def app() -> HexRiftApp:
     return HexRiftApp(yaml_path=FIXTURE_TOPOLOGY)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/component/test_derive_controller.py` around lines 16 - 18, The fixture `app` is recreated for every test causing repeated YAML parsing; change its pytest fixture scope to module (add scope="module" to the `@pytest.fixture` decorator) so the single HexRiftApp(yaml_path=FIXTURE_TOPOLOGY) instance is reused across the derive tests; keep the fixture name `app` and the return value of HexRiftApp intact so tests using derive_users, derive_groups, derive_nodes, and build_share_urls work unchanged.
tests/component/test_keys_controller.py (2)
32-45: Optional: look up the exit node by id rather than positional index.
`data["regions"][0]["nodes"][0]` silently tracks whatever shape the shared `topology_yaml` fixture happens to have. If the fixture gains another region or reorders nodes, this mutation lands on the wrong node and the test still “passes” while exercising the wrong scenario.
♻️ Proposed change
 data = yaml.safe_load(topology_yaml.read_text())
-# Disable keys for exitN1
-data["regions"][0]["nodes"][0]["keys"] = {"enabled": False, "mode": "native", "session_time": "600s"}
+# Disable keys for exitN1
+for region in data["regions"]:
+    for node in region["nodes"]:
+        if node["id"] == "exitN1":
+            node["keys"] = {"enabled": False, "mode": "native", "session_time": "600s"}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/component/test_keys_controller.py` around lines 32 - 45, The test test_disabled_keys_generates_none_strings currently mutates data["regions"][0]["nodes"][0] by position; instead locate the node with id "exitN1" by iterating regions and their nodes (or using a comprehension) and set its "keys" field to {"enabled": False, "mode": "native", "session_time": "600s"} so the test targets the correct node regardless of fixture ordering; keep the rest of the flow (writing disabled_yaml, HexRiftApp instantiation, instance.keys.gen_keys and load_node_keys, and assertions) unchanged.
22-30: Optional: also assert `decryption` changes on force re-gen.
`reality_private_key` is sufficient to prove regeneration happened, but the auth keypair lives in `decryption`/`encryption` and a regression that skipped regenerating those (but rotated reality keys) would pass this test. One extra line hardens the check.
♻️ Proposed change
 assert first.reality_private_key != second.reality_private_key
+assert first.decryption != second.decryption
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/component/test_keys_controller.py` around lines 22 - 30, In test_force_overwrites_and_returns_true, after calling app.keys.gen_keys(..., force=True) and loading first and second via app.keys.load_node_keys, add an assertion that the auth keypair changed as well (e.g., compare the decryption/encryption auth fields from first vs second) so that a regression which only rotated reality_private_key but skipped regenerating the decryption/encryption keys will fail; update the test to assert first.decryption != second.decryption (or equivalently compare the specific decryption/encryption private key attributes) in addition to the existing reality_private_key check.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/component/test_derive_controller.py`:
- Around line 356-362: In test_specific_hub_node, dedupe the URLs from pairs_all
before checking membership: collect the second element (URL) from each tuple in
pairs_all into a set (e.g., derived from pairs_all) and then assert that
pairs_specific[0][1] is in that set; this replaces the list comprehension [u for
_, u in pairs_all] with a deduplicated set to make the membership check
deterministic and noise-free.
- Around line 16-18: The fixture `app` is recreated for every test causing
repeated YAML parsing; change its pytest fixture scope to module (add
scope="module" to the `@pytest.fixture` decorator) so the single
HexRiftApp(yaml_path=FIXTURE_TOPOLOGY) instance is reused across the derive
tests; keep the fixture name `app` and the return value of HexRiftApp intact so
tests using derive_users, derive_groups, derive_nodes, and build_share_urls work
unchanged.
In `@tests/component/test_keys_controller.py`:
- Around line 32-45: The test test_disabled_keys_generates_none_strings
currently mutates data["regions"][0]["nodes"][0] by position; instead locate the
node with id "exitN1" by iterating regions and their nodes (or using a
comprehension) and set its "keys" field to {"enabled": False, "mode": "native",
"session_time": "600s"} so the test targets the correct node regardless of
fixture ordering; keep the rest of the flow (writing disabled_yaml, HexRiftApp
instantiation, instance.keys.gen_keys and load_node_keys, and assertions)
unchanged.
- Around line 22-30: In test_force_overwrites_and_returns_true, after calling
app.keys.gen_keys(..., force=True) and loading first and second via
app.keys.load_node_keys, add an assertion that the auth keypair changed as well
(e.g., compare the decryption/encryption auth fields from first vs second) so
that a regression which only rotated reality_private_key but skipped
regenerating the decryption/encryption keys will fail; update the test to assert
first.decryption != second.decryption (or equivalently compare the specific
decryption/encryption private key attributes) in addition to the existing
reality_private_key check.
In `@tests/integration/test_full_pipeline.py`:
- Around line 37-61: The helper _build_for_node is called repeatedly with only
two node_id values; cache its results by node_id to avoid reinitializing
HexRiftApp and repeated renders — add a simple memoization (e.g., a module-level
dict cache or functools.lru_cache wrapper around _build_for_node) keyed by
node_id, return cached (serialize_config(xray_config), haproxy_cfg) when
present, and populate the cache on first compute; ensure the cache key is
node_id and the cached value matches the current return tuple from
_build_for_node.
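The memoization described above is straightforward with `functools.lru_cache`. A self-contained demo, with the build outputs faked as strings in place of the real config bytes:

```python
from functools import lru_cache

calls: list[str] = []


@lru_cache(maxsize=None)
def build_for_node(node_id: str) -> tuple[str, str]:
    """Fake build: record each real invocation, return deterministic output."""
    calls.append(node_id)
    return (f"xray-config:{node_id}", f"haproxy-cfg:{node_id}")


for _ in range(6):  # ~12 calls across the module, two distinct node IDs
    build_for_node("nlA00")
    build_for_node("mskA00")

assert calls == ["nlA00", "mskA00"]  # each node actually built exactly once
```

Caching is safe here only because the inputs (committed fixtures) and outputs are deterministic, as the review comment notes; a cache over mutable state would hide test interdependence.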
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9e4f0a6b-69d7-45af-9487-4294c7e43941
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (42)
- .github/workflows/ci-code-quality.yaml
- .github/workflows/ci-tests.yaml
- pyproject.toml
- tests/__init__.py
- tests/component/__init__.py
- tests/component/conftest.py
- tests/component/test_cli.py
- tests/component/test_derive_controller.py
- tests/component/test_keys_controller.py
- tests/component/test_render_controller.py
- tests/component/test_schema_controller.py
- tests/conftest.py
- tests/fixtures/configs/mskA00/config.json
- tests/fixtures/configs/mskA00/haproxy.cfg
- tests/fixtures/configs/nlA00/config.json
- tests/fixtures/configs/nlA00/haproxy.cfg
- tests/fixtures/keys/mskA00.yaml
- tests/fixtures/keys/nlA00.yaml
- tests/fixtures/topology.yaml
- tests/integration/__init__.py
- tests/integration/conftest.py
- tests/integration/test_full_pipeline.py
- tests/unit/__init__.py
- tests/unit/derive/__init__.py
- tests/unit/derive/test_defaults.py
- tests/unit/derive/test_identity.py
- tests/unit/derive/test_topology.py
- tests/unit/keys/__init__.py
- tests/unit/keys/test_decryption.py
- tests/unit/keys/test_reality.py
- tests/unit/keys/test_store.py
- tests/unit/render/__init__.py
- tests/unit/render/test_haproxy.py
- tests/unit/render/test_serialize_config.py
- tests/unit/schema/__init__.py
- tests/unit/schema/test_defaults.py
- tests/unit/schema/test_exit_route.py
- tests/unit/schema/test_hub_route.py
- tests/unit/schema/test_regions.py
- tests/unit/schema/test_root.py
- tests/unit/schema/test_users.py
- tests/unit/test_model.py
✅ Files skipped from review due to trivial changes (21)
- .github/workflows/ci-code-quality.yaml
- tests/fixtures/keys/mskA00.yaml
- tests/conftest.py
- .github/workflows/ci-tests.yaml
- tests/unit/schema/test_exit_route.py
- tests/component/conftest.py
- tests/unit/keys/test_reality.py
- tests/unit/render/test_serialize_config.py
- tests/unit/test_model.py
- tests/unit/schema/test_defaults.py
- tests/fixtures/topology.yaml
- tests/unit/schema/test_regions.py
- tests/fixtures/configs/nlA00/haproxy.cfg
- tests/unit/render/test_haproxy.py
- tests/unit/keys/test_store.py
- tests/unit/derive/test_identity.py
- tests/unit/schema/test_root.py
- tests/fixtures/configs/mskA00/config.json
- tests/unit/keys/test_decryption.py
- tests/unit/schema/test_hub_route.py
- tests/unit/derive/test_topology.py
🚧 Files skipped from review as they are similar to previous changes (6)
- tests/integration/conftest.py
- pyproject.toml
- tests/fixtures/configs/mskA00/haproxy.cfg
- tests/component/test_schema_controller.py
- tests/component/test_render_controller.py
- tests/component/test_cli.py
Pull request type
Description
Why is this change needed?
Related Issues
Testing
Checklist
`uv run ruff check .` and `uv run ty check`
Version Bumping
`__init__.py`
Summary by CodeRabbit
Tests
Chores