Conversation

> **Note:** Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the repository's CodeRabbit settings.
📝 Walkthrough

This PR extracts a generic deleter template, refines path-validity and cache-invalidation handling, and adds a built-in HTTP binary cache server.

Changes:
- Generic deleter template extraction
- Path validity and cache invalidation refinements
- HTTP binary cache server
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant NixServe
    participant Store
    participant Disk
    Client->>NixServe: GET /nix-cache-info
    NixServe->>Store: Query store metadata
    Store-->>NixServe: Return store dir, priority
    NixServe-->>Client: 200 nix-cache-info response
    Client->>NixServe: GET /<hash>.narinfo
    NixServe->>Store: queryPathInfo(storePath)
    alt Path exists
        Store-->>NixServe: Return NarInfo
        NixServe-->>Client: 200 narinfo response
    else Path missing
        Store-->>NixServe: InvalidPath/SubstituteGone
        NixServe-->>Client: 404 Not Found
    end
    Client->>NixServe: GET /nar/<hashPart>-<nar32>.nar
    NixServe->>NixServe: Validate expected NAR hash
    NixServe->>Store: dumpPath(storePath)
    loop Stream NAR chunks
        Store->>Disk: Read NAR data
        Disk-->>Store: Return chunk
        Store-->>NixServe: Callback with chunk
        NixServe-->>Client: Stream chunk
    end
    NixServe-->>Client: Complete 200 response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
> ⚠️ **Caution:** Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/libstore/filetransfer.cc (1)
**506-513:** ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Flip the `requestRange` check here.

The surrounding comment says transparent decompression must be disabled while resuming, but Line 510 currently does the opposite. As written, fresh downloads never advertise `Accept-Encoding`, while resumed downloads do.

Suggested fix:

```diff
-    if (requestRange && !request.data)
+    if (!requestRange && !request.data)
         /* Empty string means to enable all supported (that libcurl
            has been linked to support) encodings. */
         curl_easy_setopt(req, CURLOPT_ACCEPT_ENCODING, "");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/libstore/filetransfer.cc` around lines 506 - 513, The condition controlling CURLOPT_ACCEPT_ENCODING is inverted: enable transparent decompression only for fresh downloads (not when resuming or when sending data). Change the check around curl_easy_setopt(req, CURLOPT_ACCEPT_ENCODING, "") to require that requestRange is false and request.data is false (i.e. if (!requestRange && !request.data)) so CURLOPT_ACCEPT_ENCODING is set for normal downloads but skipped for uploads and resumed byte-range requests; update the surrounding comment if needed to match the corrected logic.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/libstore/filetransfer.cc`:
- Around line 185-194: The current prefix-discard logic that trims 'data' based
on bytesReceived/writtenToSink corrupts resumed downloads when the server
returns HTTP 206; modify the block that adjusts 'data' (using symbols
bytesReceived, prevReceived, writtenToSink, data) to skip this trimming when the
response is a 206 Partial Content. Retrieve the HTTP status for the current
transfer (e.g., via curl_easy_getinfo(CURLINFO_RESPONSE_CODE, &response_code) or
the existing response status variable) before performing the discard, and only
apply the prevReceived/writtenToSink substring logic when the status is not 206.
In `@src/nix/meson.build`:
- Around line 37-38: The build adds libmicrohttpd and src/nix/serve.cc
unconditionally which breaks Windows due to POSIX headers; guard both the
dependency (libmicrohttpd added to deps_private) and the source inclusion of
serve.cc with a platform check using host_machine.system() != 'windows' (same
pattern used for unix/daemon.cc and unix/store-roots-daemon.cc) in
src/nix/meson.build, and mirror the same conditional guards for the
dependency/source in src/nix/package.nix so Windows builds skip libmicrohttpd
and serve.cc.
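The guard described above might look like the following fragment. This is a sketch only: `deps_private` comes from the prompt, while the exact variable holding the source list is an assumption about the surrounding build file.

```meson
# Sketch: guard the libmicrohttpd dependency and serve.cc behind a platform
# check, mirroring the unix/daemon.cc pattern mentioned in the prompt.
if host_machine.system() != 'windows'
  deps_private += dependency('libmicrohttpd')
  sources += files('serve.cc')
endif
```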
In `@tests/functional/binary-cache.sh`:
- Around line 103-113: The if-condition checking binaryCache should match the
actual local cache URL (file://$cacheDir) so the WantMassQuery branch runs;
update the condition that references binaryCache to compare against the expanded
local cache URL (use the shell-expanded value of cacheDir, e.g. test for
"file://$cacheDir" or, if you intend to allow any file:// path, use a prefix
match like file://*), leaving the rest of the block (WantMassQuery write,
clearStore, clearCacheCache, the nix-env grep lines and the prebuilt-only check)
unchanged so the local cache branch is executed when basicDownloadTests is
called with file://$cacheDir.
- Around line 55-59: The stopNixServe helper uses plain kill which sends SIGTERM
and causes wait to return 143 under set -e; change stopNixServe to send SIGINT
(e.g., kill -INT "$nixServePid") or catch/ignore the non-zero wait status so
restartNixServe won't fail, and ensure you unset nixServePid after waiting;
additionally fix the unreachable local-cache branch in basicDownloadTests by
changing the strict equality check against the literal "file://" to a prefix
check (e.g., test whether the passed URL starts with "file://") so the
WantMassQuery assertions actually run when basicDownloadTests is invoked with
"file://$cacheDir".
---
Outside diff comments:
In `@src/libstore/filetransfer.cc`:
- Around line 506-513: The condition controlling CURLOPT_ACCEPT_ENCODING is
inverted: enable transparent decompression only for fresh downloads (not when
resuming or when sending data). Change the check around curl_easy_setopt(req,
CURLOPT_ACCEPT_ENCODING, "") to require that requestRange is false and
request.data is false (i.e. if (!requestRange && !request.data)) so
CURLOPT_ACCEPT_ENCODING is set for normal downloads but skipped for uploads and
resumed byte-range requests; update the surrounding comment if needed to match
the corrected logic.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5bdb7899-9dfd-464c-860a-a5306c14dabe
📒 Files selected for processing (14)
- src/libfetchers/git-lfs-fetch.cc
- src/libfetchers/git-utils.cc
- src/libfetchers/include/nix/fetchers/git-utils.hh
- src/libstore/filetransfer.cc
- src/libstore/local-store.cc
- src/libstore/misc.cc
- src/libstore/nar-info.cc
- src/libutil/include/nix/util/deleter.hh
- src/libutil/include/nix/util/file-system.hh
- src/libutil/include/nix/util/meson.build
- src/nix/meson.build
- src/nix/package.nix
- src/nix/serve.cc
- tests/functional/binary-cache.sh
💤 Files with no reviewable changes (1)
- src/libfetchers/include/nix/fetchers/git-utils.hh
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/nix/serve.cc (1)
**52-52:** 💤 Low value

Consider wrapping `std::stoi` for better error messages.

`std::stoi` throws `std::invalid_argument` or `std::out_of_range` with generic messages if the input is malformed. Users would see an unclear exception instead of a helpful "invalid priority value" message.

💡 Suggested improvement:

```diff
     addFlag({
         .longName = "priority",
         .description = "Priority of this cache (overrides the store's default).",
         .labels = {"priority"},
-        .handler = {[this](std::string s) { priority = std::stoi(s); }},
+        .handler = {[this](std::string s) {
+            try {
+                priority = std::stoi(s);
+            } catch (std::exception &) {
+                throw Error("invalid priority value '%s'", s);
+            }
+        }},
     });
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/nix/serve.cc` at line 52, The handler lambda that sets priority uses std::stoi directly which can throw std::invalid_argument or std::out_of_range with unclear messages; change the lambda (the .handler = {[this](std::string s) { priority = std::stoi(s); }}) to call a small helper function (e.g., parsePriority or parse_int_with_context) that wraps std::stoi in a try/catch and rethrows a std::invalid_argument or std::runtime_error with a clear message like "invalid priority value: '<input>'" (and include range info if out_of_range), then assign the returned int to priority; this keeps the handler concise and provides user-friendly error messages.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/nix/serve.cc`:
- Around line 231-251: The code can leak the HTTP daemon if writeFile throws
after MHD_start_daemon succeeds; ensure the daemon is always stopped by
installing RAII or a scope guard around the daemon handle (the pointer returned
by MHD_start_daemon) so MHD_stop_daemon(daemon) is called on all exit paths, or
wrap the daemon in a std::unique_ptr with a custom deleter that calls
MHD_stop_daemon; move the portFile/writeFile call inside that guarded scope (or
perform it before waiting on interruptPromise) and keep
interruptPromise/interruptFuture and createInterruptCallback logic unchanged so
normal shutdown still uses the same flow.
---
Nitpick comments:
In `@src/nix/serve.cc`:
- Line 52: The handler lambda that sets priority uses std::stoi directly which
can throw std::invalid_argument or std::out_of_range with unclear messages;
change the lambda (the .handler = {[this](std::string s) { priority =
std::stoi(s); }}) to call a small helper function (e.g., parsePriority or
parse_int_with_context) that wraps std::stoi in a try/catch and rethrows a
std::invalid_argument or std::runtime_error with a clear message like "invalid
priority value: '<input>'" (and include range info if out_of_range), then assign
the returned int to priority; this keeps the handler concise and provides
user-friendly error messages.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 92a14714-053e-4622-922c-a859fe823497
📒 Files selected for processing (13)
- src/libfetchers/git-lfs-fetch.cc
- src/libfetchers/git-utils.cc
- src/libfetchers/include/nix/fetchers/git-utils.hh
- src/libstore/local-store.cc
- src/libstore/misc.cc
- src/libstore/nar-info.cc
- src/libutil/include/nix/util/deleter.hh
- src/libutil/include/nix/util/file-system.hh
- src/libutil/include/nix/util/meson.build
- src/nix/meson.build
- src/nix/package.nix
- src/nix/serve.cc
- tests/functional/binary-cache.sh
💤 Files with no reviewable changes (1)
- src/libfetchers/include/nix/fetchers/git-utils.hh
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/nix/serve.cc`:
- Around line 108-109: When calling store.queryPathInfo(*path) in serve.cc (the
lines that construct NarInfo ni(*info)), catch the exception for missing/stale
path-info and convert it to a not-found (404) response instead of letting it
bubble up as a 500; apply the same change to the second occurrence around the
Nar pre-read at the later call (the other queryPathInfo/NarInfo site). In
practice, wrap the queryPathInfo call in a try/catch (or check for an optional
result if available), and on the missing-path exception return the same
404-handling codepath used by the NAR pre-read branch so cache races map to 404
rather than 500.
- Around line 87-91: The 405 handler currently permits GET and HEAD but sets the
Allow header to only MHD_HTTP_METHOD_GET; update the Allow header value passed
to MHD_add_response_header (in the same block that checks method !=
MHD_HTTP_METHOD_GET && method != MHD_HTTP_METHOD_HEAD) to advertise both methods
(e.g. "GET, HEAD") instead of the single constant, so the call to
MHD_add_response_header(response.get(), "Allow", ...) lists both GET and HEAD.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@tests/functional/binary-cache.sh`:
- Around line 68-74: The wait loop for the port file can hang indefinitely;
modify the loop that checks [[ ! -e "$portFile" ]] and uses nixServePid so it
enforces a 30-second startup timeout using the bash SECONDS variable (e.g., set
SECONDS=0 before the loop), and if SECONDS exceeds 30 then kill -TERM (or kill
-9 if necessary) the process referenced by nixServePid, emit an error like "nix
serve timed out waiting for $portFile" to stderr, clean up, and return 1; ensure
the timeout branch mirrors existing error handling (uses nixServePid and
$portFile) so the test exits instead of hanging.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ad873722-8177-48a0-8afa-a3cdd99daec3
📒 Files selected for processing (1)
tests/functional/binary-cache.sh
These fields are in fact optional, and various binary cache implementations do not return them (e.g. when NARs are compressed on the fly).
This is a built-in binary cache server, similar to nix-serve. It uses libmicrohttpd.
If we have a negative path info cache entry for the path being added (e.g. due to a prior call to maybeQueryPathInfo()), then we need to use isValidPathUncached(), otherwise addToStore() will fail.
An operation like

```console
$ nix path-info --store https://cache.nixos.org /nix/store/2lwxwsaixnaq561hbq02v5236935ass8-coreutils-full-9.8 -vv
```

previously resulted in multiple downloads of the same `.narinfo`:

```
downloading 'https://cache.nixos.org/2lwxwsaixnaq561hbq02v5236935ass8.narinfo'...
querying info about '/nix/store/2lwxwsaixnaq561hbq02v5236935ass8-coreutils-full-9.8' on 'https://cache.nixos.org'...
downloading 'https://cache.nixos.org/2lwxwsaixnaq561hbq02v5236935ass8.narinfo'...
```

This is because queryMissing() calls isValidPath(), which causes a HEAD request to the binary cache. Subsequently we call queryValidPath(), which requires an additional GET request. It's better to do a GET request right away. (Of course, it's a bit questionable whether `nix path-info` should call queryMissing() in the first place...)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@packaging/dependencies.nix`:
- Around line 151-154: The patch replaces the base libmicrohttpd configureFlags,
which can drop defaults; update the override in
pkgs.libmicrohttpd.overrideDerivation so it preserves and extends the original
flags instead of overwriting them — read the existing flags from
old.configureFlags (or default to an empty list) and append "--without-gnutls"
to that list so libmicrohttpd keeps its default configureFlags while disabling
GnuTLS.
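The preserve-and-extend override this prompt describes might look like the following fragment, hedged as a sketch of what `packaging/dependencies.nix` could contain:

```nix
# Sketch only: keep libmicrohttpd's existing configureFlags and append the
# GnuTLS opt-out instead of replacing the whole list.
pkgs.libmicrohttpd.overrideDerivation (old: {
  configureFlags = (old.configureFlags or [ ]) ++ [ "--without-gnutls" ];
})
```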
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0b591e1c-ebf6-4ee5-98e0-4244e7b58046
📒 Files selected for processing (1)
packaging/dependencies.nix
Motivation

This provides a built-in binary cache server, similar to nix-serve. It's mostly intended for testing and to serve as a reference implementation of binary caches.

Context
Summary by CodeRabbit

- New Features
- Bug Fixes
- Refactor
- Tests
- Chores