Conversation
Co-Authored-By: Virgil <virgil@lethean.io>
ReadWorkspaceFile and WriteWorkspaceFile now encrypt/decrypt file content using XChaCha20-Poly1305 via the existing sigil pipeline. A 32-byte symmetric key is derived by SHA-256-hashing the workspace's stored private.key material so no new dependencies are required. Co-Authored-By: Virgil <virgil@lethean.io>
Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:

📝 Walkthrough

Removed legacy client implementations/tests and introduced consolidated Medium implementations (MemoryMedium, local.Medium, datanode.Medium).

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Closing — will create fresh PR from current dev state
Dev renamed datanode/local client files to better names and updated interfaces. Accept dev's versions throughout; remove old files that github/main still had as modify-vs-delete conflicts. Co-Authored-By: Virgil <virgil@lethean.io>
Actionable comments posted: 11
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
docs/index.md (1)
8-8: ⚠️ Potential issue | 🟡 Minor

Documentation import paths inconsistent with PR migration objectives.

The PR objective states this is a "module path migration from forge.lthn.ai to dappco.re", but the documentation still references `forge.lthn.ai/core/go-io` throughout. The import examples should be updated to use the new `dappco.re` paths to maintain consistency with the actual code changes.

📝 Suggested updates for import paths

```diff
-`forge.lthn.ai/core/go-io` is a storage abstraction library...
+`dappco.re/go/core/io` is a storage abstraction library...
```

And in the Quick Start section:

```diff
 import (
-	io "forge.lthn.ai/core/go-io"
-	"forge.lthn.ai/core/go-io/s3"
-	"forge.lthn.ai/core/go-io/node"
+	io "dappco.re/go/core/io"
+	"dappco.re/go/core/io/s3"
+	"dappco.re/go/core/io/node"
 )
```

Also applies to: 17-20
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/index.md` at line 8, Update all documentation import paths that still reference `forge.lthn.ai/core/go-io` to the new module path `dappco.re/core/go-io`; search the docs (including the Quick Start section and occurrences around the `forge.lthn.ai/core/go-io` string) and replace those import examples and import statements so they match the PR's module migration, ensuring any code snippets that import `Medium` or other symbols now use `dappco.re/core/go-io`.

s3/s3_test.go (1)
149-176: ⚠️ Potential issue | 🟡 Minor

ListObjectsV2 mock incorrectly sets IsTruncated when results are truncated.

When the number of contents exceeds `maxKeys`, the loop breaks early (lines 149-151), but `IsTruncated` is always set to `false` at line 175. This could mask pagination-related bugs in production code if tests rely on this mock for pagination testing.

🔧 Suggested fix

```diff
+	isTruncated := false
 	for _, k := range allKeys {
 		rest := core.TrimPrefix(k, prefix)
 		if delimiter != "" {
 			parts := core.SplitN(rest, delimiter, 2)
 			if len(parts) == 2 {
 				cp := core.Concat(prefix, parts[0], delimiter)
 				commonPrefixes[cp] = true
 				continue
 			}
 		}
 		if int32(len(contents)) >= maxKeys {
+			isTruncated = true
 			break
 		}
 		// ... rest of loop
 	}
 	// ...
 	return &awss3.ListObjectsV2Output{
 		Contents:       contents,
 		CommonPrefixes: cpSlice,
-		IsTruncated:    aws.Bool(false),
+		IsTruncated:    aws.Bool(isTruncated),
 	}, nil
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@s3/s3_test.go` around lines 149 - 176, The ListObjectsV2 mock currently always returns IsTruncated: aws.Bool(false) even when the loop breaks due to exceeding maxKeys; update the mock (the ListObjectsV2 implementation handling maxKeys, contents, commonPrefixes) to set IsTruncated to true when len(contents) >= maxKeys or when you detect an early break, and also populate NextContinuationToken (or ContinuationToken behavior used by the code) so callers can exercise pagination; adjust the return to conditionally set IsTruncated and include a continuation token string when truncated.
🧹 Nitpick comments (13)
node/node.go (1)
336-339: `WriteMode` ignores the mode parameter.

The mode parameter is silently ignored for in-memory nodes, which is reasonable since file permissions don't apply to in-memory storage. Consider adding a comment explaining this design decision to prevent confusion.

📝 Proposed documentation improvement

```diff
 // Example: _ = nodeTree.WriteMode("keys/private.key", key, 0600)
+// Note: mode is ignored for in-memory nodes as file permissions don't apply.
 func (node *Node) WriteMode(filePath, content string, mode fs.FileMode) error {
 	return node.Write(filePath, content)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/node.go` around lines 336 - 339, The WriteMode method on Node currently ignores the mode parameter (it simply calls Node.Write), which can confuse readers; update the Node.WriteMode method to include a clear comment stating that file permission modes are intentionally ignored for in-memory nodes (and that callers should use platform/filesystem-backed implementations if they need to persist modes), e.g., annotate the WriteMode function (and mention Node.Write) to document this design decision so future maintainers understand why the mode parameter is unused.

store/store.go (1)
13-14: Inconsistent sentinel error declaration.
`NotFoundError` uses `errors.New()` while `sigil/crypto_sigil.go` uses `core.E()` for its sentinel errors (e.g., `InvalidKeyError`, `CiphertextTooShortError`). Consider using `core.E()` for consistency across the codebase.

♻️ Proposed fix for consistency

```diff
-// Example: _, err := keyValueStore.Get("app", "theme")
-var NotFoundError = errors.New("key not found")
+// Example: _, err := keyValueStore.Get("app", "theme")
+// Example: errors.Is(err, store.NotFoundError)
+var NotFoundError = core.E("store.NotFoundError", "key not found", nil)
```

Note: You can then remove the `"errors"` import if it's no longer used elsewhere.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@store/store.go` around lines 13 - 14, Replace the sentinel error NotFoundError's declaration to use the project's core.E() helper for consistency with other sentinel errors (e.g., InvalidKeyError, CiphertextTooShortError) — update the declaration of NotFoundError to call core.E("key not found") and remove the "errors" import if it becomes unused; ensure references to NotFoundError elsewhere remain unchanged.

datanode/medium_test.go (1)
3-12: Consider aliasing the standard `io` import for consistency.

The coding guidelines suggest using `goio` for the standard library `io` package to avoid collision with the `dappco.re/go/core/io` package. While there's no actual collision here (only `coreio` is imported), maintaining the alias convention across all files aids consistency.

♻️ Optional: Use consistent import alias

```diff
 import (
-	"io"
+	goio "io"
 	"io/fs"
 	"testing"
```

Then update usages like `io.ReadAll` to `goio.ReadAll`.

As per coding guidelines: "Standard library `io` package must be aliased to avoid collision with the `forge.lthn.ai/core/go-io` package. Use `goio` for the standard library".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@datanode/medium_test.go` around lines 3 - 12, The standard library io import should be aliased to goio for consistency with the project's convention and to avoid confusion with coreio; update the import alias from "io" to "goio" alongside the existing coreio import, and then update all usages (e.g., io.ReadAll) in this file to use goio.ReadAll so references to coreio (coreio.*) remain unambiguous.

sigil/crypto_sigil.go (1)
265-268: Consider preserving the original decryption error for debugging.

The original error from `aead.Open()` is discarded. While this prevents information leakage about why decryption failed (a security consideration), it may hinder debugging legitimate issues. If this is intentional for security, consider adding a comment explaining the rationale.

💡 Optional: Add explanatory comment

```diff
 	obfuscated, err := aead.Open(nil, nonce, ciphertext, nil)
 	if err != nil {
+		// Intentionally discard original error to prevent oracle attacks
 		return nil, core.E("sigil.ChaChaPolySigil.Out", "decrypt ciphertext", DecryptionFailedError)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sigil/crypto_sigil.go` around lines 265 - 268, The code in ChaChaPolySigil.Out currently discards the original error from aead.Open; either preserve the original error for debugging by wrapping or annotating it into the returned core.E (e.g., include err.Error() or the error as a wrapped cause) so callers/logs can see the underlying cause, or if discarding is intentional for security, add a clear comment above the aead.Open error branch explaining that the raw decryption error is intentionally not exposed and why; locate the handling in the function where aead.Open is called and update the return or add the explanatory comment accordingly.

datanode/medium.go (1)
525-543: `removeFileLocked` causes O(n²) complexity for bulk deletions.

Each call to `removeFileLocked` collects all entries and rebuilds the entire `DataNode`, making it O(n) per deletion. When `DeleteAll` invokes `removeFileLocked` in a loop for multiple files, bulk operations become O(n²). Consider whether the underlying `borgdatanode.DataNode` package provides a more efficient deletion API that could avoid full rebuilds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@datanode/medium.go` around lines 525 - 543, removeFileLocked currently rebuilds the entire DataNode for each deletion by calling collectAllLocked and recreating borgdatanode.New every time, causing O(n²) on bulk deletes; instead, change the approach so DeleteAll (or callers) collect the full entries once and perform a single rebuild or leverage a borgdatanode deletion API if available. Specifically, update logic around removeFileLocked / DeleteAll to either (a) accept a slice/set of targets and in one pass call medium.readFileLocked for non-targets and populate a single newDataNode before assigning medium.dataNode, or (b) if borgdatanode exposes Delete/Remove methods, iterate targets and call that API on medium.dataNode without full rebuilds; reference functions/types: removeFileLocked, collectAllLocked, medium.readFileLocked, medium.dataNode, borgdatanode.New, and DeleteAll to locate and change the implementation.

store/medium.go (1)
174-179: List order is non-deterministic.

The iteration over `all` (a `map[string]string`) has non-deterministic order in Go. If consistent ordering is important for callers, consider sorting the entries by key before returning.

♻️ Suggested fix for deterministic ordering

```diff
+import "sort"
+
 func (medium *Medium) List(entryPath string) ([]fs.DirEntry, error) {
 	// ... existing code ...
 	all, err := medium.keyValueStore.GetAll(group)
 	if err != nil {
 		return nil, err
 	}
+	keys := make([]string, 0, len(all))
+	for key := range all {
+		keys = append(keys, key)
+	}
+	sort.Strings(keys)
 	var entries []fs.DirEntry
-	for key, value := range all {
+	for _, key := range keys {
+		value := all[key]
 		entries = append(entries, &keyValueDirEntry{name: key, size: int64(len(value))})
 	}
 	return entries, nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@store/medium.go` around lines 174 - 179, The iteration over the map variable `all` produces non-deterministic ordering; change the code that builds `entries` (using the `keyValueDirEntry` type) to collect and sort the map keys first (e.g. into a []string), then iterate the sorted keys to append entries so the returned []fs.DirEntry is deterministic; update the function that returns `entries` to use the sorted-key order.

s3/s3.go (1)
163-166: Document that `WriteMode` ignores the mode parameter.

S3 doesn't support POSIX file permissions, so ignoring the `mode` parameter is correct. However, this behaviour should be documented for callers who might expect the mode to be preserved.

📝 Suggested documentation

```diff
-// Example: _ = medium.WriteMode("keys/private.key", key, 0600)
+// Example: _ = medium.WriteMode("keys/private.key", key, 0600)
+// Note: S3 does not support file permissions; the mode parameter is ignored.
 func (medium *Medium) WriteMode(filePath, content string, mode fs.FileMode) error {
 	return medium.Write(filePath, content)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@s3/s3.go` around lines 163 - 166, Update the comment for the WriteMode method to explicitly state that the mode parameter is ignored because S3 does not support POSIX file permissions; reference the Medium.Write implementation (WriteMode calls medium.Write(filePath, content)) and clarify for callers that permissions are not preserved and the mode parameter is only present for API compatibility. Ensure the doc comment sits immediately above func (medium *Medium) WriteMode(filePath, content string, mode fs.FileMode) error so tools and users see the behavior.

medium_test.go (1)
427-432: Misleading variable name.

The variable `memoryMedium` on line 430 is assigned from `io.Local`, which is a local filesystem medium, not an in-memory medium. Consider renaming to `localMedium` or simply `medium` for clarity.

♻️ Suggested fix

```diff
 func TestIO_LocalGlobal_Good(t *testing.T) {
 	assert.NotNil(t, Local, "io.Local should be initialised")
-	var memoryMedium = Local
-	assert.NotNil(t, memoryMedium)
+	var medium = Local
+	assert.NotNil(t, medium)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@medium_test.go` around lines 427 - 432, The variable name memoryMedium in TestIO_LocalGlobal_Good is misleading because it references Local (a local filesystem medium); rename memoryMedium to a clearer name such as localMedium or simply medium across the test function (keep the existing assertions and assignments to Local) so the variable name matches the actual Medium being tested (update all uses in TestIO_LocalGlobal_Good to the new identifier and run tests).

workspace/service.go (2)
283-289: Unusual path separator detection.

The separator detection via `CORE_PATH_SEPARATOR` and `DS` environment variables with a `/` fallback is non-standard. Consider using `filepath.Separator` from the standard library, which correctly handles platform differences.

♻️ Suggested fix

```diff
+import "path/filepath"
+
 func joinPathWithinRoot(root string, parts ...string) (string, error) {
 	candidate := core.Path(append([]string{root}, parts...)...)
-	separator := core.Env("CORE_PATH_SEPARATOR")
-	if separator == "" {
-		separator = core.Env("DS")
-	}
-	if separator == "" {
-		separator = "/"
-	}
+	separator := string(filepath.Separator)
 	if candidate == root || core.HasPrefix(candidate, root+separator) {
 		return candidate, nil
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspace/service.go` around lines 283 - 289, Replace the non-standard env-based path separator logic in workspace/service.go (the variable named "separator") with the platform-correct value from the standard library: import "path/filepath" and set separator using filepath.Separator (convert to string if the code expects a string) instead of reading CORE_PATH_SEPARATOR or DS or falling back to "/"; update any dependent code that uses separator accordingly so it uses the standardized value from filepath.Separator.
191-192: Consider using a proper key derivation function.

The encryption key is derived by directly hashing the private key with SHA-256. While this produces a 32-byte key suitable for XChaCha20-Poly1305, using a purpose-built key derivation function like HKDF would provide better cryptographic hygiene by incorporating a salt and context string.
🔐 Example using HKDF

```go
import "golang.org/x/crypto/hkdf"

// In workspaceCipherSigil:
hkdfReader := hkdf.New(sha256.New, []byte(rawKey), nil, []byte("workspace-file-encryption"))
derived := make([]byte, 32)
if _, err := io.ReadFull(hkdfReader, derived); err != nil {
	return nil, core.E(operation, "failed to derive key", err)
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@workspace/service.go` around lines 191 - 192, Replace the direct SHA-256 hashing key derivation in workspaceCipherSigil (where derived := sha256.Sum256([]byte(rawKey)) and sigil.NewChaChaPolySigil is called) with an HKDF-based derivation: use golang.org/x/crypto/hkdf with sha256.New, provide the rawKey as input keying material, an optional salt (or nil) and a context/info string like "workspace-file-encryption", read 32 bytes into a derived slice via io.ReadFull, then pass that slice to sigil.NewChaChaPolySigil; also add necessary imports (hkdf, io, crypto/sha256) and return an error if io.ReadFull fails.

sigil/sigils.go (1)
206-207: Consider checking the error from `hasher.Write`.

The error return from `hasher.Write(data)` is ignored. While hash implementations in the standard library typically don't return errors (they panic on invalid state), the `goio.Writer` interface contract includes an error return that should ideally be handled for robustness.

🛡️ Suggested fix

```diff
-	hasher.Write(data)
-	return hasher.(interface{ Sum([]byte) []byte }).Sum(nil), nil
+	if _, err := hasher.Write(data); err != nil {
+		return nil, core.E("sigil.HashSigil.In", "hash write failed", err)
+	}
+	return hasher.(interface{ Sum([]byte) []byte }).Sum(nil), nil
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sigil/sigils.go` around lines 206 - 207, The call to hasher.Write(data) ignores its error; modify the code around hasher.Write in sigils.go so you capture and handle the returned error (e.g., err := hasher.Write(data); if err != nil { return nil, err }) before calling hasher.(interface{ Sum([]byte) []byte }).Sum(nil); keep the same hasher variable and Sum type assertion but return the write error to the caller for robustness.

sqlite/sqlite.go (1)
645-654: `sqliteWriteCloser.Close` uses hardcoded mode instead of preserving the intended mode.

When files are created via `Create` or `WriteStream`, the `Close` method hardcodes mode 420 (0644). If a caller expects `WriteMode` semantics to be honoured through the writer interface, this would not work as expected.

Consider storing the mode in `sqliteWriteCloser` and using it in `Close`.

♻️ Suggested fix

```diff
 type sqliteWriteCloser struct {
 	medium *Medium
 	path   string
 	data   []byte
+	mode   int
 }

 func (medium *Medium) Create(filePath string) (goio.WriteCloser, error) {
 	key := normaliseEntryPath(filePath)
 	if key == "" {
 		return nil, core.E("sqlite.Create", "path is required", fs.ErrInvalid)
 	}
 	return &sqliteWriteCloser{
 		medium: medium,
 		path:   key,
+		mode:   420, // 0644 default
 	}, nil
 }

 func (writer *sqliteWriteCloser) Close() error {
 	_, err := writer.medium.database.Exec(
-		`INSERT INTO `+writer.medium.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, 420, FALSE, ?)
+		`INSERT INTO `+writer.medium.table+` (path, content, mode, is_dir, mtime) VALUES (?, ?, ?, FALSE, ?)
 		ON CONFLICT(path) DO UPDATE SET content = excluded.content, is_dir = FALSE, mtime = excluded.mtime`,
-		writer.path, writer.data, time.Now().UTC(),
+		writer.path, writer.data, writer.mode, time.Now().UTC(),
 	)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sqlite/sqlite.go` around lines 645 - 654, sqliteWriteCloser.Close currently hardcodes mode 420 (0644) which ignores modes set by Create/WriteStream/WriteMode; update the sqliteWriteCloser struct to include a mode field (e.g., mode os.FileMode or int) that is set when the writer is created by Create or WriteStream, then change sqliteWriteCloser.Close to use that stored mode in the INSERT/UPDATE instead of 420 (with a sensible default if mode is zero). Locate the writer creation sites (functions Create, WriteStream, or where sqliteWriteCloser instances are constructed) to populate the new mode field and ensure sqliteWriteCloser.Close uses that field in its Exec call.

docs/RFC.md (1)
68-83: Remove duplicated method documentation.

Lines 68-83 document `Read` and `Write` again with "Alias for" descriptions, but these are the same methods already documented at lines 26-41. This appears to be a copy-paste error. The same pattern repeats throughout the document for `MemoryMedium` (lines 442-455), `local.Medium` (lines 786-799), `node.Node` (lines 961-976), `store.Medium` (lines 1273-1286), `sqlite.Medium` (lines 1461-1474), `s3.Medium` (lines 1656-1670), and `datanode.Medium` (lines 1872-1885).

Consider removing these duplicate sections to avoid confusion.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/RFC.md` around lines 68 - 83, Remove the duplicated "Alias for `Read`" and "Alias for `Write`" documentation blocks that were copy-pasted; keep the original canonical method docs (the first occurrences for Read and Write) and delete the repeated alias sections found under the types MemoryMedium, local.Medium, node.Node, store.Medium, sqlite.Medium, s3.Medium, and datanode.Medium; ensure all references to the Read and Write methods remain accurate on each type (e.g., MemoryMedium.Read/Write, local.Medium.Read/Write, node.Node.Read/Write, store.Medium.Read/Write, sqlite.Medium.Read/Write, s3.Medium.Read/Write, datanode.Medium.Read/Write) but remove the redundant duplicate paragraphs to avoid confusion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@datanode/medium.go`:
- Around line 410-429: Append currently holds medium.lock.RLock() and then calls
IsFile(filePath) which itself acquires the lock; to avoid this fragile
double-lock pattern create a lock-free helper isFileLocked(filePath) that
assumes the caller already holds the appropriate lock and performs the same
check IsFile does without acquiring any locks. In Append, replace the call to
IsFile(filePath) with isFileLocked(filePath) while keeping the RLock/unlock and
keep using readFileLocked(filePath) as before; ensure isFileLocked reads the
same internal state (e.g., file map or metadata) that IsFile uses so behavior is
identical.
In `@docs/RFC-CORE-008-AGENT-EXPERIENCE.md`:
- Line 1: The document header "# RFC-025: Agent Experience (AX) Design
Principles" does not match the filename RFC-CORE-008-AGENT-EXPERIENCE.md; update
the title or the filename so they consistently reference the same RFC identifier
(e.g., change the heading to "# RFC-CORE-008: Agent Experience (AX) Design
Principles" or rename the file to RFC-025-AGENT-EXPERIENCE.md) and search for
any internal references or metadata inside the document that mention "RFC-025"
or "RFC-CORE-008" to update them accordingly (look for the heading string and
any in-text RFC identifiers).
In `@io.go`:
- Around line 274-286: The MemoryMedium currently allows a path to be both a
file and a directory; update MemoryMedium.WriteMode (and any Write/Create code
paths) and MemoryMedium.EnsureDir to detect and reject file/directory
collisions: before writing a file in WriteMode, call
ensureAncestorDirectories(path) but also verify none of the ancestors are stored
as a file (check medium.fileContents for any ancestor of path) and return an
error if so; before creating/ensuring a directory in EnsureDir, verify the
target path is not already present as a file in medium.fileContents and that no
descendant entries in medium.fileContents exist that would be shadowed by making
this path a directory, returning an error on collision; reuse
ensureAncestorDirectories/medium.fileContents/medium.directories symbols to
locate the logic and ensure behavior mirrors real local.Medium semantics.
- Around line 400-413: When moving files in the directory-rename branch also
copy the saved file mode so moved files keep their permissions: when iterating
filesToMove (where you already copy medium.fileContents and
medium.modificationTimes) add copying/deleting entries in medium.fileModes for
oldFilePath -> newFilePath. Also update Create (the writer-seeding code that
currently seeds new writers with 0644) to detect an existing file's mode in
medium.fileModes and seed the writer with that existing mode instead of always
using 0644 so truncating an existing 0600 file preserves its mode on Close.
In `@local/medium_test.go`:
- Around line 3-12: Rename the standard library import of io to use the alias
goio in the import block and update all standard-io usages accordingly: change
the import entry `io` to `goio` and replace calls to `io.LimitReader`,
`io.ReadAll`, and `io.Copy` with `goio.LimitReader`, `goio.ReadAll`, and
`goio.Copy` respectively (these appear in the test file where those symbols are
used). Ensure you do not alter references to the project package `core/io` (keep
those as-is) and run tests to verify no import collisions remain.
In `@local/medium.go`:
- Around line 200-246: validatePath currently resolves symlinks but returns a
plain path string which is raceable because callers (Read, WriteMode, etc.) pass
that path into unrestrictedFileSystem; fix by switching to a dirfd/openat-style
approach: change validatePath to return a safely opened file descriptor (or a
sequence of directory fds) or provide a helper that opens the final target with
O_NOFOLLOW/O_PATH and verifies via fstat that it's within medium.filesystemRoot
before performing IO, then update Read, WriteMode, Delete*/Rename to use that
fd-based API instead of unrestrictedFileSystem with raw paths; use
resolveSymlinksPath only for traversal, and ensure the final open uses no-follow
semantics and ownership checks to prevent TOCTOU symlink swaps.
- Around line 356-376: The Delete and DeleteAll paths allow removing the sandbox
root because validatePath("") and validatePath(".") resolve to
medium.filesystemRoot; update the guards so that any attempt to delete the
resolved path equal to medium.filesystemRoot is refused. Specifically, in
Medium.Delete and Medium.DeleteAll (or by enhancing isProtectedPath) check if
resolvedPath == medium.filesystemRoot (and/or the original path is "" or ".")
and return a core.E error like the existing protected-path cases; ensure the
same protection is applied before calling
unrestrictedFileSystem.Delete/DeleteAll to prevent the sandbox deleting its own
root.
In `@s3/s3.go`:
- Around line 155-156: The PutObjectInput.Body is being set to
core.NewReader(content) (an io.Reader) but awss3.PutObjectInput.Body requires an
io.ReadCloser; wrap the reader with goio.NopCloser. Update the PutObject call
that sets Body: core.NewReader(content) to Body:
goio.NopCloser(core.NewReader(content)) and ensure goio is imported so the
change mirrors usages in store/medium.go and s3_test.go.
In `@sigil/sigils.go`:
- Around line 89-106: GzipSigil.In currently always returns buffer.Bytes() even
when a custom sigil.outputWriter was provided (so buffer is unused and empty);
update the function so after gzipWriter.Close() it returns the actual written
bytes: if outputWriter is the local buffer return buffer.Bytes(); else if
outputWriter implements a Bytes() []byte method (e.g., *bytes.Buffer) call that
and return its result; otherwise return nil with a clear error (or document the
behavior) indicating that a custom outputWriter was supplied and no bytes are
available to return. Ensure you reference GzipSigil.In, sigil.outputWriter,
buffer and gzipWriter when making the change.
In `@sqlite/sqlite.go`:
- Around line 340-343: The SQL contains a redundant WHERE clause using "WHERE
path LIKE ? OR path LIKE ?" with identical arguments; update the Query call (the
medium.database.Query invocation that sets rows, err) to use a single condition
"WHERE path LIKE ?" and pass only one argument (prefix+"%"), or adjust the
second placeholder to the intended different pattern if there was a different
intention; ensure the medium.table interpolation remains the same and remove the
extra placeholder/argument to match the revised SQL.
- Around line 34-39: The table name is being concatenated into SQL (from
Options.Table) without validation in normaliseTableName; update
normaliseTableName to validate the provided table name against a strict
whitelist regex (e.g., only letters, digits and underscores) and either return
the validated name or fall back to the default "files"/return an error; also use
the validated name wherever you build SQL strings (search for occurrences that
concatenate Options.Table) to ensure only verified names are used, or
alternatively wrap the validated name in proper identifier quoting after
validation. Ensure the function name normaliseTableName and the Options.Table
usages are the points of change.
state that the mode parameter is ignored because S3 does not support POSIX file
permissions; reference the Medium.Write implementation (WriteMode calls
medium.Write(filePath, content)) and clarify for callers that permissions are
not preserved and the mode parameter is only present for API compatibility.
Ensure the doc comment sits immediately above func (medium *Medium)
WriteMode(filePath, content string, mode fs.FileMode) error so tools and users
see the behavior.
In `@sigil/crypto_sigil.go`:
- Around line 265-268: The code in ChaChaPolySigil.Out currently discards the
original error from aead.Open; either preserve the original error for debugging
by wrapping or annotating it into the returned core.E (e.g., include err.Error()
or the error as a wrapped cause) so callers/logs can see the underlying cause,
or if discarding is intentional for security, add a clear comment above the
aead.Open error branch explaining that the raw decryption error is intentionally
not exposed and why; locate the handling in the function where aead.Open is
called and update the return or add the explanatory comment accordingly.
In `@sigil/sigils.go`:
- Around line 206-207: The call to hasher.Write(data) ignores its error; modify
the code around hasher.Write in sigils.go so you capture and handle the returned
error (e.g., err := hasher.Write(data); if err != nil { return nil, err })
before calling hasher.(interface{ Sum([]byte) []byte }).Sum(nil); keep the same
hasher variable and Sum type assertion but return the write error to the caller
for robustness.
In `@sqlite/sqlite.go`:
- Around line 645-654: sqliteWriteCloser.Close currently hardcodes mode 420
(0644) which ignores modes set by Create/WriteStream/WriteMode; update the
sqliteWriteCloser struct to include a mode field (e.g., mode os.FileMode or int)
that is set when the writer is created by Create or WriteStream, then change
sqliteWriteCloser.Close to use that stored mode in the INSERT/UPDATE instead of
420 (with a sensible default if mode is zero). Locate the writer creation sites
(functions Create, WriteStream, or where sqliteWriteCloser instances are
constructed) to populate the new mode field and ensure sqliteWriteCloser.Close
uses that field in its Exec call.
In `@store/medium.go`:
- Around line 174-179: The iteration over the map variable `all` produces
non-deterministic ordering; change the code that builds `entries` (using the
`keyValueDirEntry` type) to collect and sort the map keys first (e.g. into a
[]string), then iterate the sorted keys to append entries so the returned
[]fs.DirEntry is deterministic; update the function that returns `entries` to
use the sorted-key order.
In `@store/store.go`:
- Around line 13-14: Replace the sentinel error NotFoundError's declaration to
use the project's core.E() helper for consistency with other sentinel errors
(e.g., InvalidKeyError, CiphertextTooShortError) — update the declaration of
NotFoundError to call core.E("key not found") and remove the "errors" import if
it becomes unused; ensure references to NotFoundError elsewhere remain
unchanged.
In `@workspace/service.go`:
- Around line 283-289: Replace the non-standard env-based path separator logic
in workspace/service.go (the variable named "separator") with the
platform-correct value from the standard library: import "path/filepath" and set
separator using filepath.Separator (convert to string if the code expects a
string) instead of reading CORE_PATH_SEPARATOR or DS or falling back to "/";
update any dependent code that uses separator accordingly so it uses the
standardized value from filepath.Separator.
- Around line 191-192: Replace the direct SHA-256 hashing key derivation in
workspaceCipherSigil (where derived := sha256.Sum256([]byte(rawKey)) and
sigil.NewChaChaPolySigil is called) with an HKDF-based derivation: use
golang.org/x/crypto/hkdf with sha256.New, provide the rawKey as input keying
material, an optional salt (or nil) and a context/info string like
"workspace-file-encryption", read 32 bytes into a derived slice via io.ReadFull,
then pass that slice to sigil.NewChaChaPolySigil; also add necessary imports
(hkdf, io, crypto/sha256) and return an error if io.ReadFull fails.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: d41006d8-d0cc-48ac-85ea-e22e6160a56f
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (39)
- CLAUDE.md
- bench_test.go
- client_test.go
- datanode/client.go
- datanode/client_test.go
- datanode/medium.go
- datanode/medium_test.go
- doc.go
- docs/RFC-CORE-008-AGENT-EXPERIENCE.md
- docs/RFC.md
- docs/architecture.md
- docs/development.md
- docs/index.md
- go.mod
- io.go
- local/client.go
- local/client_test.go
- local/medium.go
- local/medium_test.go
- medium_test.go
- node/node.go
- node/node_test.go
- s3/s3.go
- s3/s3_test.go
- sigil/crypto_sigil.go
- sigil/crypto_sigil_test.go
- sigil/sigil.go
- sigil/sigil_test.go
- sigil/sigils.go
- sqlite/sqlite.go
- sqlite/sqlite_test.go
- store/doc.go
- store/medium.go
- store/medium_test.go
- store/store.go
- store/store_test.go
- workspace/doc.go
- workspace/service.go
- workspace/service_test.go
💤 Files with no reviewable changes (5)
- local/client_test.go
- client_test.go
- datanode/client_test.go
- local/client.go
- datanode/client.go
- datanode: add isFileLocked() helper to prevent RLock re-entry deadlock in Append
- io: MemoryMedium WriteMode rejects ancestor-is-file collision; EnsureDir rejects target-is-file collision
- io: copy fileModes during directory rename
- local: guard Delete/DeleteAll against removing sandbox root
- local: add TOCTOU TODO comment on validatePath symlink loop
- local: alias stdlib io→goio in medium_test.go
- datanode: alias stdlib io→goio in medium_test.go
- sqlite: add isValidTableName() whitelist to prevent table-name SQL injection in New()
- sqlite: remove duplicate WHERE clause args in List query
- sqlite: add mode field to sqliteWriteCloser; use it in Close (was hardcoded 420)
- sigil: GzipSigil.In returns nil when custom outputWriter is used (buffer was empty)
- sigil: capture hasher.Write error in HashSigil.In
- sigil: add comment explaining DecryptionFailedError hides raw AEAD error intentionally
- s3: add comment explaining WriteMode ignores mode (no POSIX on S3)
- s3_test: ListObjectsV2 mock sets IsTruncated+NextContinuationToken when maxKeys exceeded
- node: add comment explaining WriteMode ignores mode for in-memory nodes
- store: sort keys before building List entries for deterministic output
- store: add explanatory comment on NotFoundError sentinel
- workspace: replace sha256.Sum256 key derivation with HKDF (RFC 5869)
- docs: fix RFC-CORE-008 header (was RFC-025)
- docs: update import paths from forge.lthn.ai/core/go-io to dappco.re/go/core/io
- docs/RFC.md: remove duplicate Read/Write alias doc blocks

Co-Authored-By: Virgil <virgil@lethean.io>
Actionable comments posted: 7
🧹 Nitpick comments (11)
sigil/crypto_sigil.go (1)
212-212: Receiver `sigil` shadows the package name.

Using `sigil` as the receiver name can cause confusion since it shadows the package name, making it impossible to reference other package-level symbols within these methods without qualification issues.

♻️ Consider renaming receiver

```diff
-func (sigil *ChaChaPolySigil) In(data []byte) ([]byte, error) {
+func (s *ChaChaPolySigil) In(data []byte) ([]byte, error) {
```

Apply the same pattern to the `Out` method.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sigil/crypto_sigil.go` at line 212, The method receiver for ChaChaPolySigil currently uses the identifier "sigil", which shadows the package name; rename the receiver to a non-conflicting short name (e.g., "s" or "cps") in both the In(data []byte) ([]byte, error) and Out(...) methods so package-level symbols remain addressable; update all uses of the receiver within those methods to the new name and ensure method signatures on the ChaChaPolySigil type remain unchanged.node/node.go (6)
403-417: `Rename` does not support directory renaming.

The current implementation only renames individual files. If `oldPath` is a directory (inferred from being a prefix of file paths), the method returns a "path not found" error. This differs from typical filesystem behaviour, where renaming a directory renames all contained files. Consider documenting this limitation or implementing directory rename support by updating the prefix of all affected file paths.

♻️ Example implementation supporting directory rename

```diff
 func (node *Node) Rename(oldPath, newPath string) error {
 	oldPath = core.TrimPrefix(oldPath, "/")
 	newPath = core.TrimPrefix(newPath, "/")
+	// Handle file rename
 	file, ok := node.files[oldPath]
-	if !ok {
-		return core.E("node.Rename", core.Concat("path not found: ", oldPath), fs.ErrNotExist)
+	if ok {
+		file.name = newPath
+		node.files[newPath] = file
+		delete(node.files, oldPath)
+		return nil
 	}
-	file.name = newPath
-	node.files[newPath] = file
-	delete(node.files, oldPath)
+	// Handle directory rename
+	prefix := oldPath + "/"
+	found := false
+	toRename := make(map[string]string)
+	for filePath := range node.files {
+		if core.HasPrefix(filePath, prefix) {
+			newFilePath := newPath + "/" + core.TrimPrefix(filePath, prefix)
+			toRename[filePath] = newFilePath
+			found = true
+		}
+	}
+	if !found {
+		return core.E("node.Rename", core.Concat("path not found: ", oldPath), fs.ErrNotExist)
+	}
+	for old, new := range toRename {
+		f := node.files[old]
+		f.name = new
+		node.files[new] = f
+		delete(node.files, old)
+	}
 	return nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/node.go` around lines 403 - 417, Node.Rename currently only handles exact file keys and returns "path not found" for directories; update it to detect when oldPath is a directory prefix and rename all affected entries. In the Node.Rename function, after trimming prefixes and checking node.files[oldPath], if not found scan node.files for keys with the oldPath+"/" prefix (or oldPath=="" special-case), compute new keys by replacing that prefix with newPath+"/", update each File's name (file.name) and insert into node.files under the new key, and delete the old keys; ensure exact-file renames still work and preserve behavior for non-existent paths by returning fs.ErrNotExist when no matching keys are found.
25-27: Consider documenting thread-safety guarantees.

The `Node` struct uses a plain `map` without synchronisation. If instances are accessed concurrently from multiple goroutines, this will cause data races. If single-threaded use is expected, consider adding a doc comment clarifying that `Node` is not safe for concurrent use.

📝 Suggested documentation

```diff
 // Example: nodeTree := node.New()
 // Example: nodeTree.AddData("config/app.yaml", []byte("port: 8080"))
 // Example: snapshot, _ := nodeTree.ToTar()
 // Example: restored, _ := node.FromTar(snapshot)
+//
+// Node is not safe for concurrent use. If concurrent access is required,
+// callers must provide their own synchronisation.
 type Node struct {
 	files map[string]*dataFile
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/node.go` around lines 25 - 27, The Node struct uses a plain map (files map[string]*dataFile) which is not safe for concurrent access; either document that Node is not goroutine-safe by adding a package-level or type doc comment stating it must be used from a single goroutine, or make it safe by adding synchronization (e.g., add a mutex field like sync.RWMutex to Node and use it to guard all accesses to files in methods that read or write the map such as any constructors, getters, setters, or mutators). Ensure references to Node and files in all methods are updated to acquire the lock appropriately or that the doc comment clearly states the single-threaded expectation.
177-196: `CopyFile` hardcodes writes to `coreio.Local`, breaking the `Medium` abstraction.

This method copies from the in-memory node directly to the local filesystem via `coreio.Local.WriteMode()`. While this may be intentional for export functionality, it's surprising API behaviour for a type that implements `coreio.Medium`. Consider either:

- Renaming to `ExportFile` to clarify intent, or
- Accepting a target `Medium` parameter (similar to `CopyTo`)

Additionally, line 189 is unreachable: if `node.files[sourcePath]` returns false (line 181) but `node.Stat(sourcePath)` succeeds (line 183), the path must be a directory (based on how `Stat` infers directories), so `info.IsDir()` at line 186 would always be true.

♻️ Suggested simplification removing dead code

```diff
 func (node *Node) CopyFile(sourcePath, destinationPath string, permissions fs.FileMode) error {
 	sourcePath = core.TrimPrefix(sourcePath, "/")
 	file, ok := node.files[sourcePath]
 	if !ok {
-		info, err := node.Stat(sourcePath)
-		if err != nil {
-			return core.E("node.CopyFile", core.Concat("source not found: ", sourcePath), fs.ErrNotExist)
-		}
-		if info.IsDir() {
-			return core.E("node.CopyFile", core.Concat("source is a directory: ", sourcePath), fs.ErrInvalid)
-		}
-		return core.E("node.CopyFile", core.Concat("source not found: ", sourcePath), fs.ErrNotExist)
+		if node.IsDir(sourcePath) {
+			return core.E("node.CopyFile", core.Concat("source is a directory: ", sourcePath), fs.ErrInvalid)
+		}
+		return core.E("node.CopyFile", core.Concat("source not found: ", sourcePath), fs.ErrNotExist)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/node.go` around lines 177 - 196, The CopyFile method currently writes directly to coreio.Local via coreio.Local.WriteMode, violating the Medium abstraction and contains a dead branch after node.Stat(); update CopyFile to either be renamed to ExportFile to document local-only behavior or change its signature to accept a target coreio.Medium (like CopyTo does) and use that medium's Write/WriteMode method instead of coreio.Local.WriteMode; while doing this, remove the unreachable branch (the else case after node.Stat() where info.IsDir() is checked and the subsequent duplicate "source not found" error) and ensure you validate the source exists and is not a directory using node.files and node.Stat in a single clear control flow (referencing CopyFile, node.files, node.Stat, coreio.Local.WriteMode, and CopyTo).
191-194: Inconsistent error construction.

Line 193 returns a raw `&fs.PathError{}` while other methods in this file wrap errors with `core.E()`. For consistency with the coding guidelines and the rest of the file, consider wrapping this error.

♻️ Suggested fix for consistency

```diff
 	parent := core.PathDir(destinationPath)
 	if parent != "." && parent != "" && parent != destinationPath && !coreio.Local.IsDir(parent) {
-		return &fs.PathError{Op: "copyfile", Path: destinationPath, Err: fs.ErrNotExist}
+		return core.E("node.CopyFile", core.Concat("destination parent not found: ", parent), fs.ErrNotExist)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/node.go` around lines 191 - 194, The code returns a raw &fs.PathError{...} (constructed in the block using parent := core.PathDir(destinationPath) and the !coreio.Local.IsDir(parent) check); change that to return the error wrapped with core.E(...) to match the file's error-wrapping convention (e.g., return core.E(&fs.PathError{Op: "copyfile", Path: destinationPath, Err: fs.ErrNotExist})). Ensure you keep the same PathError construction but wrap it with core.E so it is consistent with other methods in this file.
545-547: Double-wrapped error may complicate error handling.

The error is wrapped with `core.E()`, which itself wraps a `*fs.PathError`. This double-wrapping could make `errors.Is()` or `errors.As()` checks for `*fs.PathError` behave unexpectedly depending on how `core.E` implements error unwrapping.

♻️ Consider using just one wrapping style

```diff
 func (directory *dirFile) Read([]byte) (int, error) {
-	return 0, core.E("node.dirFile.Read", core.Concat("cannot read directory: ", directory.path), &fs.PathError{Op: "read", Path: directory.path, Err: fs.ErrInvalid})
+	return 0, core.E("node.dirFile.Read", core.Concat("cannot read directory: ", directory.path), fs.ErrInvalid)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/node.go` around lines 545 - 547, The dirFile.Read implementation double-wraps the error (core.E wrapping a *fs.PathError) which breaks callers relying on errors.Is/As; change dirFile.Read to return the underlying *fs.PathError directly (i.e., construct and return &fs.PathError{Op: "read", Path: directory.path, Err: fs.ErrInvalid}) instead of wrapping it with core.E, or if you prefer to keep core.E, adjust the call so core.E is given the fs.PathError as the cause in a manner that preserves Unwrap semantics; locate function dirFile.Read and update its return to expose the fs.PathError unwrapped.
152-159: Consider filtering empty strings from the Split result to ensure accurate depth calculation.

The depth calculation at line 155 relies on `len(core.Split(relativePath, "/"))`. Since `core.Split` can return empty strings (as evidenced by handling in `local/medium.go`), consecutive slashes in `relativePath` would artificially inflate the depth count. Although `fs.WalkDir` typically normalises paths and tests pass with the current implementation, filtering empty strings would make the depth calculation more robust:

```go
depth := len(filter(core.Split(relativePath, "/"), func(s string) bool { return s != "" }))
```

Alternatively, use a path depth utility that handles edge cases explicitly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/node.go` around lines 152 - 159, The depth calculation using len(core.Split(relativePath, "/")) can be inflated by empty strings from consecutive slashes; update the block inside the walk (where walkResult, options.MaxDepth, entry, entryPath and root are checked) to split relativePath with core.Split, filter out empty string elements from the resulting slice, then compute depth as the length of the filtered slice before comparing to options.MaxDepth (or replace with a path-depth helper that normalises slashes); ensure you reference the same variables (relativePath, core.Split, depth, options.MaxDepth) so the SkipDir decision uses the filtered count.

store/medium.go (1)
146-164: Direct database access breaks encapsulation.

`List("")` directly queries `medium.keyValueStore.database` instead of using `KeyValueStore` methods. This bypasses the abstraction layer and couples `Medium` to the internal implementation of `KeyValueStore`. Consider adding a `ListGroups() ([]string, error)` method to `KeyValueStore` to maintain proper encapsulation.

♻️ Suggested refactor

Add to `store/store.go`:

```go
// Example: groups, _ := keyValueStore.ListGroups()
func (keyValueStore *KeyValueStore) ListGroups() ([]string, error) {
	rows, err := keyValueStore.database.Query("SELECT DISTINCT group_name FROM entries ORDER BY group_name")
	if err != nil {
		return nil, core.E("store.ListGroups", "query groups", err)
	}
	defer rows.Close()
	var groups []string
	for rows.Next() {
		var groupName string
		if err := rows.Scan(&groupName); err != nil {
			return nil, core.E("store.ListGroups", "scan", err)
		}
		groups = append(groups, groupName)
	}
	if err := rows.Err(); err != nil {
		return nil, core.E("store.ListGroups", "rows", err)
	}
	return groups, nil
}
```

Then update `Medium.List`:

```diff
 	if group == "" {
-		rows, err := medium.keyValueStore.database.Query("SELECT DISTINCT group_name FROM entries ORDER BY group_name")
+		groups, err := medium.keyValueStore.ListGroups()
 		if err != nil {
-			return nil, core.E("store.List", "query groups", err)
+			return nil, err
 		}
-		defer rows.Close()
-		var entries []fs.DirEntry
-		for rows.Next() {
-			var groupName string
-			if err := rows.Scan(&groupName); err != nil {
-				return nil, core.E("store.List", "scan", err)
-			}
-			entries = append(entries, &keyValueDirEntry{name: groupName, isDir: true})
-		}
-		if err := rows.Err(); err != nil {
-			return nil, core.E("store.List", "rows", err)
+		for _, groupName := range groups {
+			entries = append(entries, &keyValueDirEntry{name: groupName, isDir: true})
 		}
 		return entries, nil
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@store/medium.go` around lines 146 - 164, The Medium.List implementation directly queries medium.keyValueStore.database which breaks encapsulation; add a new method KeyValueStore.ListGroups() that performs the "SELECT DISTINCT group_name FROM entries ORDER BY group_name" query, returns []string with proper error wrapping (use core.E with operation "store.ListGroups"), and handles rows.Close/rows.Err/scan; then update Medium.List (the List method on Medium) to call keyValueStore.ListGroups(), convert the returned []string into []fs.DirEntry by constructing keyValueDirEntry{name: groupName, isDir: true}, and return that instead of accessing keyValueStore.database directly.

datanode/medium.go (1)
144-146: `WriteMode` ignores the mode parameter.

Unlike `sqlite.Medium`, which persists the mode to the database, this implementation discards the mode argument. While Borg's DataNode may not support file modes natively, this creates inconsistent behaviour across Medium implementations. Consider documenting this limitation or storing mode in a separate in-memory map if mode preservation is important for the use case.
docs/RFC-CORE-008-AGENT-EXPERIENCE.md (1)
44-50: Consider adding a language identifier to the code block.

The static analysis tool flagged this code block as missing a language specification. While it's a simple list of naming examples, adding a language identifier improves rendering consistency.
📝 Suggested fix
````diff
-```
+```text
 Config not Cfg
 Service not Srv
 Embed not Emb
 Error not Err (as a subsystem name; err for local variables is fine)
 Options not Opts
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@docs/RFC-CORE-008-AGENT-EXPERIENCE.md` around lines 44 - 50, The code block
containing the naming examples (lines with "Config", "Service", "Embed",
"Error", "Options") lacks a language identifier; update that fenced code block
to include a language tag (e.g., "text") after the opening triple backticks so
the block is explicitly marked as plain text and renders consistently.

io.go (1)

187-196: `Copy` does not preserve source file mode.

The function reads content from the source and writes to the destination using default mode (`0644`). If preserving permissions is desirable, consider using `Stat` to retrieve the source mode and `WriteMode` on the destination.

♻️ Optional enhancement

```diff
 func Copy(sourceMedium Medium, sourcePath string, destinationMedium Medium, destinationPath string) error {
 	content, err := sourceMedium.Read(sourcePath)
 	if err != nil {
 		return core.E("io.Copy", core.Concat("read failed: ", sourcePath), err)
 	}
-	if err := destinationMedium.Write(destinationPath, content); err != nil {
+	info, err := sourceMedium.Stat(sourcePath)
+	if err != nil {
+		return core.E("io.Copy", core.Concat("stat failed: ", sourcePath), err)
+	}
+	if err := destinationMedium.WriteMode(destinationPath, content, info.Mode()); err != nil {
 		return core.E("io.Copy", core.Concat("write failed: ", destinationPath), err)
 	}
 	return nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@io.go` around lines 187 - 196, The Copy function currently reads content via sourceMedium.Read and writes via destinationMedium.Write but doesn't preserve file permissions; call sourceMedium.Stat(sourcePath) (or equivalent) to obtain the source file's mode, then use the destination API that accepts a mode (e.g. destinationMedium.WriteMode or a Write variant that takes a FileMode) to write the content with the retrieved mode; update Copy to return the original error wrapping (core.E) when Stat fails or when the mode-aware write fails and ensure you reference the existing functions Copy, sourceMedium.Read, sourceMedium.Stat, destinationMedium.Write (or destinationMedium.WriteMode) in your changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@datanode/medium.go`:
- Around line 35-48: Add a compile-time interface compliance check to
datanode/medium.go by importing the core io package (e.g., coreio
"dappco.re/go/core/io") if missing and adding the verification line using the
Medium type (e.g., var _ coreio.Medium = (*Medium)(nil)); place the import in
the file's import block and the var check near the type definition so the
compiler ensures Medium implements the io.Medium interface.
In `@docs/RFC.md`:
- Around line 425-438: The documentation contains duplicated method entries for
MemoryMedium's Read and Write (duplicate of the sections documenting Read(path
string) (string, error) and Write(path, content string) error); remove the
repeated block that documents MemoryMedium.Read and MemoryMedium.Write (the
second copy) so only the original definitions/examples for Read and Write
remain, leaving the singular correct examples and headings for MemoryMedium.Read
and MemoryMedium.Write.
In `@io.go`:
- Around line 274-286: The parameter name path in MemoryMedium.WriteMode is
shadowing the imported path package; rename the parameter (e.g., to filePath or
p) and update all uses inside WriteMode (calls to path.Dir, checks against
medium.fileContents, ancestor variable, etc.) so that path.Dir refers to the
package, not the string parameter; ensure any other references in WriteMode that
previously used the old name are updated consistently.
- Around line 539-545: MemoryWriteCloser.Close currently writes into
medium.fileContents without checking for an existing directory at that path;
update MemoryWriteCloser.Close to perform the same directory-vs-file collision
check used elsewhere (either by calling the medium's existing write helper like
WriteMode/WriteFile or by checking the medium's directory set/map before
writing). Concretely, inside MemoryWriteCloser.Close replace the direct writes
(medium.fileContents[...]=..., medium.fileModes[...]=...,
medium.modificationTimes[...]=...) with a check that the target path is not
marked as a directory on medium (or delegate to medium.WriteMode/WriteFile which
enforces collisions); if a directory exists at that path return the appropriate
error, otherwise proceed to write and set mode and mtime as before.
- Around line 274-292: In MemoryMedium.WriteMode: avoid shadowing the parameter
by not reusing the name `path` for ancestor calculations (use `filePath`/`p` or
`ancestor := pathpkg.Dir(filePath)`), and after the ancestor collision loop add
a check that the target path is not already a directory by verifying `if _, ok
:= medium.directories[filePath]; ok { return core.E("io.MemoryMedium.WriteMode",
core.Concat("target path is a directory: ", filePath), fs.ErrExist) }`; then
proceed to ensure ancestors and write into `fileContents`, `fileModes`, and
`modificationTimes`.
In `@sigil/crypto_sigil.go`:
- Around line 184-188: ChaChaPolySigil currently exposes sensitive fields (Key
and Obfuscator) which defeats the defensive copy in the constructor; change the
struct fields to unexported names (Key -> key, Obfuscator -> obfuscator) in
ChaChaPolySigil and update every internal reference in this file from sigil.Key
to sigil.key and sigil.Obfuscator to sigil.obfuscator (leave randomReader
as-is), preserving the constructor’s defensive copy logic and any existing
methods that access these fields by updating their references accordingly.
In `@store/medium.go`:
- Around line 78-81: The WriteMode implementation on Medium currently ignores
the mode parameter; update the key-value store schema and code to persist file
modes or explicitly document the limitation. Specifically, add an entry_mode
(integer) column to the entries table (default 0o644) and modify
Medium.WriteMode to store entry_mode alongside entry_value, update Medium.Write
(or factor common save logic) to accept and persist mode, and update Medium.Stat
to read entry_mode and return it in the FileInfo result; alternatively, if you
choose not to change the schema, add clear documentation on Medium.WriteMode and
Medium.Stat that modes are not supported by this implementation. Ensure you
update any migrations, the table creation logic, and tests referencing
Medium.WriteMode/Stat to reflect the new behavior.
---
Nitpick comments:
In `@docs/RFC-CORE-008-AGENT-EXPERIENCE.md`:
- Around line 44-50: The code block containing the naming examples (lines with
"Config", "Service", "Embed", "Error", "Options") lacks a language identifier;
update that fenced code block to include a language tag (e.g., "text") after the
opening triple backticks so the block is explicitly marked as plain text and
renders consistently.
In `@io.go`:
- Around line 187-196: The Copy function currently reads content via
sourceMedium.Read and writes via destinationMedium.Write but doesn't preserve
file permissions; call sourceMedium.Stat(sourcePath) (or equivalent) to obtain
the source file's mode, then use the destination API that accepts a mode (e.g.
destinationMedium.WriteMode or a Write variant that takes a FileMode) to write
the content with the retrieved mode; update Copy to return the original error
wrapping (core.E) when Stat fails or when the mode-aware write fails and ensure
you reference the existing functions Copy, sourceMedium.Read, sourceMedium.Stat,
destinationMedium.Write (or destinationMedium.WriteMode) in your changes.
In `@node/node.go`:
- Around line 403-417: Node.Rename currently only handles exact file keys and
returns "path not found" for directories; update it to detect when oldPath is a
directory prefix and rename all affected entries. In the Node.Rename function,
after trimming prefixes and checking node.files[oldPath], if not found scan
node.files for keys with the oldPath+"/" prefix (or oldPath=="" special-case),
compute new keys by replacing that prefix with newPath+"/", update each File's
name (file.name) and insert into node.files under the new key, and delete the
old keys; ensure exact-file renames still work and preserve behavior for
non-existent paths by returning fs.ErrNotExist when no matching keys are found.
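The exact-file-then-prefix logic above can be sketched against a plain `map[string]string`; `renameTree` and the string-valued map are illustrative stand-ins for `node.files` and its `*dataFile` values:

```go
package main

import (
	"fmt"
	"io/fs"
	"strings"
)

// renameTree handles the exact-file case first, then treats oldPath as a
// directory prefix: it collects every key under oldPath+"/" and rewrites it
// under newPath+"/". Returns fs.ErrNotExist when nothing matches.
func renameTree(files map[string]string, oldPath, newPath string) error {
	if content, ok := files[oldPath]; ok {
		delete(files, oldPath)
		files[newPath] = content
		return nil
	}
	prefix := oldPath + "/"
	var moved []string
	for key := range files {
		if strings.HasPrefix(key, prefix) {
			moved = append(moved, key)
		}
	}
	if len(moved) == 0 {
		return fs.ErrNotExist
	}
	// Apply in a second pass so the map is not mutated mid-iteration.
	for _, key := range moved {
		content := files[key]
		delete(files, key)
		files[newPath+"/"+strings.TrimPrefix(key, prefix)] = content
	}
	return nil
}

func main() {
	files := map[string]string{
		"docs/a.txt": "A",
		"docs/b.txt": "B",
		"top.txt":    "T",
	}
	if err := renameTree(files, "docs", "archive"); err != nil {
		panic(err)
	}
	if files["archive/a.txt"] != "A" || files["archive/b.txt"] != "B" {
		panic("directory rename did not move children")
	}
	if err := renameTree(files, "missing", "x"); err == nil {
		panic("expected fs.ErrNotExist for unknown path")
	}
	fmt.Println("rename ok, entries:", len(files))
}
```

Collect-then-apply avoids relying on Go's loosely specified behaviour for map entries added during a `range`.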
- Around line 25-27: The Node struct uses a plain map (files
map[string]*dataFile) which is not safe for concurrent access; either document
that Node is not goroutine-safe by adding a package-level or type doc comment
stating it must be used from a single goroutine, or make it safe by adding
synchronization (e.g., add a mutex field like sync.RWMutex to Node and use it to
guard all accesses to files in methods that read or write the map such as any
constructors, getters, setters, or mutators). Ensure references to Node and
files in all methods are updated to acquire the lock appropriately or that the
doc comment clearly states the single-threaded expectation.
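If the synchronization route is chosen, the guard looks like this; the `node` type below is a toy model of the real `Node`, with string values in place of `*dataFile`:

```go
package main

import (
	"fmt"
	"sync"
)

// node guards its files map with a sync.RWMutex so concurrent readers and
// writers do not race.
type node struct {
	mu    sync.RWMutex
	files map[string]string
}

func (n *node) write(path, content string) {
	n.mu.Lock()
	defer n.mu.Unlock()
	n.files[path] = content
}

func (n *node) read(path string) (string, bool) {
	n.mu.RLock()
	defer n.mu.RUnlock()
	content, ok := n.files[path]
	return content, ok
}

func main() {
	n := &node{files: map[string]string{}}
	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			n.write(fmt.Sprintf("file-%d", i), "x")
			n.read("file-0") // concurrent reader exercises the RLock path
		}(i)
	}
	wg.Wait() // Wait establishes happens-before, so direct access is safe here
	if len(n.files) != 50 {
		panic("lost writes under concurrency")
	}
	fmt.Println("entries:", len(n.files))
}
```

Running this under `go test -race` (or `go run -race`) is the cheap way to confirm the lock actually covers every map access.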
- Around line 177-196: The CopyFile method currently writes directly to
coreio.Local via coreio.Local.WriteMode, violating the Medium abstraction and
contains a dead branch after node.Stat(); update CopyFile to either be renamed
to ExportFile to document local-only behavior or change its signature to accept
a target coreio.Medium (like CopyTo does) and use that medium's Write/WriteMode
method instead of coreio.Local.WriteMode; while doing this, remove the
unreachable branch (the else case after node.Stat() where info.IsDir() is
checked and the subsequent duplicate "source not found" error) and ensure you
validate the source exists and is not a directory using node.files and node.Stat
in a single clear control flow (referencing CopyFile, node.files, node.Stat,
coreio.Local.WriteMode, and CopyTo).
- Around line 191-194: The code returns a raw &fs.PathError{...} (constructed in
the block using parent := core.PathDir(destinationPath) and the
!coreio.Local.IsDir(parent) check); change that to return the error wrapped with
core.E(...) to match the file's error-wrapping convention (e.g., return
core.E(&fs.PathError{Op: "copyfile", Path: destinationPath, Err:
fs.ErrNotExist})). Ensure you keep the same PathError construction but wrap it
with core.E so it is consistent with other methods in this file.
- Around line 545-547: The dirFile.Read implementation double-wraps the error
(core.E wrapping a *fs.PathError) which breaks callers relying on errors.Is/As;
change dirFile.Read to return the underlying *fs.PathError directly (i.e.,
construct and return &fs.PathError{Op: "read", Path: directory.path, Err:
fs.ErrInvalid}) instead of wrapping it with core.E, or if you prefer to keep
core.E, adjust the call so core.E is given the fs.PathError as the cause in a
manner that preserves Unwrap semantics; locate function dirFile.Read and update
its return to expose the fs.PathError unwrapped.
- Around line 152-159: The depth calculation using len(core.Split(relativePath,
"/")) can be inflated by empty strings from consecutive slashes; update the
block inside the walk (where walkResult, options.MaxDepth, entry, entryPath and
root are checked) to split relativePath with core.Split, filter out empty string
elements from the resulting slice, then compute depth as the length of the
filtered slice before comparing to options.MaxDepth (or replace with a
path-depth helper that normalises slashes); ensure you reference the same
variables (relativePath, core.Split, depth, options.MaxDepth) so the SkipDir
decision uses the filtered count.
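The filtered-count idea reduces to a small helper; `pathDepth` is a hypothetical name, and `strings.Split` stands in for the repo's `core.Split`:

```go
package main

import (
	"fmt"
	"strings"
)

// pathDepth counts path components, discarding the empty strings that
// strings.Split produces for consecutive, leading, or trailing slashes.
func pathDepth(relativePath string) int {
	depth := 0
	for _, part := range strings.Split(relativePath, "/") {
		if part != "" {
			depth++
		}
	}
	return depth
}

func main() {
	cases := map[string]int{
		"a/b/c": 3,
		"a//b":  2, // naive len(Split(...)) would report 3
		"/a/b/": 2, // and 4 here
		"":      0,
	}
	for path, want := range cases {
		if got := pathDepth(path); got != want {
			panic(fmt.Sprintf("pathDepth(%q) = %d, want %d", path, got, want))
		}
	}
	fmt.Println("all depth cases pass")
}
```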
In `@sigil/crypto_sigil.go`:
- Line 212: The method receiver for ChaChaPolySigil currently uses the
identifier "sigil", which shadows the package name; rename the receiver to a
non-conflicting short name (e.g., "s" or "cps") in both the In(data []byte)
([]byte, error) and Out(...) methods so package-level symbols remain
addressable; update all uses of the receiver within those methods to the new
name and ensure method signatures on the ChaChaPolySigil type remain unchanged.
In `@store/medium.go`:
- Around line 146-164: The Medium.List implementation directly queries
medium.keyValueStore.database which breaks encapsulation; add a new method
KeyValueStore.ListGroups() that performs the "SELECT DISTINCT group_name FROM
entries ORDER BY group_name" query, returns []string with proper error wrapping
(use core.E with operation "store.ListGroups"), and handles
rows.Close/rows.Err/scan; then update Medium.List (the List method on Medium) to
call keyValueStore.ListGroups(), convert the returned []string into
[]fs.DirEntry by constructing keyValueDirEntry{name: groupName, isDir: true},
and return that instead of accessing keyValueStore.database directly.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ae92950f-f592-4242-a21c-64040a44a5a7
📒 Files selected for processing (18)
- datanode/medium.go
- datanode/medium_test.go
- docs/RFC-CORE-008-AGENT-EXPERIENCE.md
- docs/RFC.md
- docs/index.md
- io.go
- local/medium.go
- local/medium_test.go
- medium_test.go
- node/node.go
- s3/s3.go
- s3/s3_test.go
- sigil/crypto_sigil.go
- sigil/sigils.go
- sqlite/sqlite.go
- store/medium.go
- store/store.go
- workspace/service.go
✅ Files skipped from review due to trivial changes (3)
- local/medium_test.go
- datanode/medium_test.go
- local/medium.go
🚧 Files skipped from review as they are similar to previous changes (4)
- s3/s3_test.go
- workspace/service.go
- sigil/sigils.go
- s3/s3.go
- datanode/medium.go: add compile-time Medium interface check
- docs/RFC.md: remove duplicated MemoryMedium Read/Write method entries
- docs/RFC-CORE-008-AGENT-EXPERIENCE.md: add text language tag to fenced code block
- io.go: rename WriteMode param path→filePath to avoid shadowing path package
- io.go: add directory collision check in WriteMode and MemoryWriteCloser.Close
- io.go: Copy now preserves source file permissions via Stat+WriteMode
- node/node.go: add goroutine-safety doc comment on Node
- node/node.go: Rename handles directory prefix batch-rename
- node/node.go: rename CopyFile→ExportFile, document local-only behaviour, wrap PathError with core.E()
- node/node.go: filter empty path components in Walk depth calculation
- node/node_test.go: update tests to use ExportFile
- sigil/crypto_sigil.go: make Key/Obfuscator unexported, add Key()/Obfuscator()/SetObfuscator() accessors
- sigil/crypto_sigil.go: rename receiver sigil→s to avoid shadowing package name
- sigil/crypto_sigil_test.go: update to use accessor methods
- store/medium.go: use KeyValueStore.ListGroups() instead of direct DB query
- store/medium.go: add doc comment that WriteMode does not persist file mode
- store/store.go: add ListGroups() method to KeyValueStore

Co-Authored-By: Virgil <virgil@lethean.io>
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
node/node.go (1)
247-329: ⚠️ Potential issue | 🟠 Major — The root directory should exist even when the node is empty.

`Open(".")` and `Stat(".")` only succeed once at least one child has been added, and `ReadDir("missing")` falls through to an empty listing instead of returning `ErrNotExist`. That breaks `fs.FS` semantics and makes a fresh `Node` look inconsistent.

Suggested direction:

```diff
 func (node *Node) Open(name string) (fs.File, error) {
 	name = core.TrimPrefix(name, "/")
+	if name == "" || name == "." {
+		return &dirFile{path: ".", modTime: time.Now()}, nil
+	}
 	if dataFile, ok := node.files[name]; ok {
 		return &dataFileReader{file: dataFile}, nil
 	}
@@
 func (node *Node) Stat(name string) (fs.FileInfo, error) {
 	name = core.TrimPrefix(name, "/")
+	if name == "" || name == "." {
+		return &dirInfo{name: ".", modTime: time.Now()}, nil
+	}
 	if dataFile, ok := node.files[name]; ok {
 		return dataFile.Stat()
 	}
@@
 func (node *Node) ReadDir(name string) ([]fs.DirEntry, error) {
 	name = core.TrimPrefix(name, "/")
 	if name == "." {
 		name = ""
 	}
-	if info, err := node.Stat(name); err == nil && !info.IsDir() {
+	info, err := node.Stat(name)
+	if err != nil {
+		return nil, core.E("node.ReadDir", core.Concat("path not found: ", name), fs.ErrNotExist)
+	}
+	if !info.IsDir() {
 		return nil, &fs.PathError{Op: "readdir", Path: name, Err: fs.ErrInvalid}
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/node.go` around lines 247 - 329, Open and Stat currently only treat a path as an existing directory if node.files contains a child under that prefix, which makes the root (""/"." ) disappear when node.files is empty and ReadDir returns an empty list for a missing directory instead of fs.ErrNotExist; update Open, Stat and ReadDir to: always consider the root (name == "" or name == ".") as an existing directory (return dirFile from Open and dirInfo from Stat), and in ReadDir return fs.ErrNotExist (via &fs.PathError or core.E like other methods) when the requested directory has no matching entries in node.files rather than silently returning an empty slice; adjust the existing prefix checks in Open, Stat and ReadDir (use the same prefix calculation logic) and ensure ReadDir continues to return fs.ErrInvalid when the path exists but is a file.
♻️ Duplicate comments (1)
docs/RFC.md (1)
754-767: ⚠️ Potential issue | 🟡 Minor — Remove the duplicated `Read`/`Write` blocks from the generated reference.

These backend sections each document the same `Read`/`Write` methods twice. That makes the page look like two different APIs and invites drift the next time the reference is regenerated.

Also applies to: 929-944, 1241-1254, 1429-1442, 1624-1638, 1840-1853
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/RFC.md` around lines 754 - 767, The generated reference contains duplicated documentation blocks for the Read(path string) (string, error) and Write(path, content string) error methods (examples showing local.New and using "notes.txt") — remove the duplicate occurrences so each backend documents Read and Write only once; locate the repeated blocks that show the same examples for Read/Write and delete the redundant copies, and if the duplication comes from the doc generator, fix the generator logic to emit a single Read/Write section per backend (ensure symbols Read, Write and constructor local.New remain present exactly once with their example).
🧹 Nitpick comments (1)
node/node_test.go (1)
329-338: Keep the failing export path inside the test sandbox.

`/nonexistent_dir/test.txt` makes this assertion depend on the host layout. A missing child under `t.TempDir()` exercises the same branch without leaving the test sandbox.

Possible tidy-up:

```diff
-	err = nodeTree.ExportFile("foo.txt", "/nonexistent_dir/test.txt", 0644)
+	missingDestination := core.Path(t.TempDir(), "missing", "test.txt")
+	err = nodeTree.ExportFile("foo.txt", missingDestination, 0644)
 	assert.Error(t, err)
```

Based on learnings: "Use `io.MockMedium` or `io.NewSandboxed(t.TempDir())` in tests — never hit real S3/SQLite unless integration testing."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@node/node_test.go` around lines 329 - 338, TestNode_ExportFile_Bad uses an absolute path "/nonexistent_dir/test.txt" which depends on host layout; change the second failing export to use a path inside the test sandbox (t.TempDir()) so the test remains hermetic. Locate TestNode_ExportFile_Bad and replace the hardcoded "/nonexistent_dir/test.txt" with a path built from t.TempDir() (e.g., core.Path(t.TempDir(), "nonexistent_dir", "test.txt") or use io.NewSandboxed(t.TempDir() / io.MockMedium) ) when calling nodeTree.ExportFile("foo.txt", ... , 0644) so the failure branch is exercised inside the test sandbox.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@datanode/medium.go`:
- Around line 56-64: FromTar is restoring medium.dataNode but leaves
Medium.directorySet empty, causing synthetic directories created via EnsureDir
to be lost; update Snapshot (the serialization path) to persist directorySet (so
empty dirs are round-trippable) and update both restore paths (FromTar and the
other restore function handling 68-88) to rebuild directorySet: on restore, if
serialized directorySet exists, load it back into Medium.directorySet, otherwise
reconstruct directorySet by iterating restored file entries from dataNode (using
Medium.dataNode's file/path iterator) and adding parent directories for each
file; ensure the code references Medium.directorySet, FromTar, Snapshot,
EnsureDir and borgdatanode.FromTar so both explicit empty directories and
inferred parents are preserved after restore.
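The "reconstruct directorySet by iterating restored file entries" branch can be sketched with a `path.Dir` walk; `rebuildDirectorySet` is an illustrative helper, and as the review notes, explicit empty directories still need to be serialised separately since they cannot be inferred from file paths:

```go
package main

import (
	"fmt"
	"path"
	"sort"
)

// rebuildDirectorySet infers every parent directory from the restored file
// paths, so directories implied by files survive a snapshot round-trip.
func rebuildDirectorySet(filePaths []string) map[string]bool {
	directorySet := make(map[string]bool)
	for _, filePath := range filePaths {
		// Walk upward until path.Dir bottoms out at "." or "/".
		for dir := path.Dir(filePath); dir != "." && dir != "/"; dir = path.Dir(dir) {
			directorySet[dir] = true
		}
	}
	return directorySet
}

func main() {
	restored := []string{"a/b/c.txt", "a/d.txt", "top.txt"}
	set := rebuildDirectorySet(restored)
	var dirs []string
	for dir := range set {
		dirs = append(dirs, dir)
	}
	sort.Strings(dirs)
	fmt.Println("inferred:", dirs)
	if !set["a"] || !set["a/b"] {
		panic("missing inferred parent directory")
	}
}
```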
In `@docs/RFC-CORE-008-AGENT-EXPERIENCE.md`:
- Around line 79-85: Add the language tag "text" to the unlabeled fenced code
block that contains the path/tree snippet starting with
"flow/deploy/to/homelab.yaml — deploy TO the homelab" (and the other similar
blocks), e.g., change ``` to ```text for that block and the other blocks
mentioned (the ones containing the path lists around lines 181-186, 211-218, and
230-244) so markdownlint MD040 is satisfied and rendering stays consistent.
In `@node/node.go`:
- Around line 210-243: The CopyTo method treats "." as a non-empty prefix and
skips files; fix it by normalizing sourcePath to the root when it equals "."
before building the prefix—i.e., in Node.CopyTo (after sourcePath =
core.TrimPrefix(sourcePath, "/") and before calling node.Stat or before prefix
construction) convert a sourcePath of "." to "" so the subsequent prefix logic
and the node.files iteration correctly include root files and subpaths.
- Around line 342-350: Reject empty/"." and detect file/directory collisions
before committing writes: in Node.Write (and likewise Node.WriteMode and the
nodeWriter.Close/Node.AddData commit path) validate and normalize the incoming
filePath (use path.Clean and reject "" or "."), then scan the in-memory map of
entries (the same map AddData writes to) and if any existingKey == filePath or
strings.HasPrefix(existingKey, filePath+"/") or strings.HasPrefix(filePath,
existingKey+"/") return an error instead of committing; ensure nodeWriter.Close
performs the same pre-commit checks so a write cannot shadow an implicit
directory or vice-versa.
In `@store/medium.go`:
- Around line 128-141: The Rename method currently performs a Set then Delete
even when oldPath and newPath resolve to the same group/key; update
Medium.Rename to detect no-op renames by comparing oldGroup==newGroup &&
oldKey==newKey after calling splitGroupKeyPath and return nil immediately if
identical, so you don't call keyValueStore.Set/Delete unnecessarily; keep
existing validation (oldKey/newKey non-empty) and otherwise preserve the
Get/Set/Delete flow using keyValueStore.Get, Set and Delete.
- Around line 224-230: In Append, stop ignoring errors from
medium.keyValueStore.Get: capture the returned error (e.g., existingValue, err
:= medium.keyValueStore.Get(group, key)) and if err != nil and it's not a "not
found" condition (use the project's helper like core.IsNotFoundError(err) or
compare to the NotFound error type) return nil, err so we don't overwrite on
unexpected read failures; only treat the not-found case as starting from empty
and then construct the keyValueWriteCloser as before. Also add the required
import for errors/helpers if needed.
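The "only not-found means start from empty" rule looks like this with a sentinel error; `errNotFound`, the `kv` map, and `appendValue` are toy stand-ins for the store's `NotFoundError`, `keyValueStore`, and `Append`:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the store's NotFoundError sentinel.
var errNotFound = errors.New("store: not found")

type kv map[string]string

func (s kv) get(key string) (string, error) {
	if value, ok := s[key]; ok {
		return value, nil
	}
	return "", fmt.Errorf("get %q: %w", key, errNotFound)
}

// appendValue only treats not-found as "start from empty"; any other read
// failure is surfaced instead of silently overwriting the existing value.
func appendValue(s kv, key, suffix string) error {
	existing, err := s.get(key)
	if err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	s[key] = existing + suffix
	return nil
}

func main() {
	s := kv{"log": "a"}
	if err := appendValue(s, "log", "b"); err != nil {
		panic(err)
	}
	if s["log"] != "ab" {
		panic("append to existing key failed")
	}
	if err := appendValue(s, "fresh", "x"); err != nil {
		panic(err)
	}
	if s["fresh"] != "x" {
		panic("append to missing key should start empty")
	}
	fmt.Println("append semantics ok")
}
```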
In `@store/store.go`:
- Around line 44-53: The constructor New for KeyValueStore creates the new
entries table but does not migrate rows from the legacy kv(grp,key,value) table,
causing upgrades to appear empty; modify New (after the CREATE TABLE IF NOT
EXISTS for entries) to start a transaction (use database.Begin), detect whether
a legacy table named "kv" exists via sqlite_master, and if present INSERT INTO
entries (group_name, entry_key, entry_value) SELECT grp, key, value FROM kv with
an ON CONFLICT DO NOTHING clause, then commit the transaction and return errors
using core.E on any failure (use tx.Rollback() in defer), ensuring the migration
runs inside the same transaction before returning &KeyValueStore{database:
database}.
- Around line 13-33: The refactor changed public API names causing breaking
changes; add compatibility shims so existing callers still compile: export an
alias ErrNotFound that equals NotFoundError, a type alias Store = KeyValueStore,
and a wrapper New(path string) function that calls New(Options{Path: path}) and
returns the same (*KeyValueStore, error) to preserve Old New(string) semantics
(mark as deprecated in a comment). Place these aliases/wrapper in the same
package alongside NotFoundError, KeyValueStore, and New(Options) so callers
using ErrNotFound, Store, or New(string) continue to work for one release.
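One wrinkle: Go cannot overload `New`, so keeping the old `New(string)` signature means the options-based constructor needs a different name. A sketch of the shim shape (`NewWithOptions` and the `path` field are illustrative; the real package's names may differ):

```go
package main

import "fmt"

// Options and KeyValueStore model the new API surface.
type Options struct{ Path string }

type KeyValueStore struct{ path string }

func NewWithOptions(options Options) (*KeyValueStore, error) {
	return &KeyValueStore{path: options.Path}, nil
}

// Store is a compatibility alias so old callers keep compiling.
//
// Deprecated: use KeyValueStore.
type Store = KeyValueStore

// New preserves the old New(string) constructor for one release.
//
// Deprecated: use NewWithOptions(Options{Path: path}).
func New(path string) (*Store, error) {
	return NewWithOptions(Options{Path: path})
}

func main() {
	legacy, err := New("state.db")
	if err != nil {
		panic(err)
	}
	// A type alias makes *Store and *KeyValueStore the identical type,
	// so no conversion is needed.
	var modern *KeyValueStore = legacy
	fmt.Println("alias works:", modern.path)
}
```

The `Deprecated:` comment form is the convention `go vet`-adjacent tooling and gopls surface to callers.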
---
Outside diff comments:
In `@node/node.go`:
- Around line 247-329: Open and Stat currently only treat a path as an existing
directory if node.files contains a child under that prefix, which makes the root
(""/"." ) disappear when node.files is empty and ReadDir returns an empty list
for a missing directory instead of fs.ErrNotExist; update Open, Stat and ReadDir
to: always consider the root (name == "" or name == ".") as an existing
directory (return dirFile from Open and dirInfo from Stat), and in ReadDir
return fs.ErrNotExist (via &fs.PathError or core.E like other methods) when the
requested directory has no matching entries in node.files rather than silently
returning an empty slice; adjust the existing prefix checks in Open, Stat and
ReadDir (use the same prefix calculation logic) and ensure ReadDir continues to
return fs.ErrInvalid when the path exists but is a file.
---
Duplicate comments:
In `@docs/RFC.md`:
- Around line 754-767: The generated reference contains duplicated documentation
blocks for the Read(path string) (string, error) and Write(path, content string)
error methods (examples showing local.New and using "notes.txt") — remove the
duplicate occurrences so each backend documents Read and Write only once; locate
the repeated blocks that show the same examples for Read/Write and delete the
redundant copies, and if the duplication comes from the doc generator, fix the
generator logic to emit a single Read/Write section per backend (ensure symbols
Read, Write and constructor local.New remain present exactly once with their
example).
---
Nitpick comments:
In `@node/node_test.go`:
- Around line 329-338: TestNode_ExportFile_Bad uses an absolute path
"/nonexistent_dir/test.txt" which depends on host layout; change the second
failing export to use a path inside the test sandbox (t.TempDir()) so the test
remains hermetic. Locate TestNode_ExportFile_Bad and replace the hardcoded
"/nonexistent_dir/test.txt" with a path built from t.TempDir() (e.g.,
core.Path(t.TempDir(), "nonexistent_dir", "test.txt") or use
io.NewSandboxed(t.TempDir() / io.MockMedium) ) when calling
nodeTree.ExportFile("foo.txt", ... , 0644) so the failure branch is exercised
inside the test sandbox.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 39dd6e27-eeae-4e7c-9129-62876c5b0ea4
📒 Files selected for processing (13)
- datanode/medium.go
- docs/RFC-CORE-008-AGENT-EXPERIENCE.md
- docs/RFC.md
- docs/api-contract.md
- docs/architecture.md
- docs/security-attack-vector-mapping.md
- io.go
- node/node.go
- node/node_test.go
- sigil/crypto_sigil.go
- sigil/crypto_sigil_test.go
- store/medium.go
- store/store.go
✅ Files skipped from review due to trivial changes (1)
- sigil/crypto_sigil_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
- io.go
- docs/architecture.md
- datanode: FromTar and Restore now rebuild directorySet from restored file
entries so EnsureDir directories survive snapshot round-trips
- node: CopyTo normalises "." sourcePath to "" so root files are included
- node: Open and Stat always return a directory entry for the root ("")
even when the node is empty; ReadDir returns fs.ErrNotExist for missing
non-root directories instead of an empty slice
- node: Write and nodeWriter.Close reject empty/"." paths with ErrInvalid
- store: Rename returns nil immediately when oldPath == newPath (no-op)
- store: Append captures Get errors and only treats NotFoundError as an
empty starting value; unexpected read failures are now surfaced
- docs: add "text" language tag to unlabelled fenced code blocks in
RFC-CORE-008-AGENT-EXPERIENCE.md (sections 3, 7, 9, 10)
- docs: remove duplicate Read/Write documentation blocks from RFC.md
(local package section)
- test: replace hardcoded /nonexistent_dir path in TestNode_ExportFile_Bad
with a hermetic path under t.TempDir()
Skipped findings 7 & 8 (kv migration and compatibility shims) — both were
intentionally removed in f0b828a (drop legacy compatibility shims) and
re-adding them would conflict with the AX refactor.
Co-Authored-By: Virgil <virgil@lethean.io>
Actionable comments posted: 1
♻️ Duplicate comments (2)
docs/RFC.md (1)
914-929: ⚠️ Potential issue | 🟡 Minor — Remove duplicated `Read`/`Write` method entries in backend sections.

The API reference repeats identical `Read`/`Write` signatures in several sections (for example at Line 914-929, Line 1226-1239, Line 1414-1427, Line 1609-1623, and Line 1825-1838). This makes the generated docs look inconsistent and harder to trust.

Also applies to: 1226-1239, 1414-1427, 1609-1623, 1825-1838
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/RFC.md` around lines 914 - 929, Remove the duplicated Read and Write method entries in each backend section so the API reference shows a single canonical signature; search for the duplicated method docs for Read(p string) (string, error) and Write(p, content string) error (examples using n := node.New(), n.Read(...) and n.Write(...)) and keep only the primary occurrence while deleting the repeated blocks at the other sections (the duplicates around the lines referencing the backend sections). Ensure the remaining entry includes the correct signature, brief alias note if needed, and one example; update surrounding headings/anchors if necessary so links still resolve to the single authoritative method entry.docs/RFC-CORE-008-AGENT-EXPERIENCE.md (1)
196-202: ⚠️ Potential issue | 🟡 Minor — Add language identifiers to the remaining fenced blocks.

markdownlint MD040 is still triggered at Line 196, Line 269, Line 329, Line 372, and Line 396. Please label these fences (for example, `text`) to keep lint and rendering consistent.

🛠 Suggested patch:
```diff
-```
+```text
 code/core/go/*   → lib tier (stable foundation)
 code/core/agent/ → consumer tier (composes from go/*)
 code/core/cli/   → consumer tier (composes from go/*)
 code/core/gui/   → consumer tier (composes from go/*)
```

```diff
-```
+```text
 # AX-native: path describes content
 core/agent/
 ├── go/      # Go source
 ├── php/     # PHP source
 ├── ui/      # Frontend source
 ├── claude/  # Claude Code plugin
 └── codex/   # Codex plugin

 # Not AX: generic names requiring README
 src/
 ├── lib/
 ├── utils/
 └── helpers/
```

```diff
-```
+```text
 plans/
 ├── project/  # 1. WHAT and WHY — start here
 ├── rfc/      # 2. CONSTRAINTS — immutable contracts
 └── code/     # 3. HOW — implementation specs
```

```diff
-```
+```text
 1. REVIEW — The implementation model (Codex/Jules) reads the spec
    and reports missing elements. This surfaces the delta between
    the model's training and the spec's assumptions.
@@
 4. READY — When the review findings are trivial or deployment-
    related (not architectural), the spec is ready to dispatch.
```

```diff
-```
+```text
 Round 1: Build features (the obvious gaps)
 Round 2: Write tests (verify what was built)
 Round 3: Harden security (what can go wrong?)
 Round 4: Next RFC section (what's still missing?)
 Round N: Findings are trivial → implementation is complete
```

Also applies to: 269-284, 329-334, 372-390, 396-402

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed.
In `@docs/RFC-CORE-008-AGENT-EXPERIENCE.md` around lines 196 - 202, Several fenced
code blocks in the RFC are unlabeled causing markdownlint MD040; update each
triple-backtick fence that wraps the shown snippets (the "code/core/go/* → lib
tier..." block, the "# AX-native: path describes content" block, the "plans/"
block, the numbered "1. REVIEW … 4. READY" block, and the "Round 1/2/3..."
block) to specify a language identifier (use "text"). Locate the exact fences
surrounding those snippets in RFC-CORE-008-AGENT-EXPERIENCE.md and change ``` to ```text for each occurrence so linting and rendering are consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@store/medium.go`:
- Around line 163-165: The List method in medium.go currently returns (nil, nil)
when key != "" which hides misuse; change it to return a typed error instead
(e.g., define and use ErrNotDirectory or ErrInvalidListPath in the same package)
so callers can distinguish "not a directory" from an empty result; update the
List function to return nil, ErrNotDirectory when key != "" and ensure the new
error symbol is exported or documented for callers to check.
---
Duplicate comments:
In `@docs/RFC-CORE-008-AGENT-EXPERIENCE.md`:
- Around line 196-202: Several fenced code blocks in the RFC are unlabeled
causing markdownlint MD040; update each triple-backtick fence that wraps the
shown snippets (the "code/core/go/* → lib tier..." block, the "# AX-native: path
describes content" block, the "plans/" block, the numbered "1. REVIEW … 4.
READY" block, and the "Round 1/2/3..." block) to specify a language identifier
(use "text"). Locate the exact fences surrounding those snippets in
RFC-CORE-008-AGENT-EXPERIENCE.md and change ``` to ```text for each occurrence
so linting and rendering are consistent.
In `@docs/RFC.md`:
- Around line 914-929: Remove the duplicated Read and Write method entries in
each backend section so the API reference shows a single canonical signature;
search for the duplicated method docs for Read(p string) (string, error) and
Write(p, content string) error (examples using n := node.New(), n.Read(...) and
n.Write(...)) and keep only the primary occurrence while deleting the repeated
blocks at the other sections (the duplicates around the lines referencing the
backend sections). Ensure the remaining entry includes the correct signature,
brief alias note if needed, and one example; update surrounding headings/anchors
if necessary so links still resolve to the single authoritative method entry.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3c689cf3-2bb1-46e4-9054-2672fbcf635e
📒 Files selected for processing (6)
- datanode/medium.go
- docs/RFC-CORE-008-AGENT-EXPERIENCE.md
- docs/RFC.md
- node/node.go
- node/node_test.go
- store/medium.go
🚧 Files skipped from review as they are similar to previous changes (3)
- node/node_test.go
- datanode/medium.go
- node/node.go
…hod entries
- store/medium.go: define and export ErrNotDirectory using core.E(); return it from List when key != "" instead of silent (nil, nil)
- docs/RFC-CORE-008-AGENT-EXPERIENCE.md: add `text` language tag to all five unlabeled fenced code blocks (MD040)
- docs/RFC.md: remove duplicate Read/Write method entries that appeared after IsFile in the node, store, sqlite, s3, and datanode backend sections

Co-Authored-By: Virgil <virgil@lethean.io>
🧹 Nitpick comments (3)
docs/RFC.md (1)
6-2391: Consider splitting this full API reference into a separate docs PR.

This is a substantial documentation expansion that goes beyond a focused module-path migration. Keeping migration-only changes isolated will make verification for consumers and regressions much easier.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/RFC.md` around lines 6 - 2391, Summary: This PR includes a massive "API Reference" RFC that should be split into a separate documentation PR. Fix: remove or revert the large API Reference content (the "API Reference" header and the long package/type/function listings such as the Package io sections and symbols like NewMemoryMedium, NewSandboxed, Node.AddData, store.New, sqlite.New, s3.New, sigil.NewChaChaPolySigil, etc.) from this migration PR and instead create a new docs-only PR containing that RFC; keep this PR limited to the migration changes only and, if needed, add a short pointer comment or TODO in this PR referencing the new docs PR/commit.docs/RFC-CORE-008-AGENT-EXPERIENCE.md (2)
142-175: Examples don't demonstrate the stated `core.Result[T]` pattern.

Lines 167-175 claim that `core.Result[T]` replaces the `val, err :=` pattern and that errors are handled internally by Core, but the code examples above (lines 144, 145, 146, 149) show function calls without capturing or demonstrating return values. If `core.Result[T]` is the universal return pattern, the examples should show how it's used.

📚 Proposed enhancement to demonstrate Result[T] usage:

```diff
 // Universal contract
-setup.Run(core.Options{Path: ".", Template: "auto"})
-brain.New(core.Options{Name: "openbrain"})
-deploy.Run(core.Options{Flow: "deploy/to/homelab"})
+result := setup.Run(core.Options{Path: ".", Template: "auto"})
+// No error checking needed - Core handles errors internally
+
+service := brain.New(core.Options{Name: "openbrain"})
+if service.OK() {
+	// Use service
+}
```

Or clarify in the table that constructors like `New()` return the service directly while operations like `Run()` return `Result[T]`.

🤖 Prompt for AI Agents
New()return the service directly while operations likeRun()returnResult[T].🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/RFC-CORE-008-AGENT-EXPERIENCE.md` around lines 142 - 175, Update the examples to actually demonstrate the core.Result[T] return pattern (or explicitly state which functions return raw services vs Results); for the calls shown (setup.Run, brain.New, deploy.Run, core.New and any factory/Run functions in those snippets) either show capturing the Result (e.g. r := setup.Run(...), handle r.Ok/r.Err) or add a clarifying note that constructors like brain.New/core.New return Service directly while operations like Run/Deploy return core.Result[T]; ensure the doc references core.Result[T] in the example so readers see the intended single-return error flow.
222: Consider hyphenating compound adjective for consistency.

Line 222 uses "mid model" whilst line 224 uses "one-shot" (hyphenated). For consistency with compound adjectives used as modifiers, consider "mid-tier model" or "mid-level model".

📝 Proposed style adjustment:

```diff
-**Corollary:** The cheapest model should do the most passes (surface work). The frontier model should arrive last, when only deep issues remain. Tiered iteration: grunt model grinds → mid model pre-warms → frontier model polishes.
+**Corollary:** The cheapest model should do the most passes (surface work). The frontier model should arrive last, when only deep issues remain. Tiered iteration: grunt model grinds → mid-tier model pre-warms → frontier model polishes.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/RFC-CORE-008-AGENT-EXPERIENCE.md` at line 222, The phrase "**Corollary:** The cheapest model should do the most passes (surface work). The frontier model should arrive last, when only deep issues remain. Tiered iteration: grunt model grinds → mid model pre-warms → frontier model polishes." uses "mid model" as a compound adjective inconsistently with "one-shot" elsewhere; change "mid model" to a hyphenated form such as "mid-tier model" or "mid-level model" so it reads "grunt model grinds → mid-tier model pre-warms → frontier model polishes" (apply the hyphenated compound adjective style consistently in this sentence).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@docs/RFC.md`:
- Around line 6-2391: Summary: This PR includes a massive "API Reference" RFC
that should be split into a separate documentation PR. Fix: remove or revert the
large API Reference content (the "API Reference" header and the long
package/type/function listings such as the Package io sections and symbols like
NewMemoryMedium, NewSandboxed, Node.AddData, store.New, sqlite.New, s3.New,
sigil.NewChaChaPolySigil, etc.) from this migration PR and instead create a new
docs-only PR containing that RFC; keep this PR limited to the migration changes
only and, if needed, add a short pointer comment or TODO in this PR referencing
the new docs PR/commit.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4fb88216-4fae-4b24-9753-4a7f20084309
📒 Files selected for processing (3)
- docs/RFC-CORE-008-AGENT-EXPERIENCE.md
- docs/RFC.md
- store/medium.go
🚧 Files skipped from review as they are similar to previous changes (1)
- store/medium.go
Module path migration from forge.lthn.ai to dappco.re
Summary by CodeRabbit
Breaking Changes
New Features
Documentation
Tests