feat: implement SQLite Adapter #5
Conversation
📝 Walkthrough

This PR introduces a complete SQLite adapter implementation with comprehensive test coverage, and updates the core Adapter interface to support JSON-path filtering through a new optional …
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 8
🧹 Nitpick comments (1)
src/adapters/sqlite.test.ts (1)
408-432: Consider adding coverage for multi-field cursors and JSON-path sort/select.

Current pagination tests only exercise a single sort field (`age`). Given `FieldName<T>` advertises `->>` paths and the cursor can carry multiple keys, tests that fail today would help surface the gaps in the adapter (multi-field keyset ordering and JSON-path handling in `sortBy`/`select`/`cursor`). Tying these to the bugs flagged in `sqlite.ts` would lock in the fixes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adapters/sqlite.test.ts` around lines 408 - 432, Add tests that exercise multi-field keyset pagination and JSON-path sort/select to cover gaps in the adapter: extend the existing adapter.findMany<User> tests (the "should handle cursor pagination ascending/descending" cases) to include sortBy with multiple fields (e.g., [{ field: "age", direction: "asc" }, { field: "id", direction: "asc" }]) and cursor.after carrying multiple keys (e.g., { age: X, id: Y }), and add cases where sort/select use JSON-path style FieldName (->> paths) to ensure sqlite.ts code handling sortBy, select and cursor extraction supports JSON-paths and multi-key comparisons; update or add expectations for ordering and lengths to validate correct multi-field keyset behavior.
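The lexicographic keyset predicate these tests would exercise can be sketched as a small pure builder. Everything below (the `SortSpec` shape, `buildKeysetPredicate`, the default `quote`) is a hypothetical illustration, not the adapter's actual API:

```ts
// Hypothetical sketch of a multi-field keyset predicate builder.
// For fields f1..fn it emits: (f1 > ?) OR (f1 = ? AND f2 > ?) OR ...,
// honouring each field's own sort direction rather than sortBy[0]'s.
type SortSpec = { field: string; direction: "asc" | "desc" };

function buildKeysetPredicate(
  sortBy: SortSpec[],
  cursor: Record<string, unknown>,
  quote: (s: string) => string = (s) => `"${s}"`,
): { sql: string; params: unknown[] } {
  const clauses: string[] = [];
  const params: unknown[] = [];
  for (let i = 0; i < sortBy.length; i++) {
    const parts: string[] = [];
    // All fields before position i must equal the cursor values.
    for (let j = 0; j < i; j++) {
      parts.push(`${quote(sortBy[j]!.field)} = ?`);
      params.push(cursor[sortBy[j]!.field]);
    }
    // Field i uses its own direction when choosing '>' vs '<'.
    const op = sortBy[i]!.direction === "asc" ? ">" : "<";
    parts.push(`${quote(sortBy[i]!.field)} ${op} ?`);
    params.push(cursor[sortBy[i]!.field]);
    clauses.push(`(${parts.join(" AND ")})`);
  }
  return { sql: clauses.join(" OR "), params };
}
```

A row-value comparison (`(f1, f2) > (?, ?)`) is an equivalent alternative when all directions match, but the OR-chain works for mixed asc/desc orderings.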
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/adapters/sqlite.test.ts`:
- Around line 358-360: Update the misleading inline comment near the assertions
that check outer1?.name and outer2 to explain that the inner transaction's
UPDATE was discarded by the rollback to the savepoint (not "reverted"
afterwards); locate the assertion using outer1 and outer2 in the test and
replace "Reverted the update from inner tx" with wording like "Inner
transaction's UPDATE was discarded by rollback to savepoint (never applied to
outer scope)" so it clearly states the inner tx effects were not committed.
In `@src/adapters/sqlite.ts`:
- Around line 238-240: The update currently issues UPDATE ... WHERE ... then
calls this.find({ model, where }) which can return null if the update changed a
column used in the original where predicate; modify the update logic in the
update method to refetch by the record's primary key instead: either derive the
PK from data if it includes the primary key, or perform a pre-update read (using
the same model/where) to capture the primary key before running await
this.db.run(`UPDATE ${this.quote(model)} SET ${setClause} WHERE ${whereSql}`,
params), then call this.find({ model, where: { [primaryKeyName]: primaryKeyValue
} }) to reliably return the updated record (refer to symbols update, this.find,
quote(model), setClause, whereSql, params).
- Around line 142-152: The create method currently returns the caller-supplied
data when select is omitted, which bypasses DB-side defaults/coercions; update
create to always refetch the persisted row from the DB (use this.find) after
insertion using the new record's primary key (use model PK info to build the
where clause) and return that fetched result instead of the original input;
ensure the code path that currently does "if (result === null) throw ..."
remains but trigger the find by PK regardless of whether select was provided so
returned object reflects actual persisted state.
- Around line 445-456: The current multi-field cursor predicate builds a simple
AND chain using cursorDirection derived from sortBy[0], which is incorrect for
lexicographic keyset pagination; update the logic in the cursor handling block
(where cursor, cursorDirection, this.quote, mapWhereValue, parts, and params are
used) to generate a lexicographic OR-chain: for n cursor fields produce clauses
like (f1 > ? ) OR (f1 = ? AND f2 > ? ) OR ... (or use a row-value comparison if
preferred), pushing appropriate mapped params for each clause, and ensure you
respect each field's individual sort direction from sortBy (not just sortBy[0])
when choosing '>' vs '<' for that field.
- Around line 499-509: The JSON path construction in the block handling
field.includes("->>") builds jsonPath via `'$.${parts.slice(1).join(".")}'`,
which can be injected if a segment contains a quote or invalid characters; in
the function where quotedField is built (look for variables: field, parts,
topLevelColumn, jsonPath, this.quote(topLevelColumn), and json_extract),
validate each segment parts.slice(1) against /^[A-Za-z_][A-Za-z0-9_]*$/ and
throw an Error for invalid segments, and/or escape single quotes by replacing
any "'" with "''" before joining so the final string passed into json_extract is
safe. Ensure the topLevelColumn continues to use this.quote(topLevelColumn) and
return the sanitized quotedField.
- Around line 384-396: The transaction method uses SAVEPOINTs on a shared
connection which allows concurrent top-level adapter.transaction(...) calls to
interleave and corrupt each other; fix by serializing top-level transactions
(e.g., add an internal mutex/queue like this.txMutex and acquire/release it
around the SAVEPOINT/RELEASE/ROLLBACK block in transaction so only one top-level
transaction runs at a time while nested calls still use savepoints), replace the
ad-hoc sp generation with a monotonic counter (e.g., this._spCounter++) to
guarantee uniqueness, and quote the savepoint identifier when executing
`SAVEPOINT`, `RELEASE SAVEPOINT`, and `ROLLBACK TO SAVEPOINT` (use a safe quoted
form like double-quotes) so identifiers are safe against refactors.
- Around line 325-382: The upsert implementation in upsert(...) ignores the
passed where clause and always keys ON CONFLICT/refetch on
modelSpec.primaryKey.fields and uses create[f] for PK refetch, which is unsafe;
change behavior to enforce that args.where is a simple set of equality
conditions covering every PK field (throw if not) and then use those equality
values (not create or update) when building the conflict/refetch key and
pkValuesWhere for the subsequent find call; also disallow or ignore PK fields in
update (or throw) so an update cannot change the PK used for refetch; update
references: upsert, args.where, modelSpec.primaryKey.fields, ON CONFLICT(...),
pkValuesWhere, mappedCreate, mappedUpdate, and find.
- Around line 195-200: The ORDER BY, buildSelect, and cursor construction use
this.quote(...) which incorrectly wraps JSON-path field names like
"metadata->>theme"; add a helper method (e.g., quoteField(field: FieldName<T>))
that detects JSON path syntax (-> and ->>) and rewrites it into
json_extract(...) expressions with proper quoted identifiers and unwrapping for
->> while leaving plain identifiers quoted via this.quote, then replace calls to
this.quote(s.field)/this.quote(f) in the sortBy mapper (ORDER BY builder),
buildSelect, and the cursor loop with this.quoteField(...) so JSON paths are
handled consistently (also keep buildWhereRecursive unchanged which already
handles JSON paths).
---
Nitpick comments:
In `@src/adapters/sqlite.test.ts`:
- Around line 408-432: Add tests that exercise multi-field keyset pagination and
JSON-path sort/select to cover gaps in the adapter: extend the existing
adapter.findMany<User> tests (the "should handle cursor pagination
ascending/descending" cases) to include sortBy with multiple fields (e.g., [{
field: "age", direction: "asc" }, { field: "id", direction: "asc" }]) and
cursor.after carrying multiple keys (e.g., { age: X, id: Y }), and add cases
where sort/select use JSON-path style FieldName (->> paths) to ensure sqlite.ts
code handling sortBy, select and cursor extraction supports JSON-paths and
multi-key comparisons; update or add expectations for ordering and lengths to
validate correct multi-field keyset behavior.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a0fe2be9-040b-4d5a-b06f-85fdb56a57b8
📒 Files selected for processing (4)
README.md, src/adapters/sqlite.test.ts, src/adapters/sqlite.ts, src/core.ts
```ts
async transaction<T>(fn: (tx: Adapter) => Promise<T>): Promise<T> {
  const sp = `sp_${Date.now()}_${Math.floor(Math.random() * 100000)}`;

  await this.db.run(`SAVEPOINT ${sp}`, []);
  try {
    const result = await fn(this);
    await this.db.run(`RELEASE SAVEPOINT ${sp}`, []);
    return result;
  } catch (error) {
    await this.db.run(`ROLLBACK TO SAVEPOINT ${sp}`, []);
    throw error;
  }
}
```
Transaction uses savepoints on the shared connection; concurrent top-level transaction() calls will interleave.

fn receives this (the adapter), so nested calls correctly use stacked SAVEPOINTs. However, two top-level await adapter.transaction(...) calls started in parallel will both open savepoints on the same underlying SqliteDatabase and interleave their statements, and an early ROLLBACK TO SAVEPOINT from one will undo the other's work within its range. Two minor related nits:

- `sp_${Date.now()}_${Math.floor(Math.random() * 100000)}` is not guaranteed unique; prefer a monotonic counter.
- SQLite identifiers for savepoints should be quoted to be safe against future refactors.

Consider documenting the single-in-flight-transaction assumption, or serializing transactions through an internal mutex.
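A promise-chain mutex is one way to serialize top-level transactions. This is a minimal sketch; the `txMutex` name and the commented integration point are assumptions, not the adapter's current code:

```ts
// Minimal promise-chain mutex: each run() waits for the previous holder to release.
class Mutex {
  private tail: Promise<void> = Promise.resolve();

  async run<T>(fn: () => Promise<T>): Promise<T> {
    const prev = this.tail;
    let release!: () => void;
    this.tail = new Promise<void>((resolve) => (release = resolve));
    await prev; // queue behind the previous transaction
    try {
      return await fn();
    } finally {
      release();
    }
  }
}

// Hypothetical usage inside the adapter (names assumed):
// async transaction<T>(fn: (tx: Adapter) => Promise<T>): Promise<T> {
//   return this.txMutex.run(() => this.runTopLevelTransaction(fn));
// }
```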
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/adapters/sqlite.ts` around lines 384 - 396, The transaction method uses
SAVEPOINTs on a shared connection which allows concurrent top-level
adapter.transaction(...) calls to interleave and corrupt each other; fix by
serializing top-level transactions (e.g., add an internal mutex/queue like
this.txMutex and acquire/release it around the SAVEPOINT/RELEASE/ROLLBACK block
in transaction so only one top-level transaction runs at a time while nested
calls still use savepoints), replace the ad-hoc sp generation with a monotonic
counter (e.g., this._spCounter++) to guarantee uniqueness, and quote the
savepoint identifier when executing `SAVEPOINT`, `RELEASE SAVEPOINT`, and
`ROLLBACK TO SAVEPOINT` (use a safe quoted form like double-quotes) so
identifiers are safe against refactors.
🧩 Analysis chain
Let me check the current state of the code on the PR branch.
🏁 Script executed:
#!/bin/bash
# Check the current state of the transaction method in sqlite.ts
rg -n "transaction|savepoint|SAVEPOINT|sp_|_spCounter|txMutex|mutex" src/adapters/sqlite.ts -i -A 2 -B 2
Length of output: 1236
Let me get more context on both the transaction implementations and README documentation.
🏁 Script executed:
#!/bin/bash
# Check the full transaction implementations
rg -n "transaction|BEGIN|COMMIT|ROLLBACK|inTransaction" src/adapters/sqlite.ts -A 3 -B 1
echo "---README---"
grep -n -i "limitation\|transaction\|concurrent" README.md -A 3 -B 1
Length of output: 2096
🏁 Script executed:
#!/bin/bash
# Get the full Limitations / Notes section from README
sed -n '/^## Notes/,/^## /p' README.md
echo "---"
# Also check if there's a Limitations section
sed -n '/^## Limitations/,/^## /p' README.md
Length of output: 501
🏁 Script executed:
#!/bin/bash
# Get the full Transactions section and surrounding context
sed -n '/^## Transactions/,/^## /p' README.md
Length of output: 402
@heiwen — Partially. The code has been substantially refactored since the original comment, so two of the three concerns are resolved, but one remains:
| Original concern | Status |
|---|---|
| Non-unique `sp_${Date.now()}` savepoint names | ✅ Resolved — SAVEPOINTs are no longer used |
| Unquoted savepoint identifiers | ✅ Resolved — SAVEPOINTs are no longer used |
| Concurrent top-level `transaction()` calls interleaving on the shared connection | ⚠️ Remaining concern (async driver) |
The refactored code now correctly uses BEGIN/COMMIT/ROLLBACK and the inTransaction flag prevents double-BEGIN on nested calls. For the sync driver (e.g. Bun's bun:sqlite), interleaving is impossible because JS is single-threaded and all driver calls are synchronous.
However, for the async SqliteDatabase driver, two concurrent top-level calls can still interleave:
```ts
// These run concurrently — both issue BEGIN, then queries interleave on the same connection
await Promise.all([
  adapter.transaction(async (tx) => { await tx.create(...); await tx.update(...); }),
  adapter.transaction(async (tx) => { await tx.create(...); await tx.update(...); }),
]);
```

Minor README inaccuracy
The README currently states "SQLite and Postgres support nested transactions via savepoints", but the current SQLite implementation does not use savepoints for nesting — nested calls simply reuse the outer adapter via fn(this). This line is inaccurate and could be misleading.
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/adapters/sqlite.test.ts (1)
372-432: Pagination `beforeEach` is declared after the first test, masking shared seed data.

The `beforeEach` at line 420 is registered for the whole Pagination `describe` (including the multi-field keyset pagination test above it), so that test actually runs against 10 rows (p1-p5 + m1-m5). It currently passes because the cursor `{age: 30, name: "B"}` happens to exclude all `p*` rows, but the test reads as if only m1-m5 exist and is fragile to future cursor changes.

Suggestion: move the `beforeEach` hook to the top of the Pagination `describe`, and either isolate the multi-field test into its own `describe` without the shared seed or extend the inline commentary to document that p1-p5 are also in-scope.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/adapters/sqlite.test.ts` around lines 372 - 432, The test "should handle multi-field keyset pagination correctly" runs before the file-scoped beforeEach that seeds p1-p5, causing it to see extra rows; either move the existing beforeEach (the Promise.all seeding loop) to the top of the Pagination describe so it runs for all tests consistently, or wrap this multi-field test in its own describe block (or add a dedicated beforeEach inside that describe) to isolate seeding for m1-m5; update/remove comments to document which seed set applies accordingly and ensure references to adapter.create remain unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/adapters/sqlite.ts`:
- Around line 602-623: Handle empty arrays in the "in" and "not_in" branches:
detect when list.length === 0 after building list (in the case "in" and case
"not_in" blocks that call this.mapWhereValue and use quotedField), and
short-circuit to a safe always-false or always-true predicate instead of
emitting IN ()/NOT IN (); specifically return an SQL fragment like "1=0" (no
params) for empty "in" and "1=1" (no params) for empty "not_in" so
mapWhereValue, params array, and quotedField usage remain correct and no invalid
SQL is produced.
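One way to realize that short-circuit is a standalone builder like the sketch below; the `{ sql, params }` fragment shape is assumed from context:

```ts
// Sketch: emit an always-false / always-true predicate for empty lists so the
// generated SQL never contains the invalid forms "IN ()" or "NOT IN ()".
function inClause(
  quotedField: string,
  list: unknown[],
  negate: boolean,
): { sql: string; params: unknown[] } {
  if (list.length === 0) {
    // Empty "in" matches nothing; empty "not_in" matches everything.
    return { sql: negate ? "1=1" : "1=0", params: [] };
  }
  const placeholders = list.map(() => "?").join(", ");
  return {
    sql: `${quotedField} ${negate ? "NOT IN" : "IN"} (${placeholders})`,
    params: list,
  };
}
```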
---
Nitpick comments:
In `@src/adapters/sqlite.test.ts`:
- Around line 372-432: The test "should handle multi-field keyset pagination
correctly" runs before the file-scoped beforeEach that seeds p1-p5, causing it
to see extra rows; either move the existing beforeEach (the Promise.all seeding
loop) to the top of the Pagination describe so it runs for all tests
consistently, or wrap this multi-field test in its own describe block (or add a
dedicated beforeEach inside that describe) to isolate seeding for m1-m5;
update/remove comments to document which seed set applies accordingly and ensure
references to adapter.create remain unchanged.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6b46ff4b-a7a0-4534-9976-668446b44f28
📒 Files selected for processing (4)
README.md, src/adapters/sqlite.test.ts, src/adapters/sqlite.ts, src/core.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- README.md
- Replace unsafe `as FieldName<T>` with 'in' checks and keyof T indexed accesses in getIdentityValues
- Add `// eslint-disable-next-line` comments for remaining necessary assertions in buildIdentityFilter
- Fix overly broad cast in postgres.ts from 'as Record<string, unknown>[]' to 'as Array<Record<string, unknown>>'

Co-authored-by: opencode <opencode@anomaly.co>
…into no-orm-sqlite-adapter
```ts
model: "conversations",
```

```ts
// Create
const created = await adapter.create<"users", User>({
```
Having both "users" and User on the generic feels redundant. Can this be reduced to just User? Especially since the adapter is already typed by schema? (applies to all below)
```ts
import { SqliteAdapter } from "@8monkey/no-orm/adapters/sqlite";

const db = new Database(":memory:");
const adapter = new SqliteAdapter(schema, db);
```
The adapter should be typed by schema, shouldn't it? Same for the ones below.
```ts
import { Database } from "bun:sqlite";
import { SqliteAdapter } from "@8monkey/no-orm/adapters/sqlite";

const db = new Database(":memory:");
```
Both are valuable; keep both.
I am not sure about keeping both, memory and file seem to be the same thing, storage location.
They are not the same thing. This is documentation, people need to understand what options they have.
```ts
// --- Internal PG Syntax Helpers ---

const quote = (s: string) => `"${s.replaceAll('"', '""')}"`;
```
This is a regex which is called dozens of times per query. How can we avoid that? Can we use safe interfaces from the drivers that do that for us?
Why not use driver-native identifier quoting?
While sql("identifier") in postgres.js and Bun.sql, pg.escapeIdentifier in node-postgres are valid for Postgres drivers, the three SQLite drivers (bun:sqlite, better-sqlite3, sqlite) have no quoting API.
Suggestion
For Postgres drivers that expose a native quoting — sql("identifier") in postgres.js and Bun.sql, pg.escapeIdentifier in node-postgres — we delegate quoting to the driver directly.
For SQLite drivers (bun:sqlite, better-sqlite3, sqlite), we use quote() instead. On top of this, we drop createQuotedSchema, which pre-caches quoted identifiers at construction time. That means eliminating the startup allocation and the cache lookup indirection, and simplifies the code.
All drivers support parameterised queries. When you use parameterised queries, you don't need to quote anything manually.
Let me try to clarify the confusion:
1/ The reason quote() exists is not SQL injection — it's reserved words. A user of no-orm might define a field named order, group, select, or from. Without quoting:
SELECT order, group FROM users WHERE from = ? -- ❌ syntax error
SELECT "order", "group" FROM users WHERE "from" = ? -- ✅
2/ Parameterised queries bind values, not identifiers. They are two separate concerns.
For SELECT * FROM users WHERE id = ?, the driver safely binds id's value via ? — but users and id are already parsed as literals before any binding happens. If those names happen to be SQL reserved words (order, from, select), the query fails at parse time regardless of parameterization.
Bun's own docs acknowledge this distinction — the Dynamic table names section explicitly provides a sql() helper specifically for identifiers, separate from value parameters:
"When you need to reference tables or schemas dynamically, use the `sql()` helper to ensure proper escaping"
quote() serves the same purpose for drivers that don't provide a helper.
Understand, but that's not what quote is doing:
```ts
const quote = (s: string) => `"${s.replaceAll('"', '""')}"`;
```
For these cases, you could simply quote everything, which doesn't require a regex, just a prepend / append.
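As a sketch of that suggestion: if identifiers are validated once up front (e.g. when the schema is constructed), the per-query path needs no replaceAll at all. The helper names below are illustrative assumptions:

```ts
// Assumption: identifiers are checked at schema build time, so they can never
// contain a double quote. Quoting then degenerates to prepend/append.
const IDENT = /^[A-Za-z_][A-Za-z0-9_]*$/;

// Run once per field at construction time, not per query.
function assertIdentifier(s: string): string {
  if (!IDENT.test(s)) throw new Error(`Invalid identifier: ${s}`);
  return s;
}

// Hot path: plain concatenation, safe for reserved words like "order" or "from".
const quote = (s: string) => '"' + s + '"';
```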
I got it, shall we proceed with this improvement and the suggestion above?
Not clear what exactly you mean?
Move recursive WHERE logic, SET, and SORT builders to utils/sql.ts using dependency injection. This reduces duplication in SQL adapters and improves maintainability while adhering to performance guidelines by avoiding object spreads.
```ts
const fields = model.fields;
const res: Record<string, unknown> = {};
// eslint-disable-next-line typescript-eslint/no-unsafe-type-assertion -- select fields are strings
const keys = (select as readonly string[]) ?? Object.keys(row);
```
What's the benefit of having select at all in this method vs just using Object.keys(row)?
```ts
/**
 * Prepares a data object for database insertion/update.
 * Handles JSON stringification and optional adapter-specific mapping.
 */
export function toDbRow(
```
Naming: we don't know whether it is a row or how it is stored, generally it's just a record.
What in here is really sql specific, or would this apply to a non-sql adapter like MongoDB as well? (asking this for the third time).
```ts
/**
 * Maps a raw database row to the inferred model type T.
 * Handles JSON parsing, boolean conversion, and numeric mapping.
 */
export function toRow<T extends Record<string, unknown>>(
```
Both toRow and toDbRow feel like code that should be driver specific, since (a) it will be hard to maintain this across many drivers (b) many drivers handle a lot of this code natively, which we are now making inefficient by adding another layer.
```ts
// --- Executor factories ---

function createPostgresJsExecutor(
```
Feels like way too many eslint-disables in here. You own most of these interfaces (e.g. Fragment, QueryExecutor, ...), so casts should not be necessary for them.
```ts
const { model: modelName, data, select } = args;
const model = this.schema[modelName]!;
const input = toDbRow(model, data);
const fields = Object.keys(input);
const sqlFields = fields.map((f) => quote(f)).join(", ");
const sqlSelect = select ? select.map((s) => quote(s)).join(", ") : "*";

const strings = [`INSERT INTO ${quote(modelName)} (${sqlFields}) VALUES (`];
for (let i = 1; i < fields.length; i++) strings.push(", ");
strings.push(`) RETURNING ${sqlSelect}`);
const params = fields.map((f) => input[f]);
```
Looks like 80% repetitive with the sqlite adapter, what's the right level of abstraction here? same for many of the other functions below ... Ultimately, you are creating a sql query, so that sounds like it should be part of common/sql.ts, with a few parameters where things might differ. You do that for where, for example, but not for the rest. Why not?
```ts
strings[0] = prefix + strings[0]!;
strings[strings.length - 1] += suffix;
return { strings, params: [...fragment.params] };
}
```
Please get rid of this wrap() stuff; it is extremely clumsy, makes the code unnecessarily hard to read, and is slow. Your SQL query consists of individual parts (like select, where, limit, ...); after building each part you simply join() them.
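A join-based composition along those lines might look like the sketch below; the `Fragment` shape is assumed from context and the helper name is illustrative:

```ts
// Sketch: build each clause independently, then join the non-empty parts once,
// instead of mutating a strings array with prefix/suffix wrapping.
interface Fragment {
  sql: string;
  params: unknown[];
}

function joinFragments(fragments: Fragment[], separator = " "): Fragment {
  const parts: string[] = [];
  const params: unknown[] = [];
  for (const f of fragments) {
    if (f.sql.length === 0) continue; // skip absent clauses (e.g. no WHERE)
    parts.push(f.sql);
    for (const p of f.params) params.push(p);
  }
  return { sql: parts.join(separator), params };
}
```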
```ts
const strings = [...fragments[0]!.strings];
const params = [...fragments[0]!.params];
```
Avoid object spreads ... you can simply use slice() here, which is native and performance optimized.
```ts
// --- Internal PG Syntax Helpers ---

const quote = (s: string) => `"${s}"`;
```
If the only reason for a "quote" function to exist is to pass a single character around, just make the character the parameter to the functions, with a reasonable default. That makes the code way simpler to read.
```ts
mapValue?: MapValueFn;
}

function whereRecursive<T>(clause: Where<T>, options: BuildOptions): Fragment {
```
Generally avoid recursion, it blows up the stack, and use loop versions instead.
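An explicit-stack loop can replace the recursion; the `WhereNode` shape below is invented purely for illustration (the real `Where<T>` type differs):

```ts
// Hypothetical nested where shape, for illustration only.
interface WhereNode {
  field?: string;
  and?: WhereNode[];
  or?: WhereNode[];
}

// Traverse the tree iteratively with an explicit stack so deeply nested
// clauses cannot blow the call stack.
function countConditions(root: WhereNode): number {
  let count = 0;
  const stack: WhereNode[] = [root];
  while (stack.length > 0) {
    const node = stack.pop()!;
    if (node.field !== undefined) count++;
    for (const child of node.and ?? []) stack.push(child);
    for (const child of node.or ?? []) stack.push(child);
  }
  return count;
}
```

The same pattern applies to the SQL-building traversal: carry per-node state (e.g. the pending connective) on the stack alongside each node.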
```ts
return { strings, params: [...fragment.params] };
}

export type ColumnExprFn = (fieldName: string, path?: string[], value?: unknown) => Fragment;
```
Why do you need a ColumnExprFn if it is always the same across all drivers?
```ts
this.executor = isQueryExecutor(driver) ? driver : createSqliteExecutor(driver);
}

private getOptions(model: Model) {
```
Summary by CodeRabbit

- New Features
- Documentation