
Feat/one2b action items trpc#98

Open
Prajjawalk wants to merge 5 commits into main from
feat/one2b-action-items-trpc

Conversation


Prajjawalk (Collaborator) commented May 2, 2026

Description

Summary

Two new tRPC procedures on actionRouter for the upcoming one2b
agent integration. Foundational work — both procedures are dormant
until the companion mastra PR's gated agent calls them.

What this adds

actionRouter.bulkCreateFromTranscript (mutation)

Bulk-creates Action rows from a list of items extracted by the agent
from a meeting transcript.

  • Verifies caller's workspace membership and that the transcription
    session belongs to the same workspace
  • For each item, applies a 3-tier assignee resolution so the agent
    doesn't have to know whether the assignee has an exponential account:
    1. Email matches a workspace UserActionAssignee
    2. Email matches an existing TranscriptionSessionParticipant for
      this meeting → ActionParticipantAssignee
    3. Email is unknown → auto-create a Participant + ActionParticipantAssignee
  • Stamps each action with sourceType='meeting', sourceId=transcriptionSessionId,
    source='agent-transcript', lastUpdatedBy='AGENT' for downstream
    audit / discovery
  • Maps agent priority (HIGH/MEDIUM/LOW) → exponential's
    ("1st Priority"/"Quick"/"5th Priority")
  • Per-item try/catch so a single bad item doesn't abort the batch;
    failures returned in skipped with a reason
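The three-tier resolution above can be sketched as a pure function over in-memory stand-ins for the Prisma models (a simplified illustration only — `WorkspaceUser` and `Participant` here are flattened shapes, not the real schema):

```typescript
// Simplified in-memory stand-ins for the Prisma models (illustrative only).
type WorkspaceUser = { id: string; email: string };
type Participant = { id: string; email: string; name: string | null };

type Resolution =
  | { kind: "user"; userId: string }                // tier 1: ActionAssignee
  | { kind: "participant"; participantId: string }; // tier 2/3: ActionParticipantAssignee

// Resolve an assignee email using the 3-tier strategy described above.
// Note: tier 3 mutates `participants` to model the auto-created record.
function resolveAssignee(
  email: string,
  name: string | null,
  workspaceUsers: WorkspaceUser[],
  participants: Participant[],
): Resolution {
  // Tier 1: email belongs to a workspace user with an exponential account.
  const user = workspaceUsers.find((u) => u.email === email);
  if (user) return { kind: "user", userId: user.id };

  // Tier 2: email matches an existing meeting participant.
  const existing = participants.find((p) => p.email === email);
  if (existing) return { kind: "participant", participantId: existing.id };

  // Tier 3: unknown email — auto-create a participant record.
  const created: Participant = {
    id: `p-${participants.length + 1}`,
    email,
    name,
  };
  participants.push(created);
  return { kind: "participant", participantId: created.id };
}
```

The key property is that the agent never needs to know which tier applies — it always passes an email and optional name.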

actionRouter.findBySource (query)

Workspace-scoped lookup by sourceType + optional sourceId. Optional
assigneeEmail filter resolves through both the User-backed
ActionAssignee path and the Participant-backed
ActionParticipantAssignee path so callers don't need to know which
assignment kind was used.
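The filter construction can be sketched as a small pure helper — plain objects stand in for Prisma filter types, and `resolvedUserId` (a name invented here for illustration) represents the result of the workspace-user lookup:

```typescript
// Simplified sketch of findBySource's where-clause construction.
type Where = Record<string, unknown>;

function buildSourceWhere(input: {
  workspaceId: string;
  sourceType: string;
  sourceId?: string;
  status?: string;
  assigneeEmail?: string;
  resolvedUserId?: string | null; // result of the workspace-user lookup
}): Where {
  const where: Where = {
    workspaceId: input.workspaceId,
    sourceType: input.sourceType,
  };
  if (input.sourceId) where.sourceId = input.sourceId;
  if (input.status) where.status = input.status;

  if (input.assigneeEmail) {
    if (input.resolvedUserId) {
      // Email belongs to a workspace user: filter on the User-backed path.
      where.assignees = { some: { userId: input.resolvedUserId } };
    } else {
      // Otherwise filter on the Participant-backed path by email.
      where.participantAssignees = {
        some: { participant: { email: input.assigneeEmail } },
      };
    }
  }
  return where;
}
```

As sketched, the two paths are mutually exclusive per query; the caller just supplies an email and gets whichever branch applies.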

Helper — findUserByEmailInWorkspace

Added to src/server/services/access/resolvers/workspaceResolver.ts.
Looks up a User by email and verifies workspace membership in one call.
Reusable for any future agent integration that resolves humans by email.
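A minimal dependency-free sketch of the helper's behavior, using in-memory arrays in place of the Prisma client (the real implementation in workspaceResolver.ts queries the database):

```typescript
// In-memory stand-ins for the User and WorkspaceUser tables (illustrative only).
type User = { id: string; email: string; name: string | null };
type Membership = { userId: string; workspaceId: string };

// Look up a user by email AND verify workspace membership in one call,
// mirroring the shape of findUserByEmailInWorkspace.
function findUserByEmailInWorkspace(
  email: string,
  workspaceId: string,
  users: User[],
  memberships: Membership[],
): User | null {
  const user = users.find((u) => u.email === email);
  if (!user) return null; // no account with this email
  const isMember = memberships.some(
    (m) => m.userId === user.id && m.workspaceId === workspaceId,
  );
  return isMember ? user : null; // account exists but outside this workspace
}
```

Collapsing the two checks into one call means callers can't accidentally skip the membership check after a successful email lookup.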

Updated — actionRouter.update

Added optional lastUpdatedBy (enum: AGENT/USER_EMAIL/USER_WHATSAPP/USER_UI)
and lastUpdatedSource (string) inputs so agents can stamp source
attribution on updates the same way bulkCreateFromTranscript does on
creates.
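For illustration, the enum restriction is equivalent to this dependency-free TypeScript (the router itself expresses it with zod's z.enum):

```typescript
// Dependency-free equivalent of the enum restriction on lastUpdatedBy.
const LAST_UPDATED_BY = ["AGENT", "USER_EMAIL", "USER_WHATSAPP", "USER_UI"] as const;
type LastUpdatedBy = (typeof LAST_UPDATED_BY)[number];

// Type guard: narrows unknown input to the allowed attribution values.
function isLastUpdatedBy(value: unknown): value is LastUpdatedBy {
  return (
    typeof value === "string" &&
    (LAST_UPDATED_BY as readonly string[]).includes(value)
  );
}
```

In the actual input schema the same constraint would read as `z.enum([...]).optional()`, with `lastUpdatedSource` as a plain optional string.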

Test infra

  • 6 integration tests for bulkCreateFromTranscript covering each
    assignee path, batch resilience, and authorization
  • 5 integration tests for findBySource covering filtering and auth
  • truncateAllTables now clears TranscriptionSession,
    TranscriptionSessionParticipant, ActionParticipantAssignee

Internal — query include shape

bulkCreateFromTranscript and findBySource both include: { participant: true }
on participantAssignees so callers get participant name/email rendered
without a follow-up query.

Why this is safe to merge

These procedures have no callers in the existing codebase. The mastra
side that consumes them is gated behind MASTRA_ONE2B_AGENTS_ENABLED
(off by default) so even after both PRs merge, nothing exercises this
path until cutover. See mastra/docs/one2b-cutover.md
for the full sequence.

Test plan

  • Lint + typecheck pass (npm run check)
  • New integration tests pass (npm run test:integration)
  • No existing routers / Prisma calls regressed
  • Manual sanity check: invoke bulkCreateFromTranscript against a real
    workspace + transcript, then confirm in Prisma Studio that the
    Action rows + ActionAssignee/ActionParticipantAssignee rows were
    created as expected

Companion PR

Mastra: https://github.com/positonic/mastra/pull/new/feat/one2b-action-items-agent

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Performance improvement
  • Code refactoring

Database Changes

  • This PR includes database schema changes (migrations)
  • NO database changes in this PR

If migrations are included:

  • Migration name(s):
  • Schema changes:
  • Coordinated with team: Yes/No

Testing

  • Tested locally
  • Tested in preview deployment
  • Added/updated tests
  • All tests passing

Checklist

  • My code follows the project's style guidelines
  • I have performed a self-review of my own code
  • I have commented my code in hard-to-understand areas
  • My changes generate no new warnings
  • Database migrations tested on test database
  • Coordinated with team on any migration conflicts

Deployment Notes

Any special deployment considerations?

Related Issues

Closes #

Summary by CodeRabbit

  • New Features

    • Bulk-create actions from transcripts with per-item isolation and automatic assignee resolution
    • Find/discover actions by source type/ID, with optional assignee and status filtering
    • Action updates now accept optional attribution fields
  • Tests

    • New unit/integration suites covering bulk-create and find-by-source flows; added mock caller helpers and isolated DB-free tests
  • Chores

    • Stricter test database safety and truncation; added a test mocking dependency; expanded testing documentation

… integration

Two new tRPC procedures on actionRouter to support the upcoming
one2b agent port. Both verify caller workspace membership before
any DB work and accept workspaceId explicitly so the caller never
relies on session defaults.

bulkCreateFromTranscript:
- Accepts a transcription session id + array of action items extracted
  from the transcript
- For each item, applies 3-tier assignee resolution:
  1) email matches a workspace User -> ActionAssignee
  2) email matches an existing TranscriptionSessionParticipant
     for this meeting -> ActionParticipantAssignee
  3) email is unknown -> auto-create a Participant + assign
  This avoids forcing every meeting attendee to onboard to exponential
  just to be assigned a task
- Stamps each action with sourceType='meeting', sourceId=transcript id,
  source='agent-transcript', lastUpdatedBy='AGENT' so the action's
  origin is queryable later
- Maps agent priority (HIGH/MEDIUM/LOW) to exponential's priority
  vocabulary; uses status='ACTIVE' (OVERDUE is derived at query time
  from dueDate, IN_PROGRESS is tracked via kanbanStatus per existing
  conventions)
- Per-item try/catch so a single bad item doesn't abort the batch;
  failures are returned in `skipped` with a reason

findBySource:
- Workspace-scoped query by sourceType + optional sourceId
- Optional assigneeEmail filter that resolves through both
  ActionAssignee (User) and ActionParticipantAssignee (Participant)
  paths so callers don't need to know which kind of assignment
  was used

Helper:
- findUserByEmailInWorkspace in workspaceResolver — lookup by email
  + WorkspaceUser membership check

Tests:
- 6 integration tests for bulkCreateFromTranscript covering each
  assignee path, batch resilience, and authorization
- 5 integration tests for findBySource covering filtering and auth

Test infra:
- truncateAllTables now clears TranscriptionSession,
  TranscriptionSessionParticipant, and ActionParticipantAssignee
  so integration tests can run against the new tables
…nt in source queries

Two corrections surfaced by the mastra-side action-items-tools port:

1. action.update zod input was missing lastUpdatedBy + lastUpdatedSource,
   so the agent couldn't stamp source attribution on updates the way
   bulkCreateFromTranscript does on creates. Added both as optional
   inputs (lastUpdatedBy is enum-restricted to AGENT/USER_EMAIL/
   USER_WHATSAPP/USER_UI to match the schema convention).

2. bulkCreateFromTranscript and findBySource both included
   participantAssignees: true without joining the participant relation,
   so participant-backed assignees came back without name/email at
   runtime. Switched to participantAssignees: { include: { participant:
   true } } in both spots so callers get the participant data they
   need to render assignee info.

vercel Bot commented May 2, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: exponential · Status: Ready · Actions: Preview, Comment · Updated (UTC): May 4, 2026 11:23am



coderabbitai Bot commented May 2, 2026

📝 Walkthrough

Adds two action router procedures (bulkCreateFromTranscript, findBySource), extends action.update input with optional attribution fields, adds a workspace-user-by-email resolver, introduces unit tests for the new endpoints with a Prisma mock, tightens test DB safety and truncation, adds a mock caller helper, updates docs, and adds a dev dependency.

Action Router & Test infra

  • Data Shape & Imports — src/server/api/routers/action.ts: Add Prisma type
    import and import findUserByEmailInWorkspace for typed includes and
    assignee resolution.
  • Workspace User Resolver — src/server/services/access/resolvers/workspaceResolver.ts:
    Add findUserByEmailInWorkspace(email, workspaceId) returning
    { id, email, name } when a user exists and is a workspace member, else null.
  • Attribution Fields — src/server/api/routers/action.ts: Extend
    actionRouter.update input schema with optional lastUpdatedBy
    ("AGENT" | "USER_EMAIL" | "USER_WHATSAPP" | "USER_UI") and optional
    lastUpdatedSource (string).
  • Core Endpoints — src/server/api/routers/action.ts: Add
    bulkCreateFromTranscript: validates workspace/transcript/project scope;
    creates one action per transcript item with per-item error isolation;
    resolves assignees by workspace user email → existing participant →
    create participant; returns { created, skipped }. Add findBySource:
    validates workspace, builds a Prisma where filter for workspaceId,
    sourceType, optional sourceId, optional status, optional assigneeEmail
    (workspace user vs participant email), returns actions with includes
    and a take limit.
  • Test Caller & Mocking — src/test/trpc-helpers.ts,
    src/server/api/routers/__tests__/action.test.ts: Add
    createMockCaller({ userId, db, ... }) for unit tests against a deep
    Prisma mock; new unit tests for bulkCreateFromTranscript and
    findBySource using vitest-mock-extended and extensive module mocks.
  • Integration Tests Cleanup — src/server/api/routers/__tests__/action.integration.test.ts:
    Removed the "restricted projects" integration test block (file now ends earlier).
  • Test DB Safety & Truncation — src/test/test-db.ts: startTestDatabase()
    now requires DATABASE_URL_TEST, blocks managed-host patterns, enforces
    local/test-name rules unless allowed; inserts a __test_db_marker row.
    truncateAllTables() verifies the marker, uses row-count heuristics to
    refuse likely-prod DBs, discovers public tables, and issues deletes
    within a single transaction with session_replication_role = 'replica'.
  • Docs / Guidance — CLAUDE.md: Clarify unit vs integration caller helpers,
    mandate CI-only integration tests/Testcontainers, and document test DB
    safety requirements.
  • Dev Dependency — package.json: Add vitest-mock-extended to devDependencies.
sequenceDiagram
    actor Agent
    participant Router as action.ts<br/>bulkCreateFromTranscript
    participant Workspace as workspaceResolver<br/>findUserByEmailInWorkspace
    participant DB as Prisma/Database

    Agent->>Router: bulkCreateFromTranscript(workspaceId, transcriptId, items)
    Router->>DB: Verify workspace membership & load transcript/project
    loop per transcript item
        Router->>Workspace: Resolve assignee by email
        Workspace->>DB: Query user by email and workspaceUser membership
        alt workspace user found
            Router->>DB: Create action + attach user assignee
        else workspace user not found
            Router->>DB: Query/create participant by email
            Router->>DB: Create action + attach participant assignee
        end
        alt success
            Router-->>Agent: add to created[]
        else failure
            Router-->>Agent: add to skipped[]
        end
    end
    Router-->>Agent: Return { created, skipped }
sequenceDiagram
    actor Client
    participant Router as action.ts<br/>findBySource
    participant Workspace as workspaceResolver<br/>findUserByEmailInWorkspace
    participant DB as Prisma/Database

    Client->>Router: findBySource(workspaceId, sourceType, [sourceId], [status], [assigneeEmail], [limit])
    Router->>DB: Verify workspace membership
    alt assigneeEmail provided
        Router->>Workspace: Resolve assignee by email
        Workspace->>DB: Query user & workspaceUser
        alt workspace user
            Router->>DB: Query actions filtered by user assignee + source + workspace
        else
            Router->>DB: Query actions filtered by participant email + source + workspace
        end
    else
        Router->>DB: Query actions filtered by source + workspace
    end
    DB-->>Router: Return actions (includes + ordered + limited)
    Router-->>Client: Return actions[]

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • Feat/one2b agent schema #97: Introduced schema/migration changes (source/attribution fields and participant-assignee changes) that the new endpoints and tests rely on.

Poem

🐰 I hopped through transcripts, nibbling lines with glee,

Bulk-creating actions, one, two, three.
Assignees found or grown anew, neat and spry,
Actions by source now leap up high.
A rabbit’s cheer — clean DBs and passing CI!

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 75.00%, below the
    required 80.00% threshold. Resolution: write docstrings for the functions
    missing them to satisfy the coverage threshold.
  • Title check — ❓ Inconclusive: The title 'Feat/one2b action items trpc' is
    partially related to the changeset but overly broad, using the feature
    branch name rather than a clear summary of the main change. Resolution:
    refine the title, e.g. 'Add bulkCreateFromTranscript and findBySource
    procedures to actionRouter' or 'Add one2b agent integration tRPC
    procedures for action management'.
✅ Passed checks (3 passed)
  • Description check — ✅ Passed: The pull request description is
    comprehensive, covering the summary, feature details, test coverage, and
    deployment safety; however, several test checklist items remain unchecked
    and the deployment notes section is blank.
  • Linked Issues check — ✅ Passed: Skipped because no linked issues were
    found for this pull request.
  • Out of Scope Changes check — ✅ Passed: Skipped because no linked issues
    were found for this pull request.



coderabbitai Bot left a comment


Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/server/api/routers/action.ts`:
- Around line 2317-2329: The inline workspace membership check using
ctx.db.workspaceUser.findUnique (and throwing TRPCError) incorrectly excludes
team-based access; replace this manual check with the centralized access
resolver from src/server/services/access (e.g., call the workspace access
helper/resolver used elsewhere) so it considers both direct and team
memberships, remove the duplicate logic around workspaceUser.findUnique in the
action router, and ensure the resolver's boolean or thrown error is used
consistently (preserving the FORBIDDEN behavior) in the same locations where
ctx.db.workspaceUser.findUnique and TRPCError are currently referenced.
- Around line 2394-2469: Wrap the entire per-item write flow (the calls starting
with ctx.db.action.create and including findUserByEmailInWorkspace lookup,
ctx.db.actionAssignee.create, ctx.db.transcriptionSessionParticipant.create,
ctx.db.actionParticipantAssignee.create, and the final
ctx.db.action.findUniqueOrThrow hydration) in an item-scoped database
transaction so any failure rolls back the action row and related writes; use the
transaction API (e.g., ctx.db.$transaction or equivalent) to obtain a
transactional client (tx) and replace ctx.db calls inside the transaction with
tx, commit only on success, and only push the hydrated result into created after
the transaction completes, otherwise push the error into skipped.
- Around line 2293-2544: The router’s bulkCreateFromTranscript logic (including
assignee resolution, participant creation, provenance/source mapping and
per-item persistence) should be moved into a dedicated service (e.g.,
ActionService.bulkCreateFromTranscript) and the router should delegate to that
service; implement a service function that accepts the ctx (or db + session),
input (transcriptionSessionId, workspaceId, projectId, items) and returns
{created, skipped}, encapsulating the current logic around mapPriority,
findUserByEmailInWorkspace lookups, transcriptionSessionParticipant.create,
action.create, actionAssignee.create, actionParticipantAssignee.create, and the
includeShape hydration; also extract the source/query construction used in
findBySource into a service helper (e.g., ActionService.buildSourceQuery) so
findBySource delegates to it and only handles input validation and membership
checks; ensure the router still verifies workspace membership (as currently
done) and pass necessary context into the new service functions to preserve
transactions/partial failures and result shapes.
- Around line 2515-2527: When input.assigneeEmail is present, include both
assignee paths instead of choosing one: call
findUserByEmailInWorkspace(input.assigneeEmail, input.workspaceId) to get the
workspace user (if any) and then build the query so it matches either
ActionAssignee (where.assignees.some.userId === user.id) OR participantAssignees
(where.participantAssignees.some.participant.email === input.assigneeEmail); if
no workspace user exists still include the participantAssignees condition. Use
an OR clause on the where object (combining the assignees condition and the
participantAssignees condition) so both paths are searched when
input.assigneeEmail is provided.

In `@src/server/services/access/resolvers/workspaceResolver.ts`:
- Around line 93-117: The current findUserByEmailInWorkspace only checks
db.workspaceUser and so misses users who access the workspace via team
membership; update findUserByEmailInWorkspace to treat team-based access as
valid by calling or reusing the existing getWorkspaceMembership(user.id,
workspaceId) (from the same file) instead of only checking
db.workspaceUser.findUnique with userId_workspaceId; if getWorkspaceMembership
returns a membership, return the user object as before, otherwise return null.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a057f101-a72d-44c8-8b71-f9d904bd5d2b

📥 Commits

Reviewing files that changed from the base of the PR and between d395d59 and ec08114.

📒 Files selected for processing (4)
  • src/server/api/routers/__tests__/action.integration.test.ts
  • src/server/api/routers/action.ts
  • src/server/services/access/resolvers/workspaceResolver.ts
  • src/test/test-db.ts

Comment on lines +2293 to +2544
bulkCreateFromTranscript: protectedProcedure
  .input(
    z.object({
      transcriptionSessionId: z.string(),
      workspaceId: z.string(),
      projectId: z.string().nullable().optional(),
      items: z
        .array(
          z.object({
            description: z.string().min(1),
            assigneeEmail: z.string().email().optional(),
            assigneeName: z.string().optional(),
            priority: z.enum(["HIGH", "MEDIUM", "LOW"]).default("MEDIUM"),
            dueDate: z.string().datetime().optional(),
            category: z.string().optional(),
            rawText: z.string().optional(),
          }),
        )
        .min(1),
    }),
  )
  .mutation(async ({ ctx, input }) => {
    const userId = ctx.session.user.id;

    // 1. Verify caller workspace membership.
    const membership = await ctx.db.workspaceUser.findUnique({
      where: {
        userId_workspaceId: { userId, workspaceId: input.workspaceId },
      },
      select: { userId: true },
    });
    if (!membership) {
      throw new TRPCError({
        code: "FORBIDDEN",
        message: "You are not a member of this workspace",
      });
    }

    // 2. Verify the transcript exists and belongs to this workspace.
    const transcript = await ctx.db.transcriptionSession.findUnique({
      where: { id: input.transcriptionSessionId },
      select: { id: true, workspaceId: true },
    });
    if (!transcript) {
      throw new TRPCError({
        code: "NOT_FOUND",
        message: "Transcription session not found",
      });
    }
    if (transcript.workspaceId !== input.workspaceId) {
      throw new TRPCError({
        code: "FORBIDDEN",
        message: "Transcription session does not belong to this workspace",
      });
    }

    // 3. Resolve effective projectId. Verify ownership if provided.
    const resolvedProjectId: string | null = input.projectId ?? null;
    if (resolvedProjectId) {
      const project = await ctx.db.project.findUnique({
        where: { id: resolvedProjectId },
        select: { id: true, workspaceId: true },
      });
      if (!project || project.workspaceId !== input.workspaceId) {
        throw new TRPCError({
          code: "FORBIDDEN",
          message: "Project does not belong to this workspace",
        });
      }
    }

    // 4. Map agent priority strings to internal priority values.
    const mapPriority = (p: "HIGH" | "MEDIUM" | "LOW"): string => {
      if (p === "HIGH") return "1st Priority";
      if (p === "LOW") return "5th Priority";
      return "Quick";
    };

    const includeShape = {
      project: true,
      transcriptionSession: { select: { id: true, title: true } },
      assignees: {
        include: {
          user: { select: { id: true, name: true, email: true, image: true } },
        },
      },
      participantAssignees: { include: { participant: true } },
    } satisfies Prisma.ActionInclude;

    type CreatedAction = Prisma.ActionGetPayload<{ include: typeof includeShape }>;
    const created: CreatedAction[] = [];
    const skipped: { rawText: string | undefined; reason: string }[] = [];

    // 5. Process each item with a per-item try/catch so one failure
    // doesn't abort the whole batch. We deliberately do NOT wrap the
    // loop in an outer transaction — each item is logically independent
    // and we want partial successes to persist.
    for (const item of input.items) {
      try {
        const dueDate = item.dueDate ? new Date(item.dueDate) : null;

        const action = await ctx.db.action.create({
          data: {
            name: item.description,
            description: item.rawText ?? null,
            dueDate,
            priority: mapPriority(item.priority),
            status: "ACTIVE",
            workspaceId: input.workspaceId,
            projectId: resolvedProjectId,
            transcriptionSessionId: input.transcriptionSessionId,
            createdById: userId,
            source: "agent-transcript",
            sourceType: "meeting",
            sourceId: input.transcriptionSessionId,
            lastUpdatedBy: "AGENT",
            lastUpdatedSource: "agent-action-items-tool",
          },
        });

        // 5b. Resolve assignee using 3-tier strategy.
        if (item.assigneeEmail) {
          const workspaceUser = await findUserByEmailInWorkspace(
            item.assigneeEmail,
            input.workspaceId,
          );

          if (workspaceUser) {
            await ctx.db.actionAssignee.create({
              data: { actionId: action.id, userId: workspaceUser.id },
            });
          } else {
            const existingParticipant =
              await ctx.db.transcriptionSessionParticipant.findUnique({
                where: {
                  transcriptionSessionId_email: {
                    transcriptionSessionId: input.transcriptionSessionId,
                    email: item.assigneeEmail,
                  },
                },
                select: { id: true },
              });

            const participantId =
              existingParticipant?.id ??
              (
                await ctx.db.transcriptionSessionParticipant.create({
                  data: {
                    transcriptionSessionId: input.transcriptionSessionId,
                    workspaceId: input.workspaceId,
                    email: item.assigneeEmail,
                    name: item.assigneeName ?? null,
                  },
                  select: { id: true },
                })
              ).id;

            await ctx.db.actionParticipantAssignee.create({
              data: {
                actionId: action.id,
                participantId,
                workspaceId: input.workspaceId,
              },
            });
          }
        }

        // Reload with the include shape so the returned object is consistent.
        const hydrated = await ctx.db.action.findUniqueOrThrow({
          where: { id: action.id },
          include: includeShape,
        });
        created.push(hydrated);
      } catch (err) {
        const reason =
          err instanceof Error ? err.message : "Unknown error creating action";
        skipped.push({ rawText: item.rawText, reason });
      }
    }

    return { created, skipped };
  }),

// Look up actions by their (sourceType, sourceId) provenance, optionally
// filtered by assignee email and status. Used by the one2b agent to
// surface actions tied to a meeting/email/etc.
findBySource: protectedProcedure
  .input(
    z.object({
      workspaceId: z.string(),
      sourceType: z.string(),
      sourceId: z.string().optional(),
      assigneeEmail: z.string().email().optional(),
      status: z.string().optional(),
      limit: z.number().min(1).max(100).default(20),
    }),
  )
  .query(async ({ ctx, input }) => {
    const userId = ctx.session.user.id;

    // Verify caller workspace membership.
    const membership = await ctx.db.workspaceUser.findUnique({
      where: {
        userId_workspaceId: { userId, workspaceId: input.workspaceId },
      },
      select: { userId: true },
    });
    if (!membership) {
      throw new TRPCError({
        code: "FORBIDDEN",
        message: "You are not a member of this workspace",
      });
    }

    // eslint-disable-next-line @typescript-eslint/no-explicit-any
    const where: any = {
      workspaceId: input.workspaceId,
      sourceType: input.sourceType,
    };
    if (input.sourceId) where.sourceId = input.sourceId;
    if (input.status) where.status = input.status;

    if (input.assigneeEmail) {
      const user = await findUserByEmailInWorkspace(
        input.assigneeEmail,
        input.workspaceId,
      );
      if (user) {
        where.assignees = { some: { userId: user.id } };
      } else {
        where.participantAssignees = {
          some: { participant: { email: input.assigneeEmail } },
        };
      }
    }

    return ctx.db.action.findMany({
      where,
      include: {
        project: true,
        transcriptionSession: { select: { id: true, title: true } },
        assignees: {
          include: {
            user: { select: { id: true, name: true, email: true, image: true } },
          },
        },
        participantAssignees: { include: { participant: true } },
      },
      orderBy: { createdAt: "desc" },
      take: input.limit,
    });
  }),


🛠️ Refactor suggestion | 🟠 Major | 🏗️ Heavy lift

Move this transcript-assignment flow into a service.

The router now owns assignee resolution, participant creation, provenance mapping, and source-query construction. Pulling that workflow into a service would make the access rules reusable, make the per-item transaction easier to implement cleanly, and keep the tRPC layer focused on input/output concerns.

As per coding guidelines, "All API logic must use tRPC for end-to-end type safety. Implement business logic in the services layer, not in routers."


Comment thread src/server/api/routers/action.ts
Comment on lines +2394 to +2469
const action = await ctx.db.action.create({
  data: {
    name: item.description,
    description: item.rawText ?? null,
    dueDate,
    priority: mapPriority(item.priority),
    status: "ACTIVE",
    workspaceId: input.workspaceId,
    projectId: resolvedProjectId,
    transcriptionSessionId: input.transcriptionSessionId,
    createdById: userId,
    source: "agent-transcript",
    sourceType: "meeting",
    sourceId: input.transcriptionSessionId,
    lastUpdatedBy: "AGENT",
    lastUpdatedSource: "agent-action-items-tool",
  },
});

// 5b. Resolve assignee using 3-tier strategy.
if (item.assigneeEmail) {
  const workspaceUser = await findUserByEmailInWorkspace(
    item.assigneeEmail,
    input.workspaceId,
  );

  if (workspaceUser) {
    await ctx.db.actionAssignee.create({
      data: { actionId: action.id, userId: workspaceUser.id },
    });
  } else {
    const existingParticipant =
      await ctx.db.transcriptionSessionParticipant.findUnique({
        where: {
          transcriptionSessionId_email: {
            transcriptionSessionId: input.transcriptionSessionId,
            email: item.assigneeEmail,
          },
        },
        select: { id: true },
      });

    const participantId =
      existingParticipant?.id ??
      (
        await ctx.db.transcriptionSessionParticipant.create({
          data: {
            transcriptionSessionId: input.transcriptionSessionId,
            workspaceId: input.workspaceId,
            email: item.assigneeEmail,
            name: item.assigneeName ?? null,
          },
          select: { id: true },
        })
      ).id;

    await ctx.db.actionParticipantAssignee.create({
      data: {
        actionId: action.id,
        participantId,
        workspaceId: input.workspaceId,
      },
    });
  }
}

// Reload with the include shape so the returned object is consistent.
const hydrated = await ctx.db.action.findUniqueOrThrow({
  where: { id: action.id },
  include: includeShape,
});
created.push(hydrated);
} catch (err) {
  const reason =
    err instanceof Error ? err.message : "Unknown error creating action";
  skipped.push({ rawText: item.rawText, reason });

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Make each transcript item atomic.

If anything fails after action.create() succeeds—participant creation, assignee linking, or the hydration read—the item is pushed into skipped even though the action row has already been persisted. That leaves partially applied data behind and makes the created/skipped result inaccurate. Please wrap the per-item write path in an item-scoped transaction.
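The failure mode can be illustrated with a toy in-memory store: without rolling back, a throw after the first write leaves the action row behind, while an item-scoped rollback keeps the created/skipped result accurate. (This is a pure simulation — an actual fix would use Prisma's $transaction, as the reviewer suggests.)

```typescript
// Toy in-memory "table" illustrating why the per-item writes need a transaction.
const rows: string[] = [];

// Run one item's writes; on any failure, restore the table to its checkpoint,
// mimicking a database rollback.
function runItemScoped(work: () => void): "committed" | "rolledBack" {
  const checkpoint = rows.length;
  try {
    work();
    return "committed";
  } catch {
    rows.length = checkpoint; // discard the partial writes
    return "rolledBack";
  }
}

// A successful item persists all of its writes...
runItemScoped(() => {
  rows.push("action-1");
  rows.push("assignee-1");
});

// ...while a failure mid-item leaves nothing behind.
runItemScoped(() => {
  rows.push("action-2");
  throw new Error("assignee create failed");
});
```

After both calls, the store holds only the first item's two rows — no orphaned "action-2".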


Comment on lines +2515 to +2527
if (input.assigneeEmail) {
  const user = await findUserByEmailInWorkspace(
    input.assigneeEmail,
    input.workspaceId,
  );
  if (user) {
    where.assignees = { some: { userId: user.id } };
  } else {
    where.participantAssignees = {
      some: { participant: { email: input.assigneeEmail } },
    };
  }
}
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Search both assignee paths for the same email.

Right now this picks ActionAssignee when the email belongs to a workspace user and skips the participantAssignees path entirely. That means participant-backed actions disappear whenever the same email also exists on a workspace user, which is the overlap this API needs to handle.

Suggested fix
       if (input.assigneeEmail) {
         const user = await findUserByEmailInWorkspace(
           input.assigneeEmail,
           input.workspaceId,
         );
-        if (user) {
-          where.assignees = { some: { userId: user.id } };
-        } else {
-          where.participantAssignees = {
-            some: { participant: { email: input.assigneeEmail } },
-          };
-        }
+        where.OR = [
+          ...(user ? [{ assignees: { some: { userId: user.id } } }] : []),
+          {
+            participantAssignees: {
+              some: { participant: { email: input.assigneeEmail } },
+            },
+          },
+        ];
       }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (input.assigneeEmail) {
  const user = await findUserByEmailInWorkspace(
    input.assigneeEmail,
    input.workspaceId,
  );
  if (user) {
    where.assignees = { some: { userId: user.id } };
  } else {
    where.participantAssignees = {
      some: { participant: { email: input.assigneeEmail } },
    };
  }
}
if (input.assigneeEmail) {
  const user = await findUserByEmailInWorkspace(
    input.assigneeEmail,
    input.workspaceId,
  );
  where.OR = [
    ...(user ? [{ assignees: { some: { userId: user.id } } }] : []),
    {
      participantAssignees: {
        some: { participant: { email: input.assigneeEmail } },
      },
    },
  ];
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/server/api/routers/action.ts` around lines 2515 - 2527, When
input.assigneeEmail is present, include both assignee paths instead of choosing
one: call findUserByEmailInWorkspace(input.assigneeEmail, input.workspaceId) to
get the workspace user (if any) and then build the query so it matches either
ActionAssignee (where.assignees.some.userId === user.id) OR participantAssignees
(where.participantAssignees.some.participant.email === input.assigneeEmail); if
no workspace user exists still include the participantAssignees condition. Use
an OR clause on the where object (combining the assignees condition and the
participantAssignees condition) so both paths are searched when
input.assigneeEmail is provided.

Comment on lines +93 to +117
export async function findUserByEmailInWorkspace(
  email: string,
  workspaceId: string,
): Promise<{ id: string; email: string; name: string | null } | null> {
  const user = await db.user.findUnique({
    where: { email },
    select: { id: true, email: true, name: true },
  });

  if (!user?.email) {
    return null;
  }

  const membership = await db.workspaceUser.findUnique({
    where: {
      userId_workspaceId: { userId: user.id, workspaceId },
    },
    select: { userId: true },
  });

  if (!membership) {
    return null;
  }

  return { id: user.id, email: user.email, name: user.name };

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Honor team-based workspace access in this email lookup.

This helper only checks WorkspaceUser, but getWorkspaceMembership() in the same file treats team membership as a valid workspace access path too. A user who only reaches the workspace through a team will resolve as null here, so transcript assignee resolution and findBySource({ assigneeEmail }) can miss legitimate users.

Suggested fix
 export async function findUserByEmailInWorkspace(
   email: string,
   workspaceId: string,
 ): Promise<{ id: string; email: string; name: string | null } | null> {
   const user = await db.user.findUnique({
     where: { email },
     select: { id: true, email: true, name: true },
   });

   if (!user?.email) {
     return null;
   }

-  const membership = await db.workspaceUser.findUnique({
-    where: {
-      userId_workspaceId: { userId: user.id, workspaceId },
-    },
-    select: { userId: true },
-  });
+  const membership = await getWorkspaceMembership(db, user.id, workspaceId);

   if (!membership) {
     return null;
   }

   return { id: user.id, email: user.email, name: user.name };
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/server/services/access/resolvers/workspaceResolver.ts` around lines 93 -
117, The current findUserByEmailInWorkspace only checks db.workspaceUser and so
misses users who access the workspace via team membership; update
findUserByEmailInWorkspace to treat team-based access as valid by calling or
reusing the existing getWorkspaceMembership(user.id, workspaceId) (from the same
file) instead of only checking db.workspaceUser.findUnique with
userId_workspaceId; if getWorkspaceMembership returns a membership, return the
user object as before, otherwise return null.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 3

🧹 Nitpick comments (1)
src/server/api/routers/__tests__/action.integration.test.ts (1)

543-901: ⚡ Quick win

Add one regression test for action.update attribution.

This PR also extends action.update with lastUpdatedBy / lastUpdatedSource, but the new coverage only exercises the two new agent procedures. A small integration test here would keep that new attribution path from landing unverified.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/server/api/routers/__tests__/action.integration.test.ts` around lines 543
- 901, Add an integration test that verifies action.update sets the new
attribution fields: create a user and workspace, create an action via
db.action.create, call caller.action.update (using createTestCaller) to change
the action, then load the action (via db.action.findUnique or the update result)
and assert lastUpdatedBy and lastUpdatedSource match the expected values (e.g.,
"AGENT" and the agent source constant); reference functions/classes:
action.update procedure, createTestCaller, db.action.create,
db.action.findUnique to locate where to add the test.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/server/api/routers/__tests__/action.integration.test.ts`:
- Around line 671-706: Update the two tests that currently use
.rejects.toThrow(TRPCError) (the "rejects unauthorized workspace" and "rejects
mismatched transcript workspace" cases and the similar assertions around lines
888-899) to assert the specific tRPC error code instead of any TRPCError; after
calling createTestCaller(...).action.bulkCreateFromTranscript(...), use
.rejects.toMatchObject({ code: "FORBIDDEN" }) (or the appropriate code your
server sends) so the test verifies access-control failure precisely rather than
any server error; update both test cases to expect the exact error code.

In `@src/test/test-db.ts`:
- Around line 26-31: The current guard allows non-loopback DB URLs if the
database name contains "test"; instead require explicit opt-in via
ALLOW_NON_LOCAL_TEST_DB=1 for any non-loopback host. Update the host-check logic
in src/test/test-db.ts (the function/area that decides allowed DBs — look for
the code handling "localhost / 127.0.0.1 / ::1", "host.docker.internal", and the
database-name "test" check and the call site for truncateAllTables()) so that:
1) loopback hosts (localhost/127.0.0.1/::1) remain allowed; 2) any non-loopback
host (including host.docker.internal or hosts with DB names containing "test")
must only be allowed when process.env.ALLOW_NON_LOCAL_TEST_DB === "1"; and 3)
remove or disable the database-name contains "test" bypass; adjust error/log
message to clearly state the required ALLOW_NON_LOCAL_TEST_DB env var when
rejecting a non-loopback DB.
- Around line 118-120: The afterEach() cleanup must implement the documented
fallback when setting session_replication_role fails: wrap the current
transaction block that runs "SET LOCAL session_replication_role = 'replica'" and
the bulk truncate/delete in a try-catch inside afterEach(); on catching the
permission/privilege error from client.query (or the transaction helper used),
fall back to iterating the table list (the same tables array used in the
cleanup) and perform per-table DELETEs within individual statements, retrying
each DELETE when it fails with a foreign-key/constraint error until it succeeds
(or until a small retry limit is hit) so that dependent rows are removed in
successive passes; ensure you still run the same BEGIN/COMMIT/ROLLBACK around
the primary approach and only use the per-statement retry loop when the SET
LOCAL session_replication_role = 'replica' privilege is denied.

---

Nitpick comments:
In `@src/server/api/routers/__tests__/action.integration.test.ts`:
- Around line 543-901: Add an integration test that verifies action.update sets
the new attribution fields: create a user and workspace, create an action via
db.action.create, call caller.action.update (using createTestCaller) to change
the action, then load the action (via db.action.findUnique or the update result)
and assert lastUpdatedBy and lastUpdatedSource match the expected values (e.g.,
"AGENT" and the agent source constant); reference functions/classes:
action.update procedure, createTestCaller, db.action.create,
db.action.findUnique to locate where to add the test.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e3a84fcb-162b-49c3-ae21-af2fe04075e1

📥 Commits

Reviewing files that changed from the base of the PR and between ec08114 and ffdbfb5.

📒 Files selected for processing (2)
  • src/server/api/routers/__tests__/action.integration.test.ts
  • src/test/test-db.ts

Comment on lines +671 to +706
it("rejects unauthorized workspace", async () => {
  const { ws, session } = await setupTranscript();
  const stranger = await createUser(db);

  const caller = createTestCaller(stranger.id);
  await expect(
    caller.action.bulkCreateFromTranscript({
      transcriptionSessionId: session.id,
      workspaceId: ws.id,
      items: [{ description: "Nope", priority: "MEDIUM" }],
    }),
  ).rejects.toThrow(TRPCError);
});

it("rejects mismatched transcript workspace", async () => {
  const owner = await createUser(db);
  const ws1 = await createWorkspace(db, { ownerId: owner.id });
  const ws2 = await createWorkspace(db, { ownerId: owner.id });
  const session = await db.transcriptionSession.create({
    data: {
      sessionId: `sess-${Math.random().toString(36).slice(2, 10)}`,
      workspaceId: ws2.id,
      userId: owner.id,
      title: "Other workspace",
    },
  });

  const caller = createTestCaller(owner.id);
  await expect(
    caller.action.bulkCreateFromTranscript({
      transcriptionSessionId: session.id,
      workspaceId: ws1.id,
      items: [{ description: "Nope", priority: "MEDIUM" }],
    }),
  ).rejects.toThrow(TRPCError);
});

⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Assert the specific tRPC error codes here.

.rejects.toThrow(TRPCError) will pass for any server-side tRPC failure, including unrelated validation or NOT_FOUND paths. These tests are meant to pin access-control behavior, so they should verify the expected code too.

Also applies to: 888-899

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/server/api/routers/__tests__/action.integration.test.ts` around lines 671
- 706, Update the two tests that currently use .rejects.toThrow(TRPCError) (the
"rejects unauthorized workspace" and "rejects mismatched transcript workspace"
cases and the similar assertions around lines 888-899) to assert the specific
tRPC error code instead of any TRPCError; after calling
createTestCaller(...).action.bulkCreateFromTranscript(...), use
.rejects.toMatchObject({ code: "FORBIDDEN" }) (or the appropriate code your
server sends) so the test verifies access-control failure precisely rather than
any server error; update both test cases to expect the exact error code.
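A toy illustration (hypothetical `TRPCError` class, not the real `@trpc/server` import) of why matching the specific error code is stricter than matching the error class:

```typescript
class TRPCError extends Error {
  constructor(public readonly code: string) {
    super(code);
    this.name = "TRPCError";
  }
}

// Imagine the route fails for the wrong reason: a lookup bug, not access control.
function callRoute(): never {
  throw new TRPCError("NOT_FOUND");
}

let caught: unknown;
try {
  callRoute();
} catch (e) {
  caught = e;
}

// What `.rejects.toThrow(TRPCError)` effectively asserts: any TRPCError matches.
const looseCheckPasses = caught instanceof TRPCError;
// What `.rejects.toMatchObject({ code: "FORBIDDEN" })` pins down: the exact code.
const strictCheckPasses =
  caught instanceof TRPCError && caught.code === "FORBIDDEN";
console.log(looseCheckPasses, strictCheckPasses); // true false
```

The loose check lets the wrong failure mode pass; the code-level assertion catches it.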

Comment thread src/test/test-db.ts Outdated
Comment on lines +26 to +31
// Allow:
// - localhost / 127.0.0.1 / ::1 hosts
// - testcontainers (host.docker.internal, dynamic ports)
// - URLs whose database name contains "test" (e.g. exponential_test)
// - explicit opt-in via ALLOW_NON_LOCAL_TEST_DB=1 (escape hatch for
// deliberate scenarios; never set this in normal dev)

⚠️ Potential issue | 🔴 Critical | ⚡ Quick win

Require explicit opt-in for any non-loopback database.

Allowing a remote URL solely because the database name contains test still lets truncateAllTables() wipe shared QA/staging data. This guard only becomes reliable if non-local hosts always require ALLOW_NON_LOCAL_TEST_DB=1.

Safer guard
-    const dbNameLooksLikeTest = /test/i.test(dbName);
     const explicitOptIn = process.env.ALLOW_NON_LOCAL_TEST_DB === "1";

-    if (!isLocalhost && !dbNameLooksLikeTest && !explicitOptIn) {
+    if (!isLocalhost && !explicitOptIn) {

Also applies to: 40-52

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/test/test-db.ts` around lines 26 - 31, The current guard allows
non-loopback DB URLs if the database name contains "test"; instead require
explicit opt-in via ALLOW_NON_LOCAL_TEST_DB=1 for any non-loopback host. Update
the host-check logic in src/test/test-db.ts (the function/area that decides
allowed DBs — look for the code handling "localhost / 127.0.0.1 / ::1",
"host.docker.internal", and the database-name "test" check and the call site for
truncateAllTables()) so that: 1) loopback hosts (localhost/127.0.0.1/::1) remain
allowed; 2) any non-loopback host (including host.docker.internal or hosts with
DB names containing "test") must only be allowed when
process.env.ALLOW_NON_LOCAL_TEST_DB === "1"; and 3) remove or disable the
database-name contains "test" bypass; adjust error/log message to clearly state
the required ALLOW_NON_LOCAL_TEST_DB env var when rejecting a non-loopback DB.
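A standalone sketch of the tightened guard, under these assumptions: the helper name `assertSafeTestDb` is hypothetical, and the env map is passed as a parameter here purely for testability (the real code would read `process.env`):

```typescript
const LOOPBACK_HOSTS = new Set(["localhost", "127.0.0.1", "::1", "[::1]"]);

function assertSafeTestDb(
  url: string,
  env: Record<string, string | undefined> = {},
): void {
  const host = new URL(url).hostname;
  const isLocalhost = LOOPBACK_HOSTS.has(host);
  const explicitOptIn = env.ALLOW_NON_LOCAL_TEST_DB === "1";
  // Loopback hosts pass; every other host needs explicit opt-in.
  // The database-name-contains-"test" bypass is intentionally gone.
  if (!isLocalhost && !explicitOptIn) {
    throw new Error(
      `Refusing to touch non-loopback database host "${host}". ` +
        "Set ALLOW_NON_LOCAL_TEST_DB=1 to opt in deliberately.",
    );
  }
}

assertSafeTestDb("postgres://u:p@localhost:5432/app_test"); // loopback: allowed

let rejected = false;
try {
  // A "test"-looking name on a remote host no longer bypasses the guard.
  assertSafeTestDb("postgres://u:p@db.rlwy.net:5432/myapp_test");
} catch {
  rejected = true;
}
console.log(rejected); // true
```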

Comment thread src/test/test-db.ts
Comment on lines +118 to +120
* - CON: `session_replication_role = replica` requires SUPERUSER or
* REPLICATION privileges. If that fails, fall back to a per-statement loop
* that retries until no FK errors remain.

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In PostgreSQL 16, does SET LOCAL session_replication_role = 'replica' require superuser privileges, and what error should non-superusers expect?

💡 Result:

In PostgreSQL 16, SET LOCAL session_replication_role = 'replica' requires either superuser privileges or the appropriate SET privilege on the session_replication_role parameter, which can be granted via GRANT SET ON PARAMETER session_replication_role TO user;. Non-superusers without this privilege will receive the error: "permission denied to set parameter "session_replication_role"".

🏁 Script executed:

fd -t f "test-db.ts" | head -5

Repository: positonic/exponential

Length of output: 85


🏁 Script executed:

cat -n src/test/test-db.ts | sed -n '100,170p'

Repository: positonic/exponential

Length of output: 3476


🏁 Script executed:

# Search for error handling around session_replication_role in the file
rg -n "session_replication_role|catch.*error" src/test/test-db.ts -A 2 -B 2

Repository: positonic/exponential

Length of output: 1193


🏁 Script executed:

# Check if deleteAllTablesWithFkAwareRetries function exists
rg -n "deleteAllTablesWithFkAwareRetries" src/test/test-db.ts

Repository: positonic/exponential

Length of output: 47


🏁 Script executed:

# Search entire codebase for deleteAllTablesWithFkAwareRetries
rg "deleteAllTablesWithFkAwareRetries" --type ts --type tsx

Repository: positonic/exponential

Length of output: 93


🏁 Script executed:

# Search for any error handling around session_replication_role in codebase
rg "session_replication_role" -A 5 -B 5

Repository: positonic/exponential

Length of output: 2891


🏁 Script executed:

# Check if there's any catch block related to FK or replication role elsewhere
rg "catch.*error|catch\s*\(" src/test/ --type ts --type tsx -B 2 -A 3

Repository: positonic/exponential

Length of output: 93


🏁 Script executed:

# Search for catch blocks in test-db.ts
rg "catch" src/test/test-db.ts -B 2 -A 3

Repository: positonic/exponential

Length of output: 47


🏁 Script executed:

# Get file type information
fd -t f "test-db.ts" -x file {}

Repository: positonic/exponential

Length of output: 101


🏁 Script executed:

# Check entire test directory for error handling patterns
rg "catch.*error" src/test/ -B 2 -A 3

Repository: positonic/exponential

Length of output: 47


🏁 Script executed:

# Read the full function to understand the complete picture
cat -n src/test/test-db.ts | sed -n '122,157p'

Repository: positonic/exponential

Length of output: 1732


🏁 Script executed:

# Check if there are any similar fallback/retry functions in the codebase
rg "retry|fallback|FK.*error" src/ -i

Repository: positonic/exponential

Length of output: 19510


Implement the documented fallback when session_replication_role cannot be set.

The docstring (lines 118-120) explicitly documents a fallback: "If that fails, fall back to a per-statement loop that retries until no FK errors remain." However, the implementation has no error handling for this scenario. On managed Postgres where non-superusers cannot execute SET LOCAL session_replication_role = 'replica', the entire afterEach() cleanup fails instead of falling back. Wrap the transaction in try-catch and handle the privilege error by falling back to individual deletes with retry logic:

Fallback sketch
-  await db.$transaction(statements, {
-    // Generous timeout for remote test DBs where ~150 round-trips can be slow.
-    timeout: 60_000,
-    maxWait: 10_000,
-  });
+  try {
+    await db.$transaction(statements, {
+      // Generous timeout for remote test DBs where ~150 round-trips can be slow.
+      timeout: 60_000,
+      maxWait: 10_000,
+    });
+  } catch (error) {
+    if (!isSessionReplicationRolePrivilegeError(error)) throw error;
+    await deleteAllTablesWithFkAwareRetries(db, quotedTables);
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/test/test-db.ts` around lines 118 - 120, The afterEach() cleanup must
implement the documented fallback when setting session_replication_role fails:
wrap the current transaction block that runs "SET LOCAL session_replication_role
= 'replica'" and the bulk truncate/delete in a try-catch inside afterEach(); on
catching the permission/privilege error from client.query (or the transaction
helper used), fall back to iterating the table list (the same tables array used
in the cleanup) and perform per-table DELETEs within individual statements,
retrying each DELETE when it fails with a foreign-key/constraint error until it
succeeds (or until a small retry limit is hit) so that dependent rows are
removed in successive passes; ensure you still run the same
BEGIN/COMMIT/ROLLBACK around the primary approach and only use the per-statement
retry loop when the SET LOCAL session_replication_role = 'replica' privilege is
denied.
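As a rough in-memory model of the documented fallback (the helper name `deleteAllWithFkRetries` is hypothetical; the real implementation would issue per-table DELETE statements and retry on foreign-key errors rather than checking a dependency list):

```typescript
type Table = { name: string; rows: number; referencedBy: string[] };

// Repeatedly attempt per-table deletes: a delete "fails" while another table
// still references this one, and succeeds on a later pass once its dependents
// are gone. A retry ceiling guards against FK cycles.
function deleteAllWithFkRetries(tables: Table[], maxPasses = 10): number {
  let passes = 0;
  while (tables.some((t) => t.rows > 0)) {
    if (++passes > maxPasses) throw new Error("FK cycle or retry limit hit");
    for (const t of tables) {
      const blocked = t.referencedBy.some(
        (dep) => tables.find((x) => x.name === dep)!.rows > 0,
      );
      if (!blocked) t.rows = 0; // DELETE succeeds once no dependent rows remain
    }
  }
  return passes;
}

const tables: Table[] = [
  { name: "Action", rows: 3, referencedBy: ["ActionAssignee"] },
  { name: "ActionAssignee", rows: 5, referencedBy: [] },
  { name: "User", rows: 2, referencedBy: ["Action", "ActionAssignee"] },
];
const passes = deleteAllWithFkRetries(tables);
console.log(passes, tables.every((t) => t.rows === 0)); // 2 true
```

Leaf tables clear on the first pass; tables they depended on clear on the next, so everything empties without ever needing `session_replication_role`.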

…nst the prod-wipe class of bug

The integration test suite previously fell back to process.env.DATABASE_URL
when DATABASE_URL_TEST was unset, which directly caused a production wipe
when those tests ran against an .env that pointed at production. The
dynamic truncate (with FK suppression for test cleanup) then DELETEd from
every public table.

Two changes to prevent any recurrence:

1. The action router tests for bulkCreateFromTranscript and findBySource
   are rewritten as a pure mocked unit test
   (action.test.ts) using vitest-mock-extended's mockDeep<PrismaClient>.
   No DB is touched. The previously-todo'd "skips item that throws" case
   is now properly tested via mockRejectedValueOnce. 11/11 tests pass in
   ~2 seconds.

   The original integration tests for the OTHER action router procedures
   (create, update, getProjectActions, etc.) are preserved unchanged in
   action.integration.test.ts and remain CI-only — they test cascade
   behavior, FK constraints, and other DB semantics that mocks can't
   cover. Per the new policy they only run in CI with a testcontainer.

2. test-db.ts has four layers of defense against ever touching a real DB:

   a. Removed the "?? process.env.DATABASE_URL" fallback that was the
      direct cause of the wipe. Now requires DATABASE_URL_TEST or a
      testcontainer; no silent fallback.
   b. Hostname blocklist for managed-service patterns
      (rlwy.net, railway.app, supabase, neon.tech, amazonaws.com,
      azure.com, gcp.cloud, fly.dev, digitalocean, aiven.io).
      Non-overridable.
   c. Pre-truncate row count check — refuses to truncate if User > 100
      or KnowledgeChunk > 1000. Prod DBs trip this; test DBs don't.
   d. Marker table __test_db_marker, written at startTestDatabase and
      verified before every truncate. Real prod DBs never have this.

Plus:
- Added createMockCaller helper in trpc-helpers.ts alongside the existing
  createTestCaller (the latter stays for the rare CI-only integration
  tests that genuinely need a real DB).
- CLAUDE.md updated with explicit test database safety policy.

vitest-mock-extended added as a dev dependency.
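Defenses (c) and (d) above can be sketched as follows. The `TestDb` interface and `assertTruncateSafe` helper are hypothetical stand-ins; the thresholds and marker-table name mirror the description:

```typescript
interface TestDb {
  count(table: string): number;
  hasTable(name: string): boolean;
}

const ROW_CEILINGS: Record<string, number> = { User: 100, KnowledgeChunk: 1000 };
const MARKER_TABLE = "__test_db_marker";

function assertTruncateSafe(db: TestDb): void {
  // Defense (d): only DBs created by startTestDatabase carry the marker table.
  if (!db.hasTable(MARKER_TABLE)) {
    throw new Error(`No ${MARKER_TABLE} table; refusing to truncate a non-test DB`);
  }
  // Defense (c): real data trips the row-count ceilings; test fixtures don't.
  for (const [table, ceiling] of Object.entries(ROW_CEILINGS)) {
    const rows = db.count(table);
    if (rows > ceiling) {
      throw new Error(`${table} has ${rows} rows (> ${ceiling}); looks like real data`);
    }
  }
}

const disposableDb: TestDb = {
  count: () => 3,
  hasTable: (name) => name === MARKER_TABLE,
};
assertTruncateSafe(disposableDb); // passes: marker present, counts tiny

const prodLikeDb: TestDb = {
  count: (table) => (table === "User" ? 5000 : 0),
  hasTable: () => false, // real databases never carry the marker
};
let blocked = false;
try {
  assertTruncateSafe(prodLikeDb);
} catch {
  blocked = true;
}
console.log(blocked); // true
```

Either layer alone stops the wipe; together they fail closed even if one check is misconfigured.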
@coderabbitai coderabbitai Bot left a comment

♻️ Duplicate comments (2)
src/test/test-db.ts (2)

88-106: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Layer 2 guard still allows non-local DB if name contains "test".

The guard at lines 93-106 permits remote databases solely because the database name contains test. This still enables truncateAllTables() to wipe shared QA/staging data on managed hosts that passed the Layer 1 blocklist but have a database named something like myapp_test.

Consider requiring explicit opt-in (ALLOW_NON_LOCAL_TEST_DB=1) for any non-localhost host, removing the dbNameLooksLikeTest bypass:

Safer guard
-    const dbNameLooksLikeTest = /test/i.test(dbName);
     const explicitOptIn = process.env.ALLOW_NON_LOCAL_TEST_DB === "1";

-    if (!isLocalhost && !dbNameLooksLikeTest && !explicitOptIn) {
+    if (!isLocalhost && !explicitOptIn) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/test/test-db.ts` around lines 88 - 106, The current Layer 2 guard in
src/test/test-db.ts uses dbNameLooksLikeTest to allow non-local hosts if the DB
name contains "test"; change this logic so any non-localhost host requires
explicit opt-in via ALLOW_NON_LOCAL_TEST_DB (i.e., remove the
dbNameLooksLikeTest bypass) by updating the conditional that checks isLocalhost,
dbNameLooksLikeTest, and explicitOptIn: only allow non-local hosts when
explicitOptIn is true; keep the same sanitized existingUrl error message and
throw path (using existingUrl and sanitized) and ensure the behavior still
prevents truncateAllTables() from running against remote databases unless
explicitOptIn is set.

276-286: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift

Missing fallback when session_replication_role cannot be set.

The docstring (lines 182-184) documents a fallback strategy for when SET LOCAL session_replication_role = 'replica' fails due to insufficient privileges, but the implementation has no try-catch around the transaction. On managed Postgres where non-superusers lack this privilege, cleanup will fail entirely instead of falling back to FK-aware retries.

Fallback sketch
+  try {
     await db.$transaction(statements, {
       timeout: 60_000,
       maxWait: 10_000,
     });
+  } catch (error) {
+    // Check if this is a permission denied error for session_replication_role
+    const msg = error instanceof Error ? error.message : String(error);
+    if (msg.includes("session_replication_role") || msg.includes("permission denied")) {
+      // Fall back to individual deletes with FK-aware retry logic
+      await deleteWithFkRetries(db, quotedTables);
+    } else {
+      throw error;
+    }
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/test/test-db.ts` around lines 276 - 286, The cleanup can fail on managed
Postgres because SET LOCAL session_replication_role = 'replica' may be
disallowed; wrap the db.$transaction call that executes the statements array
(which includes the SET LOCAL and the deletes) in a try/catch and implement the
documented fallback: if the transaction throws a permission error, skip the SET
LOCAL and attempt the deletes using the quotedTables list with FK-aware retries
(e.g., run each db.$executeRawUnsafe DELETE in sequence with retry/backoff
and/or dependency-aware order) instead of the single transaction; keep the same
timeout/maxWait semantics for the initial transaction attempt and ensure errors
from the fallback are surfaced if they also fail.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@src/test/test-db.ts`:
- Around line 88-106: The current Layer 2 guard in src/test/test-db.ts uses
dbNameLooksLikeTest to allow non-local hosts if the DB name contains "test";
change this logic so any non-localhost host requires explicit opt-in via
ALLOW_NON_LOCAL_TEST_DB (i.e., remove the dbNameLooksLikeTest bypass) by
updating the conditional that checks isLocalhost, dbNameLooksLikeTest, and
explicitOptIn: only allow non-local hosts when explicitOptIn is true; keep the
same sanitized existingUrl error message and throw path (using existingUrl and
sanitized) and ensure the behavior still prevents truncateAllTables() from
running against remote databases unless explicitOptIn is set.
- Around line 276-286: The cleanup can fail on managed Postgres because SET
LOCAL session_replication_role = 'replica' may be disallowed; wrap the
db.$transaction call that executes the statements array (which includes the SET
LOCAL and the deletes) in a try/catch and implement the documented fallback: if
the transaction throws a permission error, skip the SET LOCAL and attempt the
deletes using the quotedTables list with FK-aware retries (e.g., run each
db.$executeRawUnsafe DELETE in sequence with retry/backoff and/or
dependency-aware order) instead of the single transaction; keep the same
timeout/maxWait semantics for the initial transaction attempt and ensure errors
from the fallback are surfaced if they also fail.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ac9d72e1-a426-4ad0-a2a1-3dcb3c455b0f

📥 Commits

Reviewing files that changed from the base of the PR and between ffdbfb5 and 1ef23e9.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (7)
  • CLAUDE.md
  • package.json
  • src/server/api/routers/__tests__/action.integration.test.ts
  • src/server/api/routers/__tests__/action.test.ts
  • src/server/api/routers/action.ts
  • src/test/test-db.ts
  • src/test/trpc-helpers.ts
💤 Files with no reviewable changes (1)
  • src/server/api/routers/__tests__/action.integration.test.ts
✅ Files skipped from review due to trivial changes (2)
  • package.json
  • src/server/api/routers/action.ts
