feat: HD Pro purchase history import + central workflow projections (#93)

chitcommit wants to merge 2 commits into `main`.
Conversation
Fire-and-forget projection of workflow lifecycle events (create, update, approve, complete, close-tax-automation) to the central context service. Adds `CHITTY_CONTEXT_BASE` and `CHITTY_CONTEXT_TOKEN` env bindings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds `POST /api/import/hdpro` endpoint that ingests Home Depot Pro Purchase History CSV exports (3K+ SKU-level line items). Includes:

- Job name normalization map: 24 HD Pro job name variants (including typos like COZY RRENOVATION and concatenation errors like COZY RENOCOZY RENOVATION) mapped to canonical entity keys
- Department/Class → COA auto-classification: HARDWARE/PLUMBING/ELECTRICAL → 5070 (Repairs), CLEANING → 5020, APPLIANCES → 7030 (CapEx), etc.
- Untagged items (33% of dataset) routed to 9010 suspense for L1→L2 review
- SHA-256 dedup by transaction_id + SKU + date
- Full metadata preservation (store #, purchaser, invoice, discounts)
- Response includes job name mapping stats for import validation

Also updates Phase 5 status in CLAUDE.md to reflect completed items.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai review Please evaluate:
Deploying with

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ✅ Deployment successful! View logs | chittyfinance | a212965 | Apr 16 2026, 05:53 AM |
📝 Walkthrough

The pull request introduces a central workflow projection system that mirrors local workflow lifecycle data to an external context/events API, adds HD Pro CSV import capability with job name normalization and COA classification, and integrates projection logging across workflow routes and tax automation reporting.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant LocalApp as Local App
    participant CentralLog as centralWorkflowLog()
    participant CentralPost as postCentralWorkflowProjection()
    participant CentralAPI as Central Context API
    LocalApp->>LocalApp: Create/Update/Approve/Complete Workflow
    LocalApp->>CentralLog: Invoke with projection data
    CentralLog->>CentralPost: Call async projection
    CentralLog->>LocalApp: Schedule via executionCtx.waitUntil()
    CentralPost->>CentralPost: Resolve base URL & auth token
    alt Token Present
        CentralPost->>CentralPost: Build event payload (workflow.projected)
        CentralPost->>CentralAPI: POST /v1/events (5s timeout)
        CentralAPI-->>CentralPost: 2xx or non-2xx response
        CentralPost->>CentralPost: Log warnings on non-2xx
    else No Token
        CentralPost->>CentralPost: Return early (no-op)
    end
    CentralPost->>CentralLog: Promise resolves/rejects
    LocalApp->>LocalApp: Continue (non-blocking)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
Code Review — PR #93: HD Pro Import + Central Workflow Projections

Overview: This PR adds two distinct features: (1) a Home Depot Pro Purchase History CSV importer with job name normalization and COA auto-classification, and (2) fire-and-forget projection of workflow lifecycle events to the central context service.

🔴 Critical: N+1 Query Pattern in HD Pro Import
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: a212965309
```typescript
for (const line of dataLines) {
  const fields = parseCSVLine(line);
  if (fields.length < 16) continue;
```
Enforce full HD Pro column count before parsing

`parseHDProCSV` only skips rows with fewer than 16 fields, but the code then dereferences through `fields[20]` and calls `parseNum(fields[16])`. If an export row has 16–20 columns (for example, when trailing optional columns are omitted), `parseNum` receives `undefined` and throws on `.replace`, causing the whole import request to fail instead of handling the bad row gracefully. Validate for the full expected width (or guard each access) before parsing.
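One way to tolerate short rows, per this comment, is a `parseNum` that accepts possibly-missing fields. This is a sketch of the guarded approach, not the PR's code:

```typescript
// Treats undefined/empty/non-numeric input as 0 instead of throwing on .replace.
const parseNum = (s: string | undefined): number => {
  const cleaned = (s ?? '').replace(/[$,]/g, '').trim();
  const n = cleaned ? parseFloat(cleaned) : 0;
  return Number.isFinite(n) ? n : 0;
};
```

With this guard, a 17-column row degrades to zeroed trailing fields instead of failing the entire import request.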
```typescript
for (const row of parsed) {
  // Skip zero-amount or return-only rows
  const amount = row.netUnitPrice || row.extendedRetail || row.unitPrice;
```
Derive HD Pro amount from line total, not unit price

The import amount selection uses `row.netUnitPrice` first, which is a per-unit number, and does not apply `row.quantity`. For any line with quantity greater than 1, the recorded transaction amount is understated (e.g., qty 3 at $12 is imported as $12), which directly corrupts expense totals and downstream financial reporting. The amount should be based on the line total (or `quantity * netUnitPrice`) rather than the raw unit price.
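A minimal sketch of the fix this comment asks for, assuming the parsed row shape shown elsewhere in this PR; the field-precedence order is an assumption:

```typescript
interface HDProLine {
  extendedRetail: number; // line-level total as exported
  netUnitPrice: number;   // per-unit price after discounts
  quantity: number;
}

// Prefer the export's own line-level total; otherwise reconstruct it from
// per-unit price x quantity so qty > 1 lines are not understated.
function hdproLineTotal(row: HDProLine): number {
  if (row.extendedRetail !== 0) return row.extendedRetail;
  return row.netUnitPrice * row.quantity;
}
```

With qty 3 at $12 this yields $36 rather than the $12 the unit-price path records.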
Pull request overview
Adds ingestion + normalization support for Home Depot Pro Purchase History exports, and introduces a “fire-and-forget” projector to mirror local workflow lifecycle events into the central context.chitty.cc workflow/progress system.
Changes:

- Added `POST /api/import/hdpro` with job-name normalization, department/class→COA suggestions, suspense routing, and SHA-256 deduplication.
- Added `centralWorkflowLog()` projector and hooked it into workflow CRUD/status routes plus close-tax-automation.
- Updated environment bindings and Phase 5 documentation.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 11 comments.
| File | Description |
|---|---|
| server/routes/workflows.ts | Emits central workflow projections on create/update/approve/complete. |
| server/routes/reports.ts | Emits a central projection for the close-tax-automation run (runId). |
| server/routes/import.ts | Implements HD Pro CSV parsing, normalization/classification helpers, and the new import endpoint. |
| server/lib/central-workflows.ts | New projector utility posting workflow events to context.chitty.cc. |
| server/env.ts | Adds optional CHITTY_CONTEXT_* bindings. |
| CLAUDE.md | Updates Phase 5 status text. |
```typescript
centralWorkflowLog(c, {
  aggregateId: workflow.id,
  tenantId: workflow.tenantId,
  workflowType: workflow.type,
  title: workflow.title,
```
This endpoint calls storage.updateWorkflow(id, ...), but SystemStorage.updateWorkflow() updates by workflow ID without a tenant WHERE clause. Since this route also doesn’t validate the workflow’s tenantId against c.get('tenantId'), a caller who knows another tenant’s workflow ID could mutate it. Add an explicit tenant check before projecting/logging (or change the storage method to require tenantId).
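A sketch of the ownership guard this comment asks for. The `getWorkflow` signature here is assumed for illustration, not taken from the PR:

```typescript
interface WorkflowStorage {
  getWorkflow(id: string): Promise<{ tenantId: string } | null>;
}

// Reject cross-tenant IDs the same way as unknown IDs so workflow
// existence is not leaked across tenants.
async function assertTenantOwnsWorkflow(
  storage: WorkflowStorage,
  workflowId: string,
  tenantId: string,
): Promise<void> {
  const wf = await storage.getWorkflow(workflowId);
  if (!wf || wf.tenantId !== tenantId) {
    throw new Error('workflow not found');
  }
}
```

Calling this before `updateWorkflow` (or pushing the tenant filter into the storage query itself) closes the cross-tenant mutation path.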
```typescript
centralWorkflowLog(c, {
  aggregateId: workflow.id,
  tenantId: workflow.tenantId,
  workflowType: workflow.type,
  title: workflow.title,
  description: workflow.description,
  status: workflow.status,
```
Same tenant-scoping issue here as the other workflow mutations: updateWorkflow(id, ...) is not constrained by tenant in SystemStorage, and this route doesn’t validate the workflow belongs to c.get('tenantId') before completing it. Add a tenant ownership check (or require tenantId in the update query) to prevent cross-tenant completion by guessed IDs.
```typescript
/**
 * Parse HD Pro Purchase History CSV.
 * HD Pro CSVs have 6 metadata rows (company, phone, source, date range, export date, blank)
 * followed by the actual header + data rows. Encoding is latin-1.
```
The comment says HD Pro exports are latin-1 encoded, but the endpoint reads the body via c.req.text() (UTF-8 decoding). If the input really is latin-1 this will corrupt non-ASCII characters; if it isn’t, the comment is misleading. Either decode arrayBuffer() with TextDecoder('latin1') or update the comment to match the actual behavior.
Suggested change:

```diff
- * followed by the actual header + data rows. Encoding is latin-1.
+ * followed by the actual header + data rows.
+ * This parser expects already-decoded text; any request/body character encoding must be handled
+ * before calling this function.
```
```typescript
// Build entity→tenant mapping by loading all tenants the caller can see
const allTenants = await storage.getTenants();
const entityTenantMap: Record<EntityKey, string | null> = {
  'aribia-mgmt': null,
  'city-studio': null,
  'apt-arlene': null,
  'cozy-castle': null,
  'lakeside-loft': null,
  'villa-vista': null,
  'personal-nick': null,
  'colombia': null,
  'surf-504': null,
  'addison': null,
  'chitty-consumables': null,
  'chitty-furnishings': null,
  'chitty-tools': null,
  'suspense': null,
};
```
This route reads all active tenants via storage.getTenants() (which is not membership-scoped in SystemStorage) and builds entityTenantMap, but the map is never used afterwards. This is both unnecessary cross-tenant data access and wasted work; remove it, or switch to a user-scoped query (e.g., getUserTenants(c.get('userId'))) if you actually need tenant resolution here.
Suggested change:

```diff
-  // Build entity→tenant mapping by loading all tenants the caller can see
-  const allTenants = await storage.getTenants();
-  const entityTenantMap: Record<EntityKey, string | null> = {
-    'aribia-mgmt': null,
-    'city-studio': null,
-    'apt-arlene': null,
-    'cozy-castle': null,
-    'lakeside-loft': null,
-    'villa-vista': null,
-    'personal-nick': null,
-    'colombia': null,
-    'surf-504': null,
-    'addison': null,
-    'chitty-consumables': null,
-    'chitty-furnishings': null,
-    'chitty-tools': null,
-    'suspense': null,
-  };
```
```typescript
// Skip zero-amount or return-only rows
const amount = row.netUnitPrice || row.extendedRetail || row.unitPrice;
if (!amount && amount !== 0) continue;
```
The amount selection uses row.netUnitPrice || row.extendedRetail || row.unitPrice, which treats 0 as “missing” and can pick the wrong fallback. Also, if netUnitPrice/unitPrice are per-unit values, the current logic ignores quantity and discounts when computing the transaction amount. Prefer nullish checks (??) and compute a line total deterministically from the CSV’s intended total fields (e.g., extended totals and discount columns, or unit * quantity).
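The `??`-vs-`||` distinction in this comment, shown concretely as a hypothetical standalone helper:

```typescript
// `||` treats 0 as missing and falls through; `??` only falls through
// on null/undefined, so a legitimate $0 value is preserved.
function pickAmount(
  netUnitPrice?: number,
  extendedRetail?: number,
  unitPrice?: number,
): number | undefined {
  return netUnitPrice ?? extendedRetail ?? unitPrice;
}
```

A legitimate $0 net unit price stays 0 instead of silently selecting `extendedRetail`.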
```typescript
// POST /api/import/hdpro — import Home Depot Pro Purchase History CSV
// Requires X-Account-ID header or ?accountId query param
importRoutes.post('/api/import/hdpro', async (c) => {
  const storage = c.get('storage');
  const tenantId = c.get('tenantId');
```
There’s good scenario coverage for /api/import/turbotenant, but the new /api/import/hdpro endpoint and its helpers (parseHDProCSV, normalizeHDJobName, classifyHDProItem, dedup hash) have no tests. Adding a minimal scenario test (happy path + dedup + missing accountId + malformed CSV) would help prevent regressions, especially given the column indexing and sign/amount conventions.
```typescript
centralWorkflowLog(c, {
  aggregateId: workflow.id,
  tenantId: workflow.tenantId,
  workflowType: workflow.type,
  title: workflow.title,
  description: workflow.description,
  status: workflow.status,
```
Same tenant-scoping issue here: updateWorkflow(id, ...) isn’t tenant-filtered in storage, and this route doesn’t verify workflow.tenantId === c.get('tenantId') before returning/projecting. Please enforce tenant ownership for approve actions to prevent cross-tenant mutation by ID.
```typescript
for (const line of dataLines) {
  const fields = parseCSVLine(line);
  if (fields.length < 16) continue;

  const parseNum = (s: string) => {
    const cleaned = s.replace(/[$,]/g, '').trim();
    return cleaned ? parseFloat(cleaned) : 0;
  };

  rows.push({
    date: fields[0],
    storeNumber: fields[1],
    transactionId: fields[2],
    registerNumber: fields[3],
    jobName: fields[4],
    skuNumber: fields[5],
    skuDescription: fields[6],
    quantity: parseNum(fields[7]) || 1,
    unitPrice: parseNum(fields[8]),
    departmentName: fields[9] || '',
    className: fields[10] || '',
    subclassName: fields[11] || '',
    programDiscountAmount: parseNum(fields[12]),
    otherDiscountAmount: parseNum(fields[14]),
    extendedRetail: parseNum(fields[15]),
    netUnitPrice: parseNum(fields[16]),
    internetSku: fields[17] || '',
    purchaser: fields[18] || '',
    orderNumber: fields[19] || '',
    invoiceNumber: fields[20] || '',
```
parseHDProCSV() only guards if (fields.length < 16) continue;, but the row mapping indexes up to fields[20] and calls parseNum(fields[16]), etc. This can throw at runtime on shorter lines and/or silently mis-parse shifted columns. Tighten the minimum column check to the actual expected column count and handle missing trailing columns explicitly before indexing.
Suggested change:

```diff
-  for (const line of dataLines) {
-    const fields = parseCSVLine(line);
-    if (fields.length < 16) continue;
-
-    const parseNum = (s: string) => {
-      const cleaned = s.replace(/[$,]/g, '').trim();
-      return cleaned ? parseFloat(cleaned) : 0;
-    };
-
-    rows.push({
-      date: fields[0],
-      storeNumber: fields[1],
-      transactionId: fields[2],
-      registerNumber: fields[3],
-      jobName: fields[4],
-      skuNumber: fields[5],
-      skuDescription: fields[6],
-      quantity: parseNum(fields[7]) || 1,
-      unitPrice: parseNum(fields[8]),
-      departmentName: fields[9] || '',
-      className: fields[10] || '',
-      subclassName: fields[11] || '',
-      programDiscountAmount: parseNum(fields[12]),
-      otherDiscountAmount: parseNum(fields[14]),
-      extendedRetail: parseNum(fields[15]),
-      netUnitPrice: parseNum(fields[16]),
-      internetSku: fields[17] || '',
-      purchaser: fields[18] || '',
-      orderNumber: fields[19] || '',
-      invoiceNumber: fields[20] || '',
+  const EXPECTED_HDPRO_COLUMNS = 21;
+  for (const line of dataLines) {
+    const fields = parseCSVLine(line);
+    if (fields.length < EXPECTED_HDPRO_COLUMNS) continue;
+
+    const [
+      date,
+      storeNumber,
+      transactionId,
+      registerNumber,
+      jobName,
+      skuNumber,
+      skuDescription,
+      quantity,
+      unitPrice,
+      departmentName,
+      className,
+      subclassName,
+      programDiscountAmount,
+      ,
+      otherDiscountAmount,
+      extendedRetail,
+      netUnitPrice,
+      internetSku = '',
+      purchaser = '',
+      orderNumber = '',
+      invoiceNumber = '',
+    ] = fields.slice(0, EXPECTED_HDPRO_COLUMNS);
+
+    const parseNum = (s: string) => {
+      const cleaned = s.replace(/[$,]/g, '').trim();
+      return cleaned ? parseFloat(cleaned) : 0;
+    };
+
+    rows.push({
+      date,
+      storeNumber,
+      transactionId,
+      registerNumber,
+      jobName,
+      skuNumber,
+      skuDescription,
+      quantity: parseNum(quantity) || 1,
+      unitPrice: parseNum(unitPrice),
+      departmentName: departmentName || '',
+      className: className || '',
+      subclassName: subclassName || '',
+      programDiscountAmount: parseNum(programDiscountAmount),
+      otherDiscountAmount: parseNum(otherDiscountAmount),
+      extendedRetail: parseNum(extendedRetail),
+      netUnitPrice: parseNum(netUnitPrice),
+      internetSku: internetSku || '',
+      purchaser: purchaser || '',
+      orderNumber: orderNumber || '',
+      invoiceNumber: invoiceNumber || '',
```
```typescript
export function classifyHDProItem(
  department: string,
  className: string,
  subclass: string,
  description: string,
```
classifyHDProItem() takes a subclass parameter and the header comment mentions falling back through Class → Subclass, but subclass is never used. Either incorporate subclass-level matching (or pass it to findAccountCode) or drop the unused parameter/comment to avoid implying behavior that doesn’t exist.
```typescript
const promise = postCentralWorkflowProjection(projection, env);
try {
  c.executionCtx.waitUntil(promise);
} catch {
  // Test environment / non-Workers runtime.
}
```
centralWorkflowLog() always calls postCentralWorkflowProjection() (creating a running Promise) before attempting waitUntil. In non-Workers runtimes/tests where executionCtx isn’t available, the catch prevents waitUntil but the network request still runs in the background. Consider only invoking the projection when executionCtx.waitUntil exists (or provide an explicit opt-in), to avoid unexpected outbound calls in tests/local runs.
Suggested change:

```diff
-const promise = postCentralWorkflowProjection(projection, env);
-try {
-  c.executionCtx.waitUntil(promise);
-} catch {
-  // Test environment / non-Workers runtime.
-}
+const waitUntil = c?.executionCtx?.waitUntil;
+if (typeof waitUntil !== 'function') {
+  // Test environment / non-Workers runtime.
+  return;
+}
+waitUntil(postCentralWorkflowProjection(projection, env));
```
Actionable comments posted: 6
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CLAUDE.md`:
- Around line 914-917: Update the Phase 5 checklist under "Phase 5: ChittyOS
Ecosystem Integration" so it matches the real implementation: change the
ChittyID bullet to indicate SSO integration is pending for production (note that
demo email/password remains the active path and must be listed as the dev
fallback), revert the ChittyChronicle bullet to pending (since projection code
now targets the central context service), and add a short note that production
requires replacing demo auth with ChittyID SSO while preserving email/password
as a dev fallback; reference the "Phase 5: ChittyOS Ecosystem Integration"
heading and the ChittyID and ChittyChronicle checklist items when making these
edits.
In `@server/lib/central-workflows.ts`:
- Around line 65-66: The idempotency_key currently built as
`workflow:${projection.aggregateId}:${projection.status}` is too coarse and
causes metadata-only PATCHes to collide; update the code that sets
idempotency_key to include a deterministic fingerprint of the change (e.g., a
hash of projection.metadata or the changed fields) or append a high-resolution
occurrence/timestamp to guarantee uniqueness for distinct updates; locate where
idempotency_key is constructed (the idempotency_key assignment using
projection.aggregateId and projection.status) and replace it with a string that
combines those existing parts plus either a metadata hash or occurrence value so
metadata-only updates aren't deduplicated.
- Around line 63-99: The posted event_type and error handling must be fixed: in
postCentralWorkflowProjection update the body.event_type to a contract-allowed
type (use an allowed prefix such as "task_projected" or map workflow.projected →
an approved event name per the central events contract) instead of
"workflow.projected"; make the idempotency_key more granular by including a
stable metadata fingerprint or projection.updatedAt (e.g.,
`workflow:${projection.aggregateId}:${projection.status}:${hash(projection.metadata)}`)
so metadata-only PATCHes aren’t deduped; and change the response handling (the
fetch block that checks res.ok) to throw or reject with a descriptive Error when
!res.ok (instead of only console.warn) so callers receive and can handle
failures.
In `@server/routes/import.ts`:
- Around line 512-567: The loop over parsed rows is doing per-row network calls
(hdproDeduplicationHash -> storage.getTransactionByExternalId and
storage.createTransaction) which serializes thousands of round trips; refactor
to process rows in chunks (e.g., 100-500) by (1) mapping the chunk to
externalIds via hdproDeduplicationHash, (2) calling a batch lookup like
storage.getTransactionsByExternalIds(externalIds) (add this method if missing)
to find existing externalIds in one call, (3) filter out duplicates in the chunk
and prepare create payloads, and (4) perform bulk inserts or parallelize
createTransaction calls for that filtered chunk using Promise.all /
Promise.allSettled (or add storage.createTransactions batch API) while still
updating jobNameStats, suspenseCount, and propertyMap lookups (entityKey,
classification via classifyHDProItem) before the batch insert; replace per-row
awaits of storage.getTransactionByExternalId and storage.createTransaction with
this chunked flow referencing hdproDeduplicationHash,
storage.getTransactionByExternalId/createTransaction, classifyHDProItem,
normalizeHDJobName, jobNameStats, and propertyMap.
- Around line 247-277: The current loop uses parseNum(fields[x]) and accesses up
to fields[20] even though it only guards for fields.length < 16; this will throw
when trailing columns are missing. Fix by either expanding the row-length guard
to require at least 21 columns before pushing (e.g., check fields.length >= 21)
or make parseNum robust to undefined/null (handle non-string inputs by treating
them as empty and returning 0) and then keep the existing guard; update
references in this block (parseNum, rows.push, fields, parseCSVLine) accordingly
so missing trailing columns no longer cause exceptions during import.
- Around line 588-595: The response currently returns 422 when imported === 0
even for successful dedupe-only imports; update the status logic in the JSON
response (the return c.json({...}, imported > 0 ? 200 : 422) block) to return
200 not only when imported > 0 but also when no real errors occurred and rows
were skipped (e.g. imported === 0 && skipped > 0 && errors.length === 0). Ensure
you reference the variables imported, skipped and errors to decide the status
code so idempotent re-uploads return 200.
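The finer-grained idempotency key described in the `central-workflows.ts` prompt above might look like the following. The FNV-1a fingerprint is illustrative only; any stable hash of the metadata works:

```typescript
// Small, dependency-free FNV-1a hash used here purely as a stable fingerprint.
function fnv1a(s: string): string {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16).padStart(8, '0');
}

// Key includes a metadata fingerprint so metadata-only updates
// are not collapsed by the (source_system, idempotency_key) constraint.
function idempotencyKey(
  aggregateId: string,
  status: string,
  metadata: Record<string, unknown>,
): string {
  return `workflow:${aggregateId}:${status}:${fnv1a(JSON.stringify(metadata))}`;
}
```

Note that `JSON.stringify` is key-order sensitive, so a canonicalizing serializer would be safer in practice.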
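The chunked import flow from the `import.ts` prompt above, reduced to a generic sketch. `getTransactionsByExternalIds` and `createTransactions` are the batch APIs the prompt proposes, not existing storage methods:

```typescript
// Chunked dedup + bulk insert: two round trips per chunk instead of two per row.
async function importInChunks<T>(
  rows: T[],
  externalIdOf: (row: T) => Promise<string>,
  storage: {
    getTransactionsByExternalIds(ids: string[]): Promise<Set<string>>;
    createTransactions(rows: { externalId: string; row: T }[]): Promise<void>;
  },
  chunkSize = 250,
): Promise<{ imported: number; skipped: number }> {
  let imported = 0;
  let skipped = 0;
  for (let i = 0; i < rows.length; i += chunkSize) {
    const chunk = rows.slice(i, i + chunkSize);
    const ids = await Promise.all(chunk.map(externalIdOf));
    // One batched lookup for the whole chunk.
    const existing = await storage.getTransactionsByExternalIds(ids);
    const fresh = chunk
      .map((row, j) => ({ externalId: ids[j], row }))
      .filter((r) => !existing.has(r.externalId));
    skipped += chunk.length - fresh.length;
    // One bulk insert for the non-duplicate remainder.
    if (fresh.length) await storage.createTransactions(fresh);
    imported += fresh.length;
  }
  return { imported, skipped };
}
```

Per-row work such as `normalizeHDJobName`/`classifyHDProItem` and the stats counters would run inside the chunk mapping, before the bulk insert.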
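The 200-vs-422 decision described in the last prompt above, as a small pure function (names are illustrative):

```typescript
// Dedupe-only re-uploads are a success, not a client error.
function importStatus(imported: number, skipped: number, errorCount: number): number {
  if (imported > 0) return 200;
  if (skipped > 0 && errorCount === 0) return 200; // idempotent re-upload
  return 422;
}
```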
📒 Files selected for processing (6):

- CLAUDE.md
- server/env.ts
- server/lib/central-workflows.ts
- server/routes/import.ts
- server/routes/reports.ts
- server/routes/workflows.ts
```markdown
### Phase 5: ChittyOS Ecosystem Integration (Partially Completed)
- ✅ Replace demo auth with ChittyID (default login path with graceful email/password fallback)
- ✅ Expose financial data as MCP resources
- ✅ Log to ChittyChronicle (audit trail for classification + COA mutations)
```
Keep the Phase 5 checklist aligned with the current implementation.
Line 915 says ChittyID has replaced demo auth, but the same doc still describes email/password session auth as the active path and lists ChittyID as pending. Line 917 also marks ChittyChronicle complete, while this PR's new projection code targets the central context service instead. Please avoid marking these items done until the doc and code agree.
Based on learnings: Replace demo email/password authentication with ChittyID SSO integration before production deployment; maintain graceful fallback to email/password for development. Based on learnings: Follow ChittyOS integration points: ChittyID for authentication, ChittyConnect for multi-service integration, ChittyChronicle for audit logging, and ChittyCert for secure service connections.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@CLAUDE.md` around lines 914 - 917, Update the Phase 5 checklist under "Phase
5: ChittyOS Ecosystem Integration" so it matches the real implementation: change
the ChittyID bullet to indicate SSO integration is pending for production (note
that demo email/password remains the active path and must be listed as the dev
fallback), revert the ChittyChronicle bullet to pending (since projection code
now targets the central context service), and add a short note that production
requires replacing demo auth with ChittyID SSO while preserving email/password
as a dev fallback; reference the "Phase 5: ChittyOS Ecosystem Integration"
heading and the ChittyID and ChittyChronicle checklist items when making these
edits.
```typescript
    event_type: 'workflow.projected',
    source_system: 'chittyfinance',
    idempotency_key: `workflow:${projection.aggregateId}:${projection.status}`,
    occurred_at: now,
    actor_chitty_id: 'service:finance.chitty.cc',
    payload: {
      aggregateType: 'workflow',
      aggregateId: projection.aggregateId,
      tenantId: projection.tenantId,
      title: projection.title,
      description: projection.description ?? null,
      workflowType: projection.workflowType,
      status: projection.status,
      claimable: projection.claimable ?? defaultClaimable(projection.status),
      lane: projection.lane ?? 'progress',
      progressState: projection.progressState ?? defaultProgressState(projection.status),
      parentRef: projection.parentRef ?? `tenant:${projection.tenantId}`,
      fractalKey: `${projection.tenantId}:${projection.workflowType}:${projection.aggregateId}`,
      metadata: projection.metadata ?? {},
    },
  };

  try {
    const res = await fetch(`${base}/v1/events`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Source-Service': 'finance.chitty.cc',
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify(body),
      signal: AbortSignal.timeout(5000),
    });

    if (!res.ok) {
      console.warn(`[central-workflows] POST /v1/events returned ${res.status}`);
    }
```
❓ Verification inconclusive
Does the current context.chitty.cc POST /v1/events contract accept event_type = "workflow.projected"? If not, what event types are currently supported?
No, the current context.chitty.cc POST /v1/events contract does not accept event_type = "workflow.projected". No public documentation or sources confirm the existence of this specific endpoint or list its supported event types. Searches across general web, ChittyOS repositories (like chittycontext and chittyid-client), and related domains (id.chitty.cc, registry.chitty.cc) yielded no matching API details. The endpoint appears to be either private/internal, non-existent, or undocumented publicly as of 2026-04-16.
Citations:
- 1: https://github.com/chittyos/chittycontext
- 2: https://ittybit.com/api/events
- 3: https://registry.npmjs.org/%40chittyos%2Fchittyid-client
Script executed:
```shell
# Find the documentation file mentioned in the review
find . -name "2026-03-03-central-sot-v1-hardening.md" -o -name "*central-sot*" -o -path "*/docs/plans/*" -type f | head -20
```

Repository: chittyapps/chittyfinance
Exit code: 0
stdout:
```
./docs/plans/2026-02-23-chittyfinance-hono-implementation.md
./docs/plans/2026-02-23-chittyfinance-hono-integration-design.md
./docs/plans/2026-02-26-phase4-property-financial-api-design.md
./docs/plans/2026-02-26-phase4-property-financial-api.md
./docs/plans/2026-02-26-property-portfolio-hub-design.md
./docs/plans/2026-02-26-property-portfolio-hub-plan.md
./docs/plans/2026-03-03-central-sot-v1-hardening.md
```
Script executed:
```shell
# Check if the file exists and read lines around 72-100
if [ -f "docs/plans/2026-03-03-central-sot-v1-hardening.md" ]; then
  wc -l "docs/plans/2026-03-03-central-sot-v1-hardening.md"
  echo "---"
  sed -n '60,110p' "docs/plans/2026-03-03-central-sot-v1-hardening.md"
fi
```

Repository: chittyapps/chittyfinance
Exit code: 0
stdout:
```
175 docs/plans/2026-03-03-central-sot-v1-hardening.md
---
  - Gateway appends; projectors update task/state read models.
3. Session end
  - Client sends `POST /v1/sessions/end`.
  - Gateway computes summary/projection deltas and emits sync markers.
  - Local mirrors pull from `GET /v1/contexts/{id}/state?since=...`.
4. Audit and trust
  - All events are attributable to ChittyID.
  - Trust/DNA calculators consume event stream asynchronously.
  - Chronicle/ledger links are recorded as event metadata.

## 5) Minimal v1 Contracts

Required endpoints:
- `POST /v1/sessions/start`
- `POST /v1/sessions/end`
- `POST /v1/events`
- `GET /v1/contexts/{context_id}/state`
- `GET /v1/contexts/{context_id}/timeline`
- `POST /v1/sync/pull` (for local mirror reconciliation)

Required event classes:
- `session_started`, `session_ended`
- `checkpoint_saved`, `state_compacted`
- `task_created`, `task_updated`, `todo_updated`
- `decision_recorded`, `artifact_linked`
- `sync_conflict_detected`, `sync_conflict_resolved`

Idempotency and ordering:
- Unique constraint on `(source_system, idempotency_key)`
- Logical ordering field: `context_seq` assigned server-side
- Accept out-of-order ingress; replay ordered by `context_seq`

## 6) Security and Governance Baseline

AuthN/AuthZ:
- Service and user tokens from ChittyAuth
- Scope examples:
  - `context:write`
  - `context:read`
  - `context:admin`
  - `context:trust:compute`

Governance:
- ChittyID remains sole identity mint authority.
- Synthetic vs natural actors must both map to auditable ChittyIDs.
- All write calls require actor identity + channel identity.

Compliance controls:
- Immutable append ledger for events
```
Script executed:
```shell
# Search for workflow.projected references in the codebase
rg "workflow\.projected" --type ts --type tsx -A 3 -B 3
```

Repository: chittyapps/chittyfinance
Exit code: 2
stderr:

```
unrecognized file type: tsx
```
Script executed:
```shell
# Look at the actual central-workflows.ts file to understand context
wc -l server/lib/central-workflows.ts
head -70 server/lib/central-workflows.ts
```

Repository: chittyapps/chittyfinance
Exit code: 0
stdout:
```
116 server/lib/central-workflows.ts
```

```typescript
/**
 * Central workflow/progress projector.
 *
 * Mirrors local workflow lifecycle events into the central, claimable
 * context/progress system when configured. This is intentionally
 * fire-and-forget so local app flows remain authoritative and responsive.
 */

interface CentralWorkflowEnv {
  CHITTY_CONTEXT_BASE?: string;
  CHITTY_CONTEXT_TOKEN?: string;
  CHITTY_AUTH_SERVICE_TOKEN?: string;
}

export interface CentralWorkflowProjection {
  aggregateId: string;
  tenantId: string;
  workflowType: string;
  title: string;
  status: string;
  description?: string | null;
  claimable?: boolean;
  lane?: 'approvals' | 'exceptions' | 'delegation' | 'close' | 'progress';
  progressState?: 'queued' | 'ready' | 'active' | 'blocked' | 'completed';
  parentRef?: string;
  metadata?: Record<string, unknown>;
}

function resolveContextBase(env: CentralWorkflowEnv): string {
  return (env.CHITTY_CONTEXT_BASE || 'https://context.chitty.cc').replace(/\/$/, '');
}

function defaultClaimable(status: string): boolean {
  return ['requested', 'approved', 'in_progress', 'ready', 'open', 'blocked'].includes(status);
}

function defaultProgressState(status: string): CentralWorkflowProjection['progressState'] {
  switch (status) {
    case 'requested':
      return 'ready';
    case 'approved':
    case 'in_progress':
      return 'active';
    case 'completed':
      return 'completed';
    case 'rejected':
      return 'blocked';
    default:
      return 'queued';
  }
}

export async function postCentralWorkflowProjection(
  projection: CentralWorkflowProjection,
  env: CentralWorkflowEnv,
): Promise<void> {
  const token = env.CHITTY_CONTEXT_TOKEN || env.CHITTY_AUTH_SERVICE_TOKEN;
  if (!token) return;

  const now = new Date().toISOString();
  const base = resolveContextBase(env);
  const body = {
    event_type: 'workflow.projected',
    source_system: 'chittyfinance',
    idempotency_key: `workflow:${projection.aggregateId}:${projection.status}`,
    occurred_at: now,
    actor_chitty_id: 'service:finance.chitty.cc',
    payload: {
      aggregateType: 'workflow',
      aggregateId: projection.aggregateId,
```
Script executed:
# Read the rest of central-workflows.ts to see the full POST implementation
tail -n +70 server/lib/central-workflows.ts | head -50

Repository: chittyapps/chittyfinance
Exit code: 0
stdout:
aggregateId: projection.aggregateId,
tenantId: projection.tenantId,
title: projection.title,
description: projection.description ?? null,
workflowType: projection.workflowType,
status: projection.status,
claimable: projection.claimable ?? defaultClaimable(projection.status),
lane: projection.lane ?? 'progress',
progressState: projection.progressState ?? defaultProgressState(projection.status),
parentRef: projection.parentRef ?? `tenant:${projection.tenantId}`,
fractalKey: `${projection.tenantId}:${projection.workflowType}:${projection.aggregateId}`,
metadata: projection.metadata ?? {},
},
};
try {
const res = await fetch(`${base}/v1/events`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-Source-Service': 'finance.chitty.cc',
Authorization: `Bearer ${token}`,
},
body: JSON.stringify(body),
signal: AbortSignal.timeout(5000),
});
if (!res.ok) {
console.warn(`[central-workflows] POST /v1/events returned ${res.status}`);
}
} catch (err) {
console.warn('[central-workflows] projection failed:', err);
}
}
export function centralWorkflowLog(
c: { executionCtx: { waitUntil(p: Promise<unknown>): void } },
projection: CentralWorkflowProjection,
env: CentralWorkflowEnv,
): void {
const promise = postCentralWorkflowProjection(projection, env);
try {
c.executionCtx.waitUntil(promise);
} catch {
// Test environment / non-Workers runtime.
}
}
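The `centralWorkflowLog` listing above ends here. Its fire-and-forget shape can be reduced to a small framework-free sketch; `ctx` below stands in for the Workers `ExecutionContext` and is an assumption of this sketch, not code from the PR:

```typescript
// Fire-and-forget: hand the projection promise to waitUntil so the HTTP
// response is never blocked on it; failures surface only as warnings.
function fireAndForget(
  ctx: { waitUntil(p: Promise<unknown>): void },
  work: () => Promise<void>,
): void {
  const promise = work().catch((err) => {
    console.warn('[projection] failed:', err);
  });
  try {
    ctx.waitUntil(promise);
  } catch {
    // Test environment / non-Workers runtime: the promise still runs detached.
  }
}
```

This mirrors the try/catch around `c.executionCtx.waitUntil` in the file: the local flow stays authoritative, and projection errors never propagate to the caller.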
Script executed:
# Search for other workflow.projected or similar event posts
rg "workflow\.projected|event_type.*workflow|POST.*events" --type ts -A 3 -B 3

Repository: chittyapps/chittyfinance
Exit code: 0
stdout:
server/routes/google.ts- }
server/routes/google.ts-});
server/routes/google.ts-
server/routes/google.ts:// POST /api/google/calendar/events — Create calendar event
server/routes/google.ts-googleRoutes.post('/api/google/calendar/events', async (c) => {
server/routes/google.ts- const storage = c.get('storage');
server/routes/google.ts- const tenantId = c.get('tenantId');
--
server/lib/central-workflows.ts- const now = new Date().toISOString();
server/lib/central-workflows.ts- const base = resolveContextBase(env);
server/lib/central-workflows.ts- const body = {
server/lib/central-workflows.ts: event_type: 'workflow.projected',
server/lib/central-workflows.ts- source_system: 'chittyfinance',
server/lib/central-workflows.ts- idempotency_key: `workflow:${projection.aggregateId}:${projection.status}`,
server/lib/central-workflows.ts- occurred_at: now,
--
server/lib/central-workflows.ts- });
server/lib/central-workflows.ts-
server/lib/central-workflows.ts- if (!res.ok) {
server/lib/central-workflows.ts: console.warn(`[central-workflows] POST /v1/events returned ${res.status}`);
server/lib/central-workflows.ts- }
server/lib/central-workflows.ts- } catch (err) {
server/lib/central-workflows.ts- console.warn('[central-workflows] projection failed:', err);
--
client/src/components/property/GoogleCalendarEmbed.tsx- e.preventDefault();
client/src/components/property/GoogleCalendarEmbed.tsx- setSubmitting(true);
client/src/components/property/GoogleCalendarEmbed.tsx- try {
client/src/components/property/GoogleCalendarEmbed.tsx: await apiRequest('POST', '/api/google/calendar/events', {
client/src/components/property/GoogleCalendarEmbed.tsx- calendarId: calendarId || 'primary',
client/src/components/property/GoogleCalendarEmbed.tsx- summary,
client/src/components/property/GoogleCalendarEmbed.tsx- description,
Script executed:
# Look at workflows.ts to understand when projections are emitted
wc -l server/routes/workflows.ts
grep -n "postCentralWorkflowProjection\|projection" server/routes/workflows.ts | head -20

Repository: chittyapps/chittyfinance
Exit code: 0
stdout:
146 server/routes/workflows.ts
Script executed:
# Read the workflows.ts file to see where centralWorkflowLog is called
cat -n server/routes/workflows.ts | grep -A 5 -B 5 "centralWorkflowLog\|CentralWorkflowProjection"

Repository: chittyapps/chittyfinance
Exit code: 0
stdout:
1 import { Hono } from 'hono';
2 import type { HonoEnv } from '../env';
3 import { centralWorkflowLog } from '../lib/central-workflows';
4
5 export const workflowRoutes = new Hono<HonoEnv>();
6
7 // GET /api/workflows — List workflows by property
8 workflowRoutes.get('/api/workflows', async (c) => {
--
30 costEstimate: body.costEstimate,
31 status: 'requested',
32 metadata: body.metadata,
33 });
34
35 centralWorkflowLog(c, {
36 aggregateId: workflow.id,
37 tenantId,
38 workflowType: workflow.type,
39 title: workflow.title,
40 description: workflow.description,
--
63 metadata: body.metadata,
64 });
65
66 if (!workflow) return c.json({ error: 'Workflow not found' }, 404);
67
68 centralWorkflowLog(c, {
69 aggregateId: workflow.id,
70 tenantId: workflow.tenantId,
71 workflowType: workflow.type,
72 title: workflow.title,
73 description: workflow.description,
--
93 metadata: { approvedAt: new Date().toISOString(), approvedBy: c.get('userId') },
94 });
95
96 if (!workflow) return c.json({ error: 'Workflow not found' }, 404);
97
98 centralWorkflowLog(c, {
99 aggregateId: workflow.id,
100 tenantId: workflow.tenantId,
101 workflowType: workflow.type,
102 title: workflow.title,
103 description: workflow.description,
--
124 metadata: { completedAt: new Date().toISOString() },
125 });
126
127 if (!workflow) return c.json({ error: 'Workflow not found' }, 404);
128
129 centralWorkflowLog(c, {
130 aggregateId: workflow.id,
131 tenantId: workflow.tenantId,
132 workflowType: workflow.type,
133 title: workflow.title,
134 description: workflow.description,
Script executed:
# Also check where the projection is created to understand the status field behavior
cat -n server/routes/workflows.ts | head -80

Repository: chittyapps/chittyfinance
Exit code: 0
stdout:
1 import { Hono } from 'hono';
2 import type { HonoEnv } from '../env';
3 import { centralWorkflowLog } from '../lib/central-workflows';
4
5 export const workflowRoutes = new Hono<HonoEnv>();
6
7 // GET /api/workflows — List workflows by property
8 workflowRoutes.get('/api/workflows', async (c) => {
9 const storage = c.get('storage');
10 const tenantId = c.get('tenantId');
11 const propertyId = c.req.query('propertyId');
12
13 const workflows = await storage.getWorkflows(tenantId, propertyId ?? undefined);
14 return c.json(workflows);
15 });
16
17 // POST /api/workflows — Create workflow (maintenance request, expense approval, etc.)
18 workflowRoutes.post('/api/workflows', async (c) => {
19 const storage = c.get('storage');
20 const tenantId = c.get('tenantId');
21 const body = await c.req.json();
22
23 const workflow = await storage.createWorkflow({
24 tenantId,
25 propertyId: body.propertyId,
26 type: body.type || 'maintenance_request',
27 title: body.title,
28 description: body.description,
29 requestor: body.requestor,
30 costEstimate: body.costEstimate,
31 status: 'requested',
32 metadata: body.metadata,
33 });
34
35 centralWorkflowLog(c, {
36 aggregateId: workflow.id,
37 tenantId,
38 workflowType: workflow.type,
39 title: workflow.title,
40 description: workflow.description,
41 status: workflow.status,
42 claimable: true,
43 lane: workflow.status === 'requested' ? 'approvals' : 'progress',
44 metadata: {
45 propertyId: workflow.propertyId,
46 requestor: workflow.requestor,
47 costEstimate: workflow.costEstimate,
48 localTable: 'workflows',
49 },
50 }, c.env);
51
52 return c.json(workflow, 201);
53 });
54
55 // PATCH /api/workflows/:id — Update workflow status
56 workflowRoutes.patch('/api/workflows/:id', async (c) => {
57 const storage = c.get('storage');
58 const id = c.req.param('id');
59 const body = await c.req.json();
60
61 const workflow = await storage.updateWorkflow(id, {
62 status: body.status,
63 metadata: body.metadata,
64 });
65
66 if (!workflow) return c.json({ error: 'Workflow not found' }, 404);
67
68 centralWorkflowLog(c, {
69 aggregateId: workflow.id,
70 tenantId: workflow.tenantId,
71 workflowType: workflow.type,
72 title: workflow.title,
73 description: workflow.description,
74 status: workflow.status,
75 lane: workflow.status === 'requested' ? 'approvals' : 'progress',
76 metadata: {
77 propertyId: workflow.propertyId,
78 localTable: 'workflows',
79 syncSource: 'PATCH /api/workflows/:id',
80 },
workflow.projected is not in the documented central events contract and will fail silently.
The POST /v1/events contract documented in docs/plans/2026-03-03-central-sot-v1-hardening.md:72-100 restricts event types to session_*, checkpoint_saved, state_compacted, task_*, decision_recorded, artifact_linked, and sync_conflict_*. Posting workflow.projected will 4xx, but failures are only logged as warnings and the feature silently becomes a no-op since postCentralWorkflowProjection does not propagate errors.
Additionally, the idempotency key (built from aggregateId and status only) is too coarse: metadata-only PATCH updates with the same status will be deduplicated away, leaving stale projected state in the central system.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/lib/central-workflows.ts` around lines 63 - 99, The posted event_type
and error handling must be fixed: in postCentralWorkflowProjection update the
body.event_type to a contract-allowed type (use an allowed prefix such as
"task_projected" or map workflow.projected → an approved event name per the
central events contract) instead of "workflow.projected"; make the
idempotency_key more granular by including a stable metadata fingerprint or
projection.updatedAt (e.g.,
`workflow:${projection.aggregateId}:${projection.status}:${hash(projection.metadata)}`)
so metadata-only PATCHes aren’t deduped; and change the response handling (the
fetch block that checks res.ok) to throw or reject with a descriptive Error when
!res.ok (instead of only console.warn) so callers receive and can handle
failures.
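A minimal sketch of the finer-grained idempotency key suggested above. FNV-1a is used here purely for illustration (the real code could use SHA-256, as it already does for HD Pro dedup), and flat metadata is assumed so sorted top-level keys give a stable canonical form:

```typescript
// Fingerprint the projection metadata so metadata-only updates produce
// distinct idempotency keys instead of being deduplicated upstream.
function metadataFingerprint(metadata: Record<string, unknown>): string {
  // Array replacer sorts/filters top-level keys for a deterministic string.
  const canonical = JSON.stringify(metadata, Object.keys(metadata).sort());
  let h = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (let i = 0; i < canonical.length; i++) {
    h ^= canonical.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept in uint32 range
  }
  return h.toString(16).padStart(8, '0');
}

function buildIdempotencyKey(
  aggregateId: string,
  status: string,
  metadata: Record<string, unknown>,
): string {
  return `workflow:${aggregateId}:${status}:${metadataFingerprint(metadata)}`;
}
```

With this shape, two PATCHes that keep the same status but change metadata get different keys, so neither is dropped by the upstream `(source_system, idempotency_key)` uniqueness constraint.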
idempotency_key: `workflow:${projection.aggregateId}:${projection.status}`,
occurred_at: now,
🧩 Analysis chain
🏁 Script executed:
find . -type f -name "*.ts" | grep -E "(central-workflows|workflows)" | head -20
Repository: chittyapps/chittyfinance
Length of output: 130
🏁 Script executed:
cat -n server/lib/central-workflows.ts | head -100
Repository: chittyapps/chittyfinance
Length of output: 3984
🏁 Script executed:
cat -n server/routes/workflows.ts | head -150
Repository: chittyapps/chittyfinance
Length of output: 5331
🏁 Script executed:
cat -n server/storage.ts | grep -A 30 "updateWorkflow"
Repository: chittyapps/chittyfinance
Length of output: 117
🏁 Script executed:
rg "updateWorkflow" --type ts -B 2 -A 10 server/
Repository: chittyapps/chittyfinance
Length of output: 3038
🏁 Script executed:
cat -n server/storage/system.ts | sed -n '100,200p'
Repository: chittyapps/chittyfinance
Length of output: 3936
🏁 Script executed:
rg "title|description" server/routes/workflows.ts -B 2 -A 2
Repository: chittyapps/chittyfinance
Length of output: 1043
🏁 Script executed:
cat -n server/lib/central-workflows.ts | sed -n '60,85p'
Repository: chittyapps/chittyfinance
Length of output: 1306
🏁 Script executed:
rg "centralWorkflowLog" server/lib/central-workflows.ts -B 5 -A 15
Repository: chittyapps/chittyfinance
Length of output: 514
Include more change information in the idempotency key to avoid collisions on metadata-only updates.
The current key only varies by aggregateId and status. Since PATCH /api/workflows/:id can update metadata while keeping the same status, multiple updates with different metadata produce identical idempotency keys. Under the upstream (source_system, idempotency_key) uniqueness constraint, subsequent metadata-only updates will be deduplicated and dropped, leaving the central state stale.
Consider including a hash of the changed fields or metadata in the key, or using a combination of status and a timestamp/occurrence to ensure each distinct change gets its own deduplication window.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/lib/central-workflows.ts` around lines 65 - 66, The idempotency_key
currently built as `workflow:${projection.aggregateId}:${projection.status}` is
too coarse and causes metadata-only PATCHes to collide; update the code that
sets idempotency_key to include a deterministic fingerprint of the change (e.g.,
a hash of projection.metadata or the changed fields) or append a high-resolution
occurrence/timestamp to guarantee uniqueness for distinct updates; locate where
idempotency_key is constructed (the idempotency_key assignment using
projection.aggregateId and projection.status) and replace it with a string that
combines those existing parts plus either a metadata hash or occurrence value so
metadata-only updates aren't deduplicated.
for (const line of dataLines) {
  const fields = parseCSVLine(line);
  if (fields.length < 16) continue;

  const parseNum = (s: string) => {
    const cleaned = s.replace(/[$,]/g, '').trim();
    return cleaned ? parseFloat(cleaned) : 0;
  };

  rows.push({
    date: fields[0],
    storeNumber: fields[1],
    transactionId: fields[2],
    registerNumber: fields[3],
    jobName: fields[4],
    skuNumber: fields[5],
    skuDescription: fields[6],
    quantity: parseNum(fields[7]) || 1,
    unitPrice: parseNum(fields[8]),
    departmentName: fields[9] || '',
    className: fields[10] || '',
    subclassName: fields[11] || '',
    programDiscountAmount: parseNum(fields[12]),
    otherDiscountAmount: parseNum(fields[14]),
    extendedRetail: parseNum(fields[15]),
    netUnitPrice: parseNum(fields[16]),
    internetSku: fields[17] || '',
    purchaser: fields[18] || '',
    orderNumber: fields[19] || '',
    invoiceNumber: fields[20] || '',
  });
Handle short HD Pro rows before numeric parsing.
The guard at Line 249 only requires 16 columns, but Lines 269-276 read up through fields[20] and call parseNum(fields[16]). A row with missing trailing columns will throw on s.replace(...) and abort the whole import instead of being skipped.
Suggested fix
- if (fields.length < 16) continue;
+ if (fields.length < 21) continue;
- const parseNum = (s: string) => {
- const cleaned = s.replace(/[$,]/g, '').trim();
+ const parseNum = (s?: string) => {
+ const cleaned = (s ?? '').replace(/[$,]/g, '').trim();
return cleaned ? parseFloat(cleaned) : 0;
};

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/routes/import.ts` around lines 247 - 277, The current loop uses
parseNum(fields[x]) and accesses up to fields[20] even though it only guards for
fields.length < 16; this will throw when trailing columns are missing. Fix by
either expanding the row-length guard to require at least 21 columns before
pushing (e.g., check fields.length >= 21) or make parseNum robust to
undefined/null (handle non-string inputs by treating them as empty and returning
0) and then keep the existing guard; update references in this block (parseNum,
rows.push, fields, parseCSVLine) accordingly so missing trailing columns no
longer cause exceptions during import.
for (const row of parsed) {
  // Skip zero-amount or return-only rows
  const amount = row.netUnitPrice || row.extendedRetail || row.unitPrice;
  if (!amount && amount !== 0) continue;

  const externalId = await hdproDeduplicationHash(row.transactionId, row.skuNumber, row.date);
  const existing = await storage.getTransactionByExternalId(externalId, tenantId);
  if (existing) {
    skipped++;
    continue;
  }

  const entityKey = normalizeHDJobName(row.jobName);
  const classification = classifyHDProItem(row.departmentName, row.className, row.subclassName, row.skuDescription);
  const isSuspense = entityKey === 'suspense' || classification.code === '9010';
  if (isSuspense) suspenseCount++;

  // Track job name stats for the response
  const rawJob = row.jobName.trim() || '(blank)';
  if (!jobNameStats[rawJob]) jobNameStats[rawJob] = { count: 0, entity: entityKey };
  jobNameStats[rawJob].count++;

  // Resolve property if applicable
  const prop = propertyMap[entityKey];

  try {
    await storage.createTransaction({
      tenantId,
      accountId,
      amount: String(Math.abs(amount)),
      type: amount >= 0 ? 'expense' : 'income', // HD Pro purchases are expenses; negative = returns
      category: row.departmentName || 'home_depot',
      description: `HD Pro: ${row.skuDescription}`,
      date: new Date(row.date),
      payee: 'Home Depot',
      propertyId: prop?.id || null,
      externalId,
      suggestedCoaCode: classification.code,
      classificationConfidence: String(isSuspense ? 0.100 : classification.confidence),
      metadata: {
        source: 'hdpro',
        storeNumber: row.storeNumber,
        transactionId: row.transactionId,
        skuNumber: row.skuNumber,
        department: row.departmentName,
        class: row.className,
        subclass: row.subclassName,
        quantity: row.quantity,
        unitPrice: row.unitPrice,
        discounts: row.programDiscountAmount + row.otherDiscountAmount,
        purchaser: row.purchaser,
        invoiceNumber: row.invoiceNumber,
        jobName: row.jobName,
        normalizedEntity: entityKey,
      },
    });
This 3K-row import path is still fully serialized.
Inside the loop each row does one getTransactionByExternalId(...) and usually one createTransaction(...), both awaited sequentially. For the 3K+ HD Pro exports called out in the PR, that becomes thousands of network round-trips on a single request and is likely to hit Workers/Neon latency limits. Please batch duplicate detection and insert work in chunks instead of processing the entire file row-by-row.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/routes/import.ts` around lines 512 - 567, The loop over parsed rows is
doing per-row network calls (hdproDeduplicationHash ->
storage.getTransactionByExternalId and storage.createTransaction) which
serializes thousands of round trips; refactor to process rows in chunks (e.g.,
100-500) by (1) mapping the chunk to externalIds via hdproDeduplicationHash, (2)
calling a batch lookup like storage.getTransactionsByExternalIds(externalIds)
(add this method if missing) to find existing externalIds in one call, (3)
filter out duplicates in the chunk and prepare create payloads, and (4) perform
bulk inserts or parallelize createTransaction calls for that filtered chunk
using Promise.all / Promise.allSettled (or add storage.createTransactions batch
API) while still updating jobNameStats, suspenseCount, and propertyMap lookups
(entityKey, classification via classifyHDProItem) before the batch insert;
replace per-row awaits of storage.getTransactionByExternalId and
storage.createTransaction with this chunked flow referencing
hdproDeduplicationHash, storage.getTransactionByExternalId/createTransaction,
classifyHDProItem, normalizeHDJobName, jobNameStats, and propertyMap.
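The chunked flow the review asks for can be sketched generically. The batch lookup passed in as `lookup` is hypothetical (e.g. a new `storage.getTransactionsByExternalIds`); the existing storage layer only exposes the per-row `getTransactionByExternalId`:

```typescript
interface ImportRow {
  externalId: string;
  payload: unknown;
}

// Process rows in chunks: one batched dedup lookup per chunk, then
// parallel inserts for the rows that survived dedup.
async function importInChunks(
  rows: ImportRow[],
  lookup: (ids: string[]) => Promise<Set<string>>,
  insert: (row: ImportRow) => Promise<void>,
  chunkSize = 200,
): Promise<{ imported: number; skipped: number }> {
  let imported = 0;
  let skipped = 0;
  for (let i = 0; i < rows.length; i += chunkSize) {
    const chunk = rows.slice(i, i + chunkSize);
    // One round-trip per chunk to find already-imported externalIds.
    const existing = await lookup(chunk.map((r) => r.externalId));
    const fresh = chunk.filter((r) => !existing.has(r.externalId));
    skipped += chunk.length - fresh.length;
    // allSettled keeps one bad row from aborting the rest of the batch.
    const results = await Promise.allSettled(fresh.map((r) => insert(r)));
    imported += results.filter((r) => r.status === 'fulfilled').length;
  }
  return { imported, skipped };
}
```

For a 3K-row HD Pro export this replaces ~6,000 sequential awaits with roughly `rows.length / chunkSize` lookup round-trips plus bounded parallel inserts, while the per-row stats (`jobNameStats`, `suspenseCount`, property resolution) can still be computed synchronously before each batch.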
return c.json({
  parsed: parsed.length,
  imported,
  skipped,
  suspenseCount,
  errors: errors.slice(0, 50),
  jobNameMapping: jobNameStats,
}, imported > 0 ? 200 : 422);
Duplicate-only reimports should not return 422.
When dedup works and every row is skipped, imported stays 0, so this responds with 422 even though the import succeeded as a no-op. That makes idempotent re-uploads look like failures and breaks the dedup verification flow in the PR.
Suggested fix
- return c.json({
+ const status = imported > 0 || skipped > 0 ? 200 : 422;
+ return c.json({
parsed: parsed.length,
imported,
skipped,
suspenseCount,
errors: errors.slice(0, 50),
jobNameMapping: jobNameStats,
- }, imported > 0 ? 200 : 422);
+ }, status);

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
const status = imported > 0 || skipped > 0 ? 200 : 422;
return c.json({
  parsed: parsed.length,
  imported,
  skipped,
  suspenseCount,
  errors: errors.slice(0, 50),
  jobNameMapping: jobNameStats,
}, status);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/routes/import.ts` around lines 588 - 595, The response currently
returns 422 when imported === 0 even for successful dedupe-only imports; update
the status logic in the JSON response (the return c.json({...}, imported > 0 ?
200 : 422) block) to return 200 not only when imported > 0 but also when no real
errors occurred and rows were skipped (e.g. imported === 0 && skipped > 0 &&
errors.length === 0). Ensure you reference the variables imported, skipped and
errors to decide the status code so idempotent re-uploads return 200.
Summary

- HD Pro purchase history import (POST /api/import/hdpro): ingests Home Depot Pro Purchase History CSV exports with 3K+ SKU-level line items. Includes job name normalization (24 variants → canonical entity keys), Department/Class → COA auto-classification, suspense routing for untagged items, and SHA-256 dedup.
- Central workflow projections: fire-and-forget mirroring of workflow lifecycle events to context.chitty.cc via centralWorkflowLog(). Covers workflow CRUD, approval, completion, and close-tax-automation.

HD Pro Job Name Normalization

COA Classification

HD Pro Department → COA: HARDWARE/PLUMBING/ELECTRICAL → 5070 (Repairs), CLEANING → 5020, APPLIANCES → 7030 (CapEx), TOOLS → 5080 (Supplies). Falls back to the existing findAccountCode() keyword matcher, then 9010 suspense.

Test plan

- npm run check ✅
- POST /api/import/hdpro with valid account ID

🤖 Generated with Claude Code
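The Department → COA rules above can be illustrated with a small lookup; the department names and COA codes come from the PR description, but the lookup shape and the `classifyDepartment` helper are assumptions of this sketch (the real code also consults a keyword matcher, findAccountCode(), before falling back to suspense):

```typescript
// Department/Class → COA auto-classification, per the mapping in the summary.
const HD_DEPT_TO_COA: Record<string, string> = {
  HARDWARE: '5070',   // Repairs
  PLUMBING: '5070',
  ELECTRICAL: '5070',
  CLEANING: '5020',
  APPLIANCES: '7030', // CapEx
  TOOLS: '5080',      // Supplies
};

function classifyDepartment(departmentName: string): string {
  // Unmatched departments route to 9010 suspense for L1→L2 review.
  return HD_DEPT_TO_COA[departmentName.toUpperCase().trim()] ?? '9010';
}
```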
Summary by CodeRabbit
New Features
Documentation
Configuration