
feat: HD Pro purchase history import + central workflow projections#93

Open
chitcommit wants to merge 2 commits into main from feat/vendor-import-hdpro

Conversation

@chitcommit
Contributor

@chitcommit chitcommit commented Apr 16, 2026

Summary

  • HD Pro Import (POST /api/import/hdpro): Ingests Home Depot Pro Purchase History CSV exports with 3K+ SKU-level line items. Includes job name normalization (24 variants → canonical entity keys), Department/Class → COA auto-classification, suspense routing for untagged items, and SHA-256 dedup.
  • Central Workflow Projections: Fire-and-forget projection of workflow lifecycle events to context.chitty.cc via centralWorkflowLog(). Covers workflow CRUD, approval, completion, and close-tax-automation.
  • Phase 5 docs: Updated CLAUDE.md to reflect completed ChittyID, MCP resources, and ChittyChronicle items.

HD Pro Job Name Normalization

| Raw Job Names | Canonical Entity |
| --- | --- |
| COZY RENOVATION, COZY RENO, COZY CASTLE RENO, cozy castle reno, COZY RENOCOZY RENOVATION, COZY RRENOVATION | cozy-castle |
| VILLA VISTA REMODEL, VILLA VISTA REMO, VILLA VISTA | villa-vista |
| ARIBIA MGMT | aribia-mgmt |
| CITY STUDIO | city-studio |
| APT ARLENE | apt-arlene |
| SURF 504 | cozy-castle (alias) |
| ADDISON | lakeside-loft (alias) |
| (blank), 0, 00, WG64330247 | suspense (9010) |

COA Classification

HD Pro Department → COA: HARDWARE/PLUMBING/ELECTRICAL → 5070 (Repairs), CLEANING → 5020, APPLIANCES → 7030 (CapEx), TOOLS → 5080 (Supplies). Falls back to existing findAccountCode() keyword matcher, then 9010 suspense.
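The classification chain above can be sketched as a small lookup with keyword and suspense fallbacks. This is an illustrative reconstruction from the description only: the constant and function names are hypothetical, not the actual module's exports, and the keyword matcher stands in for findAccountCode().

```typescript
// Hypothetical sketch of the Department → COA mapping described above.
// COA codes come from the PR description; names are illustrative.
const HD_DEPT_COA: Record<string, string> = {
  HARDWARE: "5070",   // Repairs
  PLUMBING: "5070",
  ELECTRICAL: "5070",
  CLEANING: "5020",
  APPLIANCES: "7030", // CapEx
  TOOLS: "5080",      // Supplies
};

const SUSPENSE_COA = "9010";

// Try the department table first, then a keyword matcher
// (standing in for findAccountCode()), then suspense.
function classifyByDepartment(
  department: string,
  keywordMatch: (description: string) => string | null,
  description = "",
): string {
  const direct = HD_DEPT_COA[department.trim().toUpperCase()];
  if (direct) return direct;
  return keywordMatch(description) ?? SUSPENSE_COA;
}
```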

Test plan

  • Type check passes (npm run check ✅)
  • Upload HD Pro CSV via POST /api/import/hdpro with valid account ID
  • Verify deduplication on re-import (all rows skipped)
  • Verify suspense count matches ~1,020 untagged + ~41 junk items
  • Verify job name stats in response match normalization expectations
  • Verify central workflow projections fire on workflow CRUD

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • HD Pro import pipeline: users can now import transactions via CSV with automated job name mapping and account classification
    • Central workflow projection system enabling enhanced workflow tracking and centralized event logging capabilities
  • Documentation

    • Phase 5 roadmap progress: authentication system updates, financial data integration completion, and expanded audit logging coverage
  • Configuration

    • New optional environment variables added for central context integration support

chitcommit and others added 2 commits April 16, 2026 05:49
Fire-and-forget projection of workflow lifecycle events (create, update,
approve, complete, close-tax-automation) to the central context service.
Adds CHITTY_CONTEXT_BASE and CHITTY_CONTEXT_TOKEN env bindings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds POST /api/import/hdpro endpoint that ingests Home Depot Pro Purchase
History CSV exports (3K+ SKU-level line items). Includes:

- Job name normalization map: 24 HD Pro job name variants (including typos
  like COZY RRENOVATION, concatenation errors like COZY RENOCOZY RENOVATION)
  mapped to canonical entity keys
- Department/Class → COA auto-classification: HARDWARE/PLUMBING/ELECTRICAL
  → 5070 (Repairs), CLEANING → 5020, APPLIANCES → 7030 (CapEx), etc.
- Untagged items (33% of dataset) routed to 9010 suspense for L1→L2 review
- SHA-256 dedup by transaction_id + SKU + date
- Full metadata preservation (store #, purchaser, invoice, discounts)
- Response includes job name mapping stats for import validation

Also updates Phase 5 status in CLAUDE.md to reflect completed items.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings April 16, 2026 05:52
@github-actions
Contributor

@coderabbitai review

Please evaluate:

  • Security implications
  • Credential exposure risk
  • Dependency supply chain concerns
  • Breaking API changes

@cloudflare-workers-and-pages

Deploying with Cloudflare Workers

✅ Deployment successful: chittyfinance, commit a212965, Apr 16 2026, 05:53 AM (UTC)

@coderabbitai
Contributor

coderabbitai bot commented Apr 16, 2026

📝 Walkthrough


The pull request introduces a central workflow projection system that mirrors local workflow lifecycle data to an external context/events API, adds HD Pro CSV import capability with job name normalization and COA classification, and integrates projection logging across workflow routes and tax automation reporting.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Documentation: CLAUDE.md | Updated Phase 5 roadmap: replaced demo authentication with ChittyID default login (with email/password fallback), marked MCP financial data exposure and ChittyChronicle logging as complete with expanded audit coverage, and noted ChittyCert issuance as in progress. |
| Environment Configuration: server/env.ts | Added two optional environment variables (CHITTY_CONTEXT_BASE and CHITTY_CONTEXT_TOKEN) to the exported Env interface for central context integration. |
| Central Workflow Infrastructure: server/lib/central-workflows.ts | New module implementing the workflow projection system with a CentralWorkflowProjection type and the functions postCentralWorkflowProjection() (POSTs to the external context API with timeout/error handling) and centralWorkflowLog() (schedules async posting via executionCtx.waitUntil()). |
| Workflow Route Integration: server/routes/workflows.ts, server/routes/reports.ts | Added centralWorkflowLog() calls across four mutating workflow routes (create, update, approve, complete) and the tax automation handler, capturing workflow identity, status, lane classification, claimability, and route-specific metadata (propertyId, requestor, costEstimate, syncSource). |
| HD Pro Import Pipeline: server/routes/import.ts | Added new POST /api/import/hdpro route with public functions normalizeHDJobName(), classifyHDProItem(), and parseHDProCSV(), plus internal deduplication logic; processes CSV rows with COA classification and suspense flagging, returning import statistics with job-name mapping. |

Sequence Diagram(s)

sequenceDiagram
    participant LocalApp as Local App
    participant CentralLog as centralWorkflowLog()
    participant CentralPost as postCentralWorkflowProjection()
    participant CentralAPI as Central Context API
    
    LocalApp->>LocalApp: Create/Update/Approve/Complete Workflow
    LocalApp->>CentralLog: Invoke with projection data
    CentralLog->>CentralPost: Call async projection
    CentralLog->>LocalApp: Schedule via executionCtx.waitUntil()
    CentralPost->>CentralPost: Resolve base URL & auth token
    alt Token Present
        CentralPost->>CentralPost: Build event payload (workflow.projected)
        CentralPost->>CentralAPI: POST /v1/events (5s timeout)
        CentralAPI-->>CentralPost: 2xx or non-2xx response
        CentralPost->>CentralPost: Log warnings on non-2xx
    else No Token
        CentralPost->>CentralPost: Return early (no-op)
    end
    CentralPost->>CentralLog: Promise resolves/rejects
    LocalApp->>LocalApp: Continue (non-blocking)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 Workflows dance through central streams,
Projections flow in HD dreams,
Routes log their every stride,
As classifications guide the tide!
New features hop and gently sync—
Code hops forward, faster than a blink!

🚥 Pre-merge checks | ✅ 3 passed

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title Check | ✅ Passed | The title clearly and concisely summarizes the two main features added: HD Pro purchase history import and central workflow projections, matching the primary changes across the changeset. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |




@claude

claude bot commented Apr 16, 2026

Code Review — PR #93: HD Pro Import + Central Workflow Projections

Overview

This PR adds two distinct features: (1) a Home Depot Pro Purchase History CSV importer with job name normalization and COA auto-classification, and (2) fire-and-forget workflow projection to context.chitty.cc. The overall architecture is sound and follows project conventions, but there are a few correctness issues and one significant performance problem worth addressing.


🔴 Critical

N+1 Query Pattern in HD Pro Import (server/routes/import.ts)

The import loop calls storage.getTransactionByExternalId() and storage.createTransaction() once per CSV row. With the 3K+ line items mentioned in the PR description, this is 6,000+ individual database round-trips:

for (const row of parsed) {
  const existing = await storage.getTransactionByExternalId(externalId, tenantId); // ← per row
  // ...
  await storage.createTransaction({ ... }); // ← per row
}

Recommended fix: Pre-fetch all existing external IDs for this account into a Set<string> before the loop, then batch inserts with storage.createTransactions() (or accumulate and bulk-insert). This is the same pattern used in the TurboTenant importer and reduces DB round-trips from O(n) to O(1) reads + O(1) bulk write.
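A rough sketch of that prefetch-then-batch shape, with assumed storage signatures (the real storage interface in this repo may differ):

```typescript
// Sketch of the prefetch + bulk-insert pattern suggested above.
// fetchExistingIds / bulkInsert model the storage calls; names are assumed.
interface Row {
  externalId: string;
  payload: unknown;
}

async function importRows(
  rows: Row[],
  fetchExistingIds: () => Promise<string[]>,   // one bulk read up front
  bulkInsert: (batch: Row[]) => Promise<void>, // one bulk write per chunk
  chunkSize = 500,
): Promise<{ inserted: number; skipped: number }> {
  // O(1) membership checks inside the loop instead of one query per row.
  const existing = new Set(await fetchExistingIds());
  const fresh = rows.filter((r) => !existing.has(r.externalId));
  for (let i = 0; i < fresh.length; i += chunkSize) {
    await bulkInsert(fresh.slice(i, i + chunkSize));
  }
  return { inserted: fresh.length, skipped: rows.length - fresh.length };
}
```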


🟡 Bugs / Correctness

Dead Code: entityTenantMap is never read

entityTenantMap is declared, partially populated from allTenants, and then... never used. All transactions are written with the caller's tenantId from context regardless of entity key. Either this mapping should drive tenantId assignment (so cozy-castle transactions go to the cozy-castle tenant), or the block should be removed to avoid confusion.

// This map is built but never consumed:
const entityTenantMap: Record<EntityKey, string | null> = { ... };
for (const t of allTenants) { /* populates aribia-mgmt, city-studio, apt-arlene */ }
// ... then propertyMap is used for propertyId but entityTenantMap is ignored

runId collision in reports.ts

const runId = `close-${Date.now()}`;

Date.now() has millisecond precision and is deterministic in Workers — concurrent requests could produce the same runId. Since this is also used as the aggregateId for the central projection (which uses it as an idempotency key), collisions could cause silent deduplication. Use crypto.randomUUID() instead.

CSV field index 13 is silently skipped

In parseHDProCSV:

programDiscountAmount: parseNum(fields[12]),
otherDiscountAmount: parseNum(fields[14]),  // index 13 is skipped

If the actual HD Pro export has a field at index 13 (e.g., a tax or fee column), every field from index 13 onward is offset by one, silently misassigning extendedRetail, netUnitPrice, etc. Please verify the actual column order against a real HD Pro CSV export and document the skipped column.

surf-504 and addison are phantom EntityKey members

These are listed in the EntityKey union and entityTenantMap initializer but normalizeHDJobName() never returns them — the map resolves them to cozy-castle and lakeside-loft directly. They're unreachable as entity keys and add noise. Remove them from the type and map, or document why they're reserved.


🟡 Reliability

Unparsed date format

date: new Date(row.date),

US CSV date formats like MM/DD/YYYY are parsed inconsistently across JS runtimes. V8 handles it today, but this is not spec-guaranteed. Use a split-and-construct approach or a known-safe format:

// Parse MM/DD/YYYY safely
const [m, d, y] = row.date.split('/');
date: new Date(Number(y), Number(m) - 1, Number(d)),

No request body size limit

A malformed or malicious upload could send a very large body that exhausts Worker memory before parseHDProCSV() runs. Cloudflare Workers has a 100MB body limit, but adding an explicit check (if (body.length > 20_000_000) return c.json({ error: '...' }, 413)) is defensive.


🟢 What's Working Well

  • Trust path respected: Using suggestedCoaCode (not coaCode) for auto-classification correctly follows the L1 write pattern documented in CLAUDE.md. ✅
  • Fire-and-forget correctness: centralWorkflowLog uses executionCtx.waitUntil() properly — the 5-second AbortSignal.timeout prevents runaway fetches, and the catch-and-warn pattern keeps local flows authoritative. ✅
  • Idempotent dedup: SHA-256 hash of transactionId|sku|date is a clean, stable dedup key. ✅
  • Exported pure functions: normalizeHDJobName and classifyHDProItem are exported and stateless — easy to unit test when tests are added. ✅
  • Error collection: errors.slice(0, 50) prevents bloated responses on bulk failures. ✅
  • claimable: false on complete: Correctly gates the completed-workflow projection from being re-claimed. ✅
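The dedup key called out above could be sketched as follows. This is an illustration only, using Node's synchronous hash for brevity (the actual importer may build it differently, e.g. via Web Crypto in Workers); the function name is hypothetical.

```typescript
import { createHash } from "node:crypto";

// Illustrative SHA-256 dedup key over transactionId|sku|date.
// The pipe-delimited field order must stay fixed so the key is
// stable across re-imports.
function hdproDedupKey(transactionId: string, sku: string, date: string): string {
  return createHash("sha256")
    .update(`${transactionId}|${sku}|${date}`)
    .digest("hex");
}
```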

Minor Suggestions

  • classifyHDProItem iterates Object.entries(HD_CLASS_COA) with includes() on every call — fine for current size but a prefix-trie or direct lookup would scale better if the lookup table grows.
  • The HD_JOB_NAME_MAP is hardcoded to property names. Consider loading aliases from a DB config table so property renames don't require a code deploy.
  • Add at minimum a smoke test for normalizeHDJobName covering the typo variants (cozy rrenovation, cozy renocozy renovation, blank, 0) since these are the most brittle part of the importer.
  • metadata.localRoute in the close-tax projection — if context.chitty.cc has a schema for this field, confirm the key name matches (localRoute vs localPath vs sourceRoute).
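The smoke test suggested above might look roughly like this, using a stand-in alias map built from the PR description (the real normalizeHDJobName and its full 24-variant table live in server/routes/import.ts):

```typescript
// Stand-in for normalizeHDJobName, reduced to the brittle variants
// named in the review; the real map covers 24 variants.
const ALIASES: Record<string, string> = {
  "COZY RENOVATION": "cozy-castle",
  "COZY RRENOVATION": "cozy-castle",         // typo variant
  "COZY RENOCOZY RENOVATION": "cozy-castle", // concatenation error
  "VILLA VISTA": "villa-vista",
};

function normalizeJobName(raw: string): string {
  const key = raw.trim().toUpperCase();
  // Blank and junk values ("0", "00") route to suspense.
  if (!key || /^0+$/.test(key)) return "suspense";
  return ALIASES[key] ?? "suspense";
}

// Smoke assertions over the brittle inputs:
console.assert(normalizeJobName("cozy rrenovation") === "cozy-castle");
console.assert(normalizeJobName("COZY RENOCOZY RENOVATION") === "cozy-castle");
console.assert(normalizeJobName("") === "suspense");
console.assert(normalizeJobName("0") === "suspense");
```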

Summary

The central-workflows module is clean and ready to merge. The HD Pro importer logic (normalization, classification, parsing) is well-structured, but the N+1 query loop should be addressed before merging: at the stated 3K+ row import volume it is likely to time out or exhaust connection pools in production. The dead entityTenantMap and the field-offset question are lower severity but worth clarifying.

🤖 Reviewed with Claude Code


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a212965309


Comment thread server/routes/import.ts

for (const line of dataLines) {
const fields = parseCSVLine(line);
if (fields.length < 16) continue;

P1: Enforce full HD Pro column count before parsing

parseHDProCSV only skips rows with fewer than 16 fields, but the code then dereferences through fields[20] and calls parseNum(fields[16]). If an export row has 16–20 columns (for example when trailing optional columns are omitted), parseNum receives undefined and throws on .replace, causing the whole import request to fail instead of handling the bad row gracefully. Validate for the full expected width (or guard each access) before parsing.


Comment thread server/routes/import.ts

for (const row of parsed) {
// Skip zero-amount or return-only rows
const amount = row.netUnitPrice || row.extendedRetail || row.unitPrice;

P1: Derive HD Pro amount from line total, not unit price

The import amount selection uses row.netUnitPrice first, which is a per-unit number, and does not apply row.quantity. For any line with quantity greater than 1, the recorded transaction amount is understated (e.g., qty 3 at $12 is imported as $12), which directly corrupts expense totals and downstream financial reporting. Amount should be based on the line total (or quantity * netUnitPrice) rather than raw unit price.
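A defensive line-total computation along those lines might look like this (field names mirror the parsed row shape quoted in this PR; whether extendedRetail is truly the line total would need confirming against a real export):

```typescript
interface HDProAmounts {
  quantity: number;
  netUnitPrice: number;
  extendedRetail: number; // assumed to be the line total when present
}

// Prefer the export's own extended (line-total) column; otherwise derive
// quantity * net unit price so qty > 1 lines are not understated.
function lineTotal(row: HDProAmounts): number {
  if (Number.isFinite(row.extendedRetail) && row.extendedRetail !== 0) {
    return row.extendedRetail;
  }
  return row.quantity * row.netUnitPrice;
}
```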


Contributor

Copilot AI left a comment


Pull request overview

Adds ingestion + normalization support for Home Depot Pro Purchase History exports, and introduces a “fire-and-forget” projector to mirror local workflow lifecycle events into the central context.chitty.cc workflow/progress system.

Changes:

  • Added POST /api/import/hdpro with job-name normalization, department/class→COA suggestions, suspense routing, and SHA-256 deduplication.
  • Added centralWorkflowLog() projector and hooked it into workflow CRUD/status routes plus close-tax-automation.
  • Updated environment bindings and Phase 5 documentation.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 11 comments.

| File | Description |
| --- | --- |
| server/routes/workflows.ts | Emits central workflow projections on create/update/approve/complete. |
| server/routes/reports.ts | Emits a central projection for the close-tax-automation run (runId). |
| server/routes/import.ts | Implements HD Pro CSV parsing, normalization/classification helpers, and the new import endpoint. |
| server/lib/central-workflows.ts | New projector utility posting workflow events to context.chitty.cc. |
| server/env.ts | Adds optional CHITTY_CONTEXT_* bindings. |
| CLAUDE.md | Updates Phase 5 status text. |


Comment on lines +68 to +72
centralWorkflowLog(c, {
aggregateId: workflow.id,
tenantId: workflow.tenantId,
workflowType: workflow.type,
title: workflow.title,

Copilot AI Apr 16, 2026


This endpoint calls storage.updateWorkflow(id, ...), but SystemStorage.updateWorkflow() updates by workflow ID without a tenant WHERE clause. Since this route also doesn’t validate the workflow’s tenantId against c.get('tenantId'), a caller who knows another tenant’s workflow ID could mutate it. Add an explicit tenant check before projecting/logging (or change the storage method to require tenantId).

Copilot uses AI. Check for mistakes.
Comment on lines +129 to +135
centralWorkflowLog(c, {
aggregateId: workflow.id,
tenantId: workflow.tenantId,
workflowType: workflow.type,
title: workflow.title,
description: workflow.description,
status: workflow.status,

Copilot AI Apr 16, 2026


Same tenant-scoping issue here as the other workflow mutations: updateWorkflow(id, ...) is not constrained by tenant in SystemStorage, and this route doesn’t validate the workflow belongs to c.get('tenantId') before completing it. Add a tenant ownership check (or require tenantId in the update query) to prevent cross-tenant completion by guessed IDs.

Comment thread server/routes/import.ts
/**
* Parse HD Pro Purchase History CSV.
* HD Pro CSVs have 6 metadata rows (company, phone, source, date range, export date, blank)
* followed by the actual header + data rows. Encoding is latin-1.

Copilot AI Apr 16, 2026


The comment says HD Pro exports are latin-1 encoded, but the endpoint reads the body via c.req.text() (UTF-8 decoding). If the input really is latin-1 this will corrupt non-ASCII characters; if it isn’t, the comment is misleading. Either decode arrayBuffer() with TextDecoder('latin1') or update the comment to match the actual behavior.

Suggested change
* followed by the actual header + data rows. Encoding is latin-1.
* followed by the actual header + data rows.
* This parser expects already-decoded text; any request/body character encoding must be handled
* before calling this function.

Comment thread server/routes/import.ts
Comment on lines +466 to +484
// Build entity→tenant mapping by loading all tenants the caller can see
const allTenants = await storage.getTenants();
const entityTenantMap: Record<EntityKey, string | null> = {
'aribia-mgmt': null,
'city-studio': null,
'apt-arlene': null,
'cozy-castle': null,
'lakeside-loft': null,
'villa-vista': null,
'personal-nick': null,
'colombia': null,
'surf-504': null,
'addison': null,
'chitty-consumables': null,
'chitty-furnishings': null,
'chitty-tools': null,
'suspense': null,
};


Copilot AI Apr 16, 2026


This route reads all active tenants via storage.getTenants() (which is not membership-scoped in SystemStorage) and builds entityTenantMap, but the map is never used afterwards. This is both unnecessary cross-tenant data access and wasted work; remove it, or switch to a user-scoped query (e.g., getUserTenants(c.get('userId'))) if you actually need tenant resolution here.

Suggested change
// Build entity→tenant mapping by loading all tenants the caller can see
const allTenants = await storage.getTenants();
const entityTenantMap: Record<EntityKey, string | null> = {
'aribia-mgmt': null,
'city-studio': null,
'apt-arlene': null,
'cozy-castle': null,
'lakeside-loft': null,
'villa-vista': null,
'personal-nick': null,
'colombia': null,
'surf-504': null,
'addison': null,
'chitty-consumables': null,
'chitty-furnishings': null,
'chitty-tools': null,
'suspense': null,
};

Comment thread server/routes/import.ts
Comment on lines +513 to +515
// Skip zero-amount or return-only rows
const amount = row.netUnitPrice || row.extendedRetail || row.unitPrice;
if (!amount && amount !== 0) continue;

Copilot AI Apr 16, 2026


The amount selection uses row.netUnitPrice || row.extendedRetail || row.unitPrice, which treats 0 as “missing” and can pick the wrong fallback. Also, if netUnitPrice/unitPrice are per-unit values, the current logic ignores quantity and discounts when computing the transaction amount. Prefer nullish checks (??) and compute a line total deterministically from the CSV’s intended total fields (e.g., extended totals and discount columns, or unit * quantity).

Comment thread server/routes/import.ts
Comment on lines +443 to +448
// POST /api/import/hdpro — import Home Depot Pro Purchase History CSV
// Requires X-Account-ID header or ?accountId query param
importRoutes.post('/api/import/hdpro', async (c) => {
const storage = c.get('storage');
const tenantId = c.get('tenantId');


Copilot AI Apr 16, 2026


There’s good scenario coverage for /api/import/turbotenant, but the new /api/import/hdpro endpoint and its helpers (parseHDProCSV, normalizeHDJobName, classifyHDProItem, dedup hash) have no tests. Adding a minimal scenario test (happy path + dedup + missing accountId + malformed CSV) would help prevent regressions, especially given the column indexing and sign/amount conventions.

Comment on lines +98 to +104
centralWorkflowLog(c, {
aggregateId: workflow.id,
tenantId: workflow.tenantId,
workflowType: workflow.type,
title: workflow.title,
description: workflow.description,
status: workflow.status,

Copilot AI Apr 16, 2026


Same tenant-scoping issue here: updateWorkflow(id, ...) isn’t tenant-filtered in storage, and this route doesn’t verify workflow.tenantId === c.get('tenantId') before returning/projecting. Please enforce tenant ownership for approve actions to prevent cross-tenant mutation by ID.

Comment thread server/routes/import.ts
Comment on lines +246 to +276

for (const line of dataLines) {
const fields = parseCSVLine(line);
if (fields.length < 16) continue;

const parseNum = (s: string) => {
const cleaned = s.replace(/[$,]/g, '').trim();
return cleaned ? parseFloat(cleaned) : 0;
};

rows.push({
date: fields[0],
storeNumber: fields[1],
transactionId: fields[2],
registerNumber: fields[3],
jobName: fields[4],
skuNumber: fields[5],
skuDescription: fields[6],
quantity: parseNum(fields[7]) || 1,
unitPrice: parseNum(fields[8]),
departmentName: fields[9] || '',
className: fields[10] || '',
subclassName: fields[11] || '',
programDiscountAmount: parseNum(fields[12]),
otherDiscountAmount: parseNum(fields[14]),
extendedRetail: parseNum(fields[15]),
netUnitPrice: parseNum(fields[16]),
internetSku: fields[17] || '',
purchaser: fields[18] || '',
orderNumber: fields[19] || '',
invoiceNumber: fields[20] || '',

Copilot AI Apr 16, 2026


parseHDProCSV() only guards if (fields.length < 16) continue;, but the row mapping indexes up to fields[20] and calls parseNum(fields[16]), etc. This can throw at runtime on shorter lines and/or silently mis-parse shifted columns. Tighten the minimum column check to the actual expected column count and handle missing trailing columns explicitly before indexing.

Suggested change: validate the full expected width, then destructure (replaces the per-index mapping quoted above).

const EXPECTED_HDPRO_COLUMNS = 21;
for (const line of dataLines) {
  const fields = parseCSVLine(line);
  if (fields.length < EXPECTED_HDPRO_COLUMNS) continue;

  const [
    date,
    storeNumber,
    transactionId,
    registerNumber,
    jobName,
    skuNumber,
    skuDescription,
    quantity,
    unitPrice,
    departmentName,
    className,
    subclassName,
    programDiscountAmount,
    ,               // index 13 is intentionally skipped
    otherDiscountAmount,
    extendedRetail,
    netUnitPrice,
    internetSku = '',
    purchaser = '',
    orderNumber = '',
    invoiceNumber = '',
  ] = fields.slice(0, EXPECTED_HDPRO_COLUMNS);

  const parseNum = (s: string) => {
    const cleaned = s.replace(/[$,]/g, '').trim();
    return cleaned ? parseFloat(cleaned) : 0;
  };

  rows.push({
    date,
    storeNumber,
    transactionId,
    registerNumber,
    jobName,
    skuNumber,
    skuDescription,
    quantity: parseNum(quantity) || 1,
    unitPrice: parseNum(unitPrice),
    departmentName: departmentName || '',
    className: className || '',
    subclassName: subclassName || '',
    programDiscountAmount: parseNum(programDiscountAmount),
    otherDiscountAmount: parseNum(otherDiscountAmount),
    extendedRetail: parseNum(extendedRetail),
    netUnitPrice: parseNum(netUnitPrice),
    internetSku: internetSku || '',
    purchaser: purchaser || '',
    orderNumber: orderNumber || '',
    invoiceNumber: invoiceNumber || '',

Comment thread server/routes/import.ts
Comment on lines +166 to +170
export function classifyHDProItem(
department: string,
className: string,
subclass: string,
description: string,

Copilot AI Apr 16, 2026


classifyHDProItem() takes a subclass parameter and the header comment mentions falling back through Class → Subclass, but subclass is never used. Either incorporate subclass-level matching (or pass it to findAccountCode) or drop the unused parameter/comment to avoid implying behavior that doesn’t exist.

Comment on lines +110 to +115
const promise = postCentralWorkflowProjection(projection, env);
try {
c.executionCtx.waitUntil(promise);
} catch {
// Test environment / non-Workers runtime.
}

Copilot AI Apr 16, 2026


centralWorkflowLog() always calls postCentralWorkflowProjection() (creating a running Promise) before attempting waitUntil. In non-Workers runtimes/tests where executionCtx isn’t available, the catch prevents waitUntil but the network request still runs in the background. Consider only invoking the projection when executionCtx.waitUntil exists (or provide an explicit opt-in), to avoid unexpected outbound calls in tests/local runs.

Suggested change: only invoke the projection when waitUntil is actually available, binding it so the method keeps its receiver (an unbound waitUntil call would throw in Workers), and guarding the executionCtx access since Hono's getter can throw outside Workers.

let waitUntil: ((p: Promise<unknown>) => void) | undefined;
try {
  waitUntil = c.executionCtx?.waitUntil?.bind(c.executionCtx);
} catch {
  // Test environment / non-Workers runtime: accessing executionCtx may throw.
}
if (!waitUntil) return;
waitUntil(postCentralWorkflowProjection(projection, env));

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@CLAUDE.md`:
- Around line 914-917: Update the Phase 5 checklist under "Phase 5: ChittyOS
Ecosystem Integration" so it matches the real implementation: change the
ChittyID bullet to indicate SSO integration is pending for production (note that
demo email/password remains the active path and must be listed as the dev
fallback), revert the ChittyChronicle bullet to pending (since projection code
now targets the central context service), and add a short note that production
requires replacing demo auth with ChittyID SSO while preserving email/password
as a dev fallback; reference the "Phase 5: ChittyOS Ecosystem Integration"
heading and the ChittyID and ChittyChronicle checklist items when making these
edits.

In `@server/lib/central-workflows.ts`:
- Around line 65-66: The idempotency_key currently built as
`workflow:${projection.aggregateId}:${projection.status}` is too coarse and
causes metadata-only PATCHes to collide; update the code that sets
idempotency_key to include a deterministic fingerprint of the change (e.g., a
hash of projection.metadata or the changed fields) or append a high-resolution
occurrence/timestamp to guarantee uniqueness for distinct updates; locate where
idempotency_key is constructed (the idempotency_key assignment using
projection.aggregateId and projection.status) and replace it with a string that
combines those existing parts plus either a metadata hash or occurrence value so
metadata-only updates aren't deduplicated.
- Around line 63-99: The posted event_type and error handling must be fixed: in
postCentralWorkflowProjection update the body.event_type to a contract-allowed
type (use an allowed prefix such as "task_projected" or map workflow.projected →
an approved event name per the central events contract) instead of
"workflow.projected"; make the idempotency_key more granular by including a
stable metadata fingerprint or projection.updatedAt (e.g.,
`workflow:${projection.aggregateId}:${projection.status}:${hash(projection.metadata)}`)
so metadata-only PATCHes aren’t deduped; and change the response handling (the
fetch block that checks res.ok) to throw or reject with a descriptive Error when
!res.ok (instead of only console.warn) so callers receive and can handle
failures.

In `@server/routes/import.ts`:
- Around line 512-567: The loop over parsed rows is doing per-row network calls
(hdproDeduplicationHash -> storage.getTransactionByExternalId and
storage.createTransaction) which serializes thousands of round trips; refactor
to process rows in chunks (e.g., 100-500) by (1) mapping the chunk to
externalIds via hdproDeduplicationHash, (2) calling a batch lookup like
storage.getTransactionsByExternalIds(externalIds) (add this method if missing)
to find existing externalIds in one call, (3) filter out duplicates in the chunk
and prepare create payloads, and (4) perform bulk inserts or parallelize
createTransaction calls for that filtered chunk using Promise.all /
Promise.allSettled (or add storage.createTransactions batch API) while still
updating jobNameStats, suspenseCount, and propertyMap lookups (entityKey,
classification via classifyHDProItem) before the batch insert; replace per-row
awaits of storage.getTransactionByExternalId and storage.createTransaction with
this chunked flow referencing hdproDeduplicationHash,
storage.getTransactionByExternalId/createTransaction, classifyHDProItem,
normalizeHDJobName, jobNameStats, and propertyMap.
- Around line 247-277: The current loop uses parseNum(fields[x]) and accesses up
to fields[20] even though it only guards for fields.length < 16; this will throw
when trailing columns are missing. Fix by either expanding the row-length guard
to require at least 21 columns before pushing (e.g., check fields.length >= 21)
or make parseNum robust to undefined/null (handle non-string inputs by treating
them as empty and returning 0) and then keep the existing guard; update
references in this block (parseNum, rows.push, fields, parseCSVLine) accordingly
so missing trailing columns no longer cause exceptions during import.
- Around line 588-595: The response currently returns 422 when imported === 0
even for successful dedupe-only imports; update the status logic in the JSON
response (the return c.json({...}, imported > 0 ? 200 : 422) block) to return
200 not only when imported > 0 but also when no real errors occurred and rows
were skipped (e.g. imported === 0 && skipped > 0 && errors.length === 0). Ensure
you reference the variables imported, skipped and errors to decide the status
code so idempotent re-uploads return 200.
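The suggested status selection can be sketched as a small helper (the function name `importStatus` is illustrative; `imported`, `skipped`, and `errors` are the variables named in the comment):

```typescript
// Sketch of the suggested status logic: a dedupe-only re-upload (nothing
// imported, rows skipped, no errors) is still a success and should be 200.
function importStatus(imported: number, skipped: number, errors: string[]): number {
  if (imported > 0) return 200;
  if (skipped > 0 && errors.length === 0) return 200;
  return 422;
}
```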
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4f65cd31-f29a-4906-ade7-c6c1e755f416

📥 Commits

Reviewing files that changed from the base of the PR and between d76f70b and a212965.

📒 Files selected for processing (6)
  • CLAUDE.md
  • server/env.ts
  • server/lib/central-workflows.ts
  • server/routes/import.ts
  • server/routes/reports.ts
  • server/routes/workflows.ts

Comment thread CLAUDE.md
Comment on lines +914 to +917
### Phase 5: ChittyOS Ecosystem Integration (Partially Completed)
- Replace demo auth with ChittyID (default login path with graceful email/password fallback)
- Expose financial data as MCP resources
- Log to ChittyChronicle (audit trail for classification + COA mutations)

⚠️ Potential issue | 🟡 Minor

Keep the Phase 5 checklist aligned with the current implementation.

Line 915 says ChittyID has replaced demo auth, but the same doc still describes email/password session auth as the active path and lists ChittyID as pending. Line 917 also marks ChittyChronicle complete, while this PR's new projection code targets the central context service instead. Please avoid marking these items done until the doc and code agree.

Based on learnings: Replace demo email/password authentication with ChittyID SSO integration before production deployment; maintain graceful fallback to email/password for development. Based on learnings: Follow ChittyOS integration points: ChittyID for authentication, ChittyConnect for multi-service integration, ChittyChronicle for audit logging, and ChittyCert for secure service connections.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@CLAUDE.md` around lines 914 - 917, Update the Phase 5 checklist under "Phase
5: ChittyOS Ecosystem Integration" so it matches the real implementation: change
the ChittyID bullet to indicate SSO integration is pending for production (note
that demo email/password remains the active path and must be listed as the dev
fallback), revert the ChittyChronicle bullet to pending (since projection code
now targets the central context service), and add a short note that production
requires replacing demo auth with ChittyID SSO while preserving email/password
as a dev fallback; reference the "Phase 5: ChittyOS Ecosystem Integration"
heading and the ChittyID and ChittyChronicle checklist items when making these
edits.

Comment on lines +63 to +99
    event_type: 'workflow.projected',
    source_system: 'chittyfinance',
    idempotency_key: `workflow:${projection.aggregateId}:${projection.status}`,
    occurred_at: now,
    actor_chitty_id: 'service:finance.chitty.cc',
    payload: {
      aggregateType: 'workflow',
      aggregateId: projection.aggregateId,
      tenantId: projection.tenantId,
      title: projection.title,
      description: projection.description ?? null,
      workflowType: projection.workflowType,
      status: projection.status,
      claimable: projection.claimable ?? defaultClaimable(projection.status),
      lane: projection.lane ?? 'progress',
      progressState: projection.progressState ?? defaultProgressState(projection.status),
      parentRef: projection.parentRef ?? `tenant:${projection.tenantId}`,
      fractalKey: `${projection.tenantId}:${projection.workflowType}:${projection.aggregateId}`,
      metadata: projection.metadata ?? {},
    },
  };

  try {
    const res = await fetch(`${base}/v1/events`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Source-Service': 'finance.chitty.cc',
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify(body),
      signal: AbortSignal.timeout(5000),
    });

    if (!res.ok) {
      console.warn(`[central-workflows] POST /v1/events returned ${res.status}`);
    }

⚠️ Potential issue | 🟠 Major

❓ Verification inconclusive

Does the current context.chitty.cc POST /v1/events contract accept event_type = "workflow.projected"? If not, what event types are currently supported?


No, the current context.chitty.cc POST /v1/events contract does not accept event_type = "workflow.projected". No public documentation or sources confirm the existence of this specific endpoint or list its supported event types. Searches across general web, ChittyOS repositories (like chittycontext and chittyid-client), and related domains (id.chitty.cc, registry.chitty.cc) yielded no matching API details. The endpoint appears to be either private/internal, non-existent, or undocumented publicly as of 2026-04-16.


Script executed:

# Find the documentation file mentioned in the review
find . -name "2026-03-03-central-sot-v1-hardening.md" -o -name "*central-sot*" -o -path "*/docs/plans/*" -type f | head -20

Repository: chittyapps/chittyfinance
Exit code: 0

stdout:

./docs/plans/2026-02-23-chittyfinance-hono-implementation.md
./docs/plans/2026-02-23-chittyfinance-hono-integration-design.md
./docs/plans/2026-02-26-phase4-property-financial-api-design.md
./docs/plans/2026-02-26-phase4-property-financial-api.md
./docs/plans/2026-02-26-property-portfolio-hub-design.md
./docs/plans/2026-02-26-property-portfolio-hub-plan.md
./docs/plans/2026-03-03-central-sot-v1-hardening.md

Script executed:

# Check if the file exists and read lines around 72-100
if [ -f "docs/plans/2026-03-03-central-sot-v1-hardening.md" ]; then
  wc -l "docs/plans/2026-03-03-central-sot-v1-hardening.md"
  echo "---"
  sed -n '60,110p' "docs/plans/2026-03-03-central-sot-v1-hardening.md"
fi

Repository: chittyapps/chittyfinance
Exit code: 0

stdout:

175 docs/plans/2026-03-03-central-sot-v1-hardening.md
---
- Gateway appends; projectors update task/state read models.

3. Session end
- Client sends `POST /v1/sessions/end`.
- Gateway computes summary/projection deltas and emits sync markers.
- Local mirrors pull from `GET /v1/contexts/{id}/state?since=...`.

4. Audit and trust
- All events are attributable to ChittyID.
- Trust/DNA calculators consume event stream asynchronously.
- Chronicle/ledger links are recorded as event metadata.

## 5) Minimal v1 Contracts

Required endpoints:
- `POST /v1/sessions/start`
- `POST /v1/sessions/end`
- `POST /v1/events`
- `GET /v1/contexts/{context_id}/state`
- `GET /v1/contexts/{context_id}/timeline`
- `POST /v1/sync/pull` (for local mirror reconciliation)

Required event classes:
- `session_started`, `session_ended`
- `checkpoint_saved`, `state_compacted`
- `task_created`, `task_updated`, `todo_updated`
- `decision_recorded`, `artifact_linked`
- `sync_conflict_detected`, `sync_conflict_resolved`

Idempotency and ordering:
- Unique constraint on `(source_system, idempotency_key)`
- Logical ordering field: `context_seq` assigned server-side
- Accept out-of-order ingress; replay ordered by `context_seq`

## 6) Security and Governance Baseline

AuthN/AuthZ:
- Service and user tokens from ChittyAuth
- Scope examples:
  - `context:write`
  - `context:read`
  - `context:admin`
  - `context:trust:compute`

Governance:
- ChittyID remains sole identity mint authority.
- Synthetic vs natural actors must both map to auditable ChittyIDs.
- All write calls require actor identity + channel identity.

Compliance controls:
- Immutable append ledger for events

Script executed:

# Search for workflow.projected references in the codebase
rg "workflow\.projected" --type ts --type tsx -A 3 -B 3

Repository: chittyapps/chittyfinance
Exit code: 2

stderr:

unrecognized file type: tsx

Script executed:

# Look at the actual central-workflows.ts file to understand context
wc -l server/lib/central-workflows.ts
head -70 server/lib/central-workflows.ts

Repository: chittyapps/chittyfinance
Exit code: 0

stdout:

116 server/lib/central-workflows.ts
/**
 * Central workflow/progress projector.
 *
 * Mirrors local workflow lifecycle events into the central, claimable
 * context/progress system when configured. This is intentionally
 * fire-and-forget so local app flows remain authoritative and responsive.
 */

interface CentralWorkflowEnv {
  CHITTY_CONTEXT_BASE?: string;
  CHITTY_CONTEXT_TOKEN?: string;
  CHITTY_AUTH_SERVICE_TOKEN?: string;
}

export interface CentralWorkflowProjection {
  aggregateId: string;
  tenantId: string;
  workflowType: string;
  title: string;
  status: string;
  description?: string | null;
  claimable?: boolean;
  lane?: 'approvals' | 'exceptions' | 'delegation' | 'close' | 'progress';
  progressState?: 'queued' | 'ready' | 'active' | 'blocked' | 'completed';
  parentRef?: string;
  metadata?: Record<string, unknown>;
}

function resolveContextBase(env: CentralWorkflowEnv): string {
  return (env.CHITTY_CONTEXT_BASE || 'https://context.chitty.cc').replace(/\/$/, '');
}

function defaultClaimable(status: string): boolean {
  return ['requested', 'approved', 'in_progress', 'ready', 'open', 'blocked'].includes(status);
}

function defaultProgressState(status: string): CentralWorkflowProjection['progressState'] {
  switch (status) {
    case 'requested':
      return 'ready';
    case 'approved':
    case 'in_progress':
      return 'active';
    case 'completed':
      return 'completed';
    case 'rejected':
      return 'blocked';
    default:
      return 'queued';
  }
}

export async function postCentralWorkflowProjection(
  projection: CentralWorkflowProjection,
  env: CentralWorkflowEnv,
): Promise<void> {
  const token = env.CHITTY_CONTEXT_TOKEN || env.CHITTY_AUTH_SERVICE_TOKEN;
  if (!token) return;

  const now = new Date().toISOString();
  const base = resolveContextBase(env);
  const body = {
    event_type: 'workflow.projected',
    source_system: 'chittyfinance',
    idempotency_key: `workflow:${projection.aggregateId}:${projection.status}`,
    occurred_at: now,
    actor_chitty_id: 'service:finance.chitty.cc',
    payload: {
      aggregateType: 'workflow',
      aggregateId: projection.aggregateId,

Script executed:

# Read the rest of central-workflows.ts to see the full POST implementation
tail -n +70 server/lib/central-workflows.ts | head -50

Repository: chittyapps/chittyfinance
Exit code: 0

stdout:

      aggregateId: projection.aggregateId,
      tenantId: projection.tenantId,
      title: projection.title,
      description: projection.description ?? null,
      workflowType: projection.workflowType,
      status: projection.status,
      claimable: projection.claimable ?? defaultClaimable(projection.status),
      lane: projection.lane ?? 'progress',
      progressState: projection.progressState ?? defaultProgressState(projection.status),
      parentRef: projection.parentRef ?? `tenant:${projection.tenantId}`,
      fractalKey: `${projection.tenantId}:${projection.workflowType}:${projection.aggregateId}`,
      metadata: projection.metadata ?? {},
    },
  };

  try {
    const res = await fetch(`${base}/v1/events`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'X-Source-Service': 'finance.chitty.cc',
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify(body),
      signal: AbortSignal.timeout(5000),
    });

    if (!res.ok) {
      console.warn(`[central-workflows] POST /v1/events returned ${res.status}`);
    }
  } catch (err) {
    console.warn('[central-workflows] projection failed:', err);
  }
}

export function centralWorkflowLog(
  c: { executionCtx: { waitUntil(p: Promise<unknown>): void } },
  projection: CentralWorkflowProjection,
  env: CentralWorkflowEnv,
): void {
  const promise = postCentralWorkflowProjection(projection, env);
  try {
    c.executionCtx.waitUntil(promise);
  } catch {
    // Test environment / non-Workers runtime.
  }
}

Script executed:

# Search for other workflow.projected or similar event posts
rg "workflow\.projected|event_type.*workflow|POST.*events" --type ts -A 3 -B 3

Repository: chittyapps/chittyfinance
Exit code: 0

stdout:

server/routes/google.ts-  }
server/routes/google.ts-});
server/routes/google.ts-
server/routes/google.ts:// POST /api/google/calendar/events — Create calendar event
server/routes/google.ts-googleRoutes.post('/api/google/calendar/events', async (c) => {
server/routes/google.ts-  const storage = c.get('storage');
server/routes/google.ts-  const tenantId = c.get('tenantId');
--
server/lib/central-workflows.ts-  const now = new Date().toISOString();
server/lib/central-workflows.ts-  const base = resolveContextBase(env);
server/lib/central-workflows.ts-  const body = {
server/lib/central-workflows.ts:    event_type: 'workflow.projected',
server/lib/central-workflows.ts-    source_system: 'chittyfinance',
server/lib/central-workflows.ts-    idempotency_key: `workflow:${projection.aggregateId}:${projection.status}`,
server/lib/central-workflows.ts-    occurred_at: now,
--
server/lib/central-workflows.ts-    });
server/lib/central-workflows.ts-
server/lib/central-workflows.ts-    if (!res.ok) {
server/lib/central-workflows.ts:      console.warn(`[central-workflows] POST /v1/events returned ${res.status}`);
server/lib/central-workflows.ts-    }
server/lib/central-workflows.ts-  } catch (err) {
server/lib/central-workflows.ts-    console.warn('[central-workflows] projection failed:', err);
--
client/src/components/property/GoogleCalendarEmbed.tsx-    e.preventDefault();
client/src/components/property/GoogleCalendarEmbed.tsx-    setSubmitting(true);
client/src/components/property/GoogleCalendarEmbed.tsx-    try {
client/src/components/property/GoogleCalendarEmbed.tsx:      await apiRequest('POST', '/api/google/calendar/events', {
client/src/components/property/GoogleCalendarEmbed.tsx-        calendarId: calendarId || 'primary',
client/src/components/property/GoogleCalendarEmbed.tsx-        summary,
client/src/components/property/GoogleCalendarEmbed.tsx-        description,

Script executed:

# Look at workflows.ts to understand when projections are emitted
wc -l server/routes/workflows.ts
grep -n "postCentralWorkflowProjection\|projection" server/routes/workflows.ts | head -20

Repository: chittyapps/chittyfinance
Exit code: 0

stdout:

146 server/routes/workflows.ts

Script executed:

# Read the workflows.ts file to see where centralWorkflowLog is called
cat -n server/routes/workflows.ts | grep -A 5 -B 5 "centralWorkflowLog\|CentralWorkflowProjection"

Repository: chittyapps/chittyfinance
Exit code: 0

stdout:

     1	import { Hono } from 'hono';
     2	import type { HonoEnv } from '../env';
     3	import { centralWorkflowLog } from '../lib/central-workflows';
     4	
     5	export const workflowRoutes = new Hono<HonoEnv>();
     6	
     7	// GET /api/workflows — List workflows by property
     8	workflowRoutes.get('/api/workflows', async (c) => {
--
    30	    costEstimate: body.costEstimate,
    31	    status: 'requested',
    32	    metadata: body.metadata,
    33	  });
    34	
    35	  centralWorkflowLog(c, {
    36	    aggregateId: workflow.id,
    37	    tenantId,
    38	    workflowType: workflow.type,
    39	    title: workflow.title,
    40	    description: workflow.description,
--
    63	    metadata: body.metadata,
    64	  });
    65	
    66	  if (!workflow) return c.json({ error: 'Workflow not found' }, 404);
    67	
    68	  centralWorkflowLog(c, {
    69	    aggregateId: workflow.id,
    70	    tenantId: workflow.tenantId,
    71	    workflowType: workflow.type,
    72	    title: workflow.title,
    73	    description: workflow.description,
--
    93	    metadata: { approvedAt: new Date().toISOString(), approvedBy: c.get('userId') },
    94	  });
    95	
    96	  if (!workflow) return c.json({ error: 'Workflow not found' }, 404);
    97	
    98	  centralWorkflowLog(c, {
    99	    aggregateId: workflow.id,
   100	    tenantId: workflow.tenantId,
   101	    workflowType: workflow.type,
   102	    title: workflow.title,
   103	    description: workflow.description,
--
   124	    metadata: { completedAt: new Date().toISOString() },
   125	  });
   126	
   127	  if (!workflow) return c.json({ error: 'Workflow not found' }, 404);
   128	
   129	  centralWorkflowLog(c, {
   130	    aggregateId: workflow.id,
   131	    tenantId: workflow.tenantId,
   132	    workflowType: workflow.type,
   133	    title: workflow.title,
   134	    description: workflow.description,

Script executed:

# Also check where the projection is created to understand the status field behavior
cat -n server/routes/workflows.ts | head -80

Repository: chittyapps/chittyfinance
Exit code: 0

stdout:

     1	import { Hono } from 'hono';
     2	import type { HonoEnv } from '../env';
     3	import { centralWorkflowLog } from '../lib/central-workflows';
     4	
     5	export const workflowRoutes = new Hono<HonoEnv>();
     6	
     7	// GET /api/workflows — List workflows by property
     8	workflowRoutes.get('/api/workflows', async (c) => {
     9	  const storage = c.get('storage');
    10	  const tenantId = c.get('tenantId');
    11	  const propertyId = c.req.query('propertyId');
    12	
    13	  const workflows = await storage.getWorkflows(tenantId, propertyId ?? undefined);
    14	  return c.json(workflows);
    15	});
    16	
    17	// POST /api/workflows — Create workflow (maintenance request, expense approval, etc.)
    18	workflowRoutes.post('/api/workflows', async (c) => {
    19	  const storage = c.get('storage');
    20	  const tenantId = c.get('tenantId');
    21	  const body = await c.req.json();
    22	
    23	  const workflow = await storage.createWorkflow({
    24	    tenantId,
    25	    propertyId: body.propertyId,
    26	    type: body.type || 'maintenance_request',
    27	    title: body.title,
    28	    description: body.description,
    29	    requestor: body.requestor,
    30	    costEstimate: body.costEstimate,
    31	    status: 'requested',
    32	    metadata: body.metadata,
    33	  });
    34	
    35	  centralWorkflowLog(c, {
    36	    aggregateId: workflow.id,
    37	    tenantId,
    38	    workflowType: workflow.type,
    39	    title: workflow.title,
    40	    description: workflow.description,
    41	    status: workflow.status,
    42	    claimable: true,
    43	    lane: workflow.status === 'requested' ? 'approvals' : 'progress',
    44	    metadata: {
    45	      propertyId: workflow.propertyId,
    46	      requestor: workflow.requestor,
    47	      costEstimate: workflow.costEstimate,
    48	      localTable: 'workflows',
    49	    },
    50	  }, c.env);
    51	
    52	  return c.json(workflow, 201);
    53	});
    54	
    55	// PATCH /api/workflows/:id — Update workflow status
    56	workflowRoutes.patch('/api/workflows/:id', async (c) => {
    57	  const storage = c.get('storage');
    58	  const id = c.req.param('id');
    59	  const body = await c.req.json();
    60	
    61	  const workflow = await storage.updateWorkflow(id, {
    62	    status: body.status,
    63	    metadata: body.metadata,
    64	  });
    65	
    66	  if (!workflow) return c.json({ error: 'Workflow not found' }, 404);
    67	
    68	  centralWorkflowLog(c, {
    69	    aggregateId: workflow.id,
    70	    tenantId: workflow.tenantId,
    71	    workflowType: workflow.type,
    72	    title: workflow.title,
    73	    description: workflow.description,
    74	    status: workflow.status,
    75	    lane: workflow.status === 'requested' ? 'approvals' : 'progress',
    76	    metadata: {
    77	      propertyId: workflow.propertyId,
    78	      localTable: 'workflows',
    79	      syncSource: 'PATCH /api/workflows/:id',
    80	    },

workflow.projected is not in the documented central events contract and will fail silently.

The POST /v1/events contract documented in docs/plans/2026-03-03-central-sot-v1-hardening.md:72-100 restricts event types to session_*, checkpoint_saved, state_compacted, task_*, decision_recorded, artifact_linked, and sync_conflict_*. Posting workflow.projected will return a 4xx, but failures are only logged as warnings, so the feature silently becomes a no-op since postCentralWorkflowProjection does not propagate errors.

Additionally, the idempotency key (built from aggregateId and status only) is too coarse: metadata-only PATCH updates with the same status will be deduplicated away, leaving stale projected state in the central system.
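One possible remapping onto the documented event classes can be sketched as follows (the helper name `toCentralEventType` is hypothetical; `task_created` and `task_updated` come from the contract excerpt quoted in the verification output):

```typescript
// Hypothetical mapping from the internal "workflow.projected" concept onto
// event classes the central contract actually allows (task_created /
// task_updated, per docs/plans/2026-03-03-central-sot-v1-hardening.md).
type AllowedEventType = 'task_created' | 'task_updated';

function toCentralEventType(status: string): AllowedEventType {
  // Treat the initial "requested" projection as creation; everything else
  // (approval, completion, metadata changes) as an update.
  return status === 'requested' ? 'task_created' : 'task_updated';
}
```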

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/lib/central-workflows.ts` around lines 63 - 99, The posted event_type
and error handling must be fixed: in postCentralWorkflowProjection update the
body.event_type to a contract-allowed type (use an allowed prefix such as
"task_projected" or map workflow.projected → an approved event name per the
central events contract) instead of "workflow.projected"; make the
idempotency_key more granular by including a stable metadata fingerprint or
projection.updatedAt (e.g.,
`workflow:${projection.aggregateId}:${projection.status}:${hash(projection.metadata)}`)
so metadata-only PATCHes aren’t deduped; and change the response handling (the
fetch block that checks res.ok) to throw or reject with a descriptive Error when
!res.ok (instead of only console.warn) so callers receive and can handle
failures.

Comment on lines +65 to +66
    idempotency_key: `workflow:${projection.aggregateId}:${projection.status}`,
    occurred_at: now,

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -type f -name "*.ts" | grep -E "(central-workflows|workflows)" | head -20

Repository: chittyapps/chittyfinance

Length of output: 130


🏁 Script executed:

cat -n server/lib/central-workflows.ts | head -100

Repository: chittyapps/chittyfinance

Length of output: 3984


🏁 Script executed:

cat -n server/routes/workflows.ts | head -150

Repository: chittyapps/chittyfinance

Length of output: 5331


🏁 Script executed:

cat -n server/storage.ts | grep -A 30 "updateWorkflow"

Repository: chittyapps/chittyfinance

Length of output: 117


🏁 Script executed:

rg "updateWorkflow" --type ts -B 2 -A 10 server/

Repository: chittyapps/chittyfinance

Length of output: 3038


🏁 Script executed:

cat -n server/storage/system.ts | sed -n '100,200p'

Repository: chittyapps/chittyfinance

Length of output: 3936


🏁 Script executed:

rg "title|description" server/routes/workflows.ts -B 2 -A 2

Repository: chittyapps/chittyfinance

Length of output: 1043


🏁 Script executed:

cat -n server/lib/central-workflows.ts | sed -n '60,85p'

Repository: chittyapps/chittyfinance

Length of output: 1306


🏁 Script executed:

rg "centralWorkflowLog" server/lib/central-workflows.ts -B 5 -A 15

Repository: chittyapps/chittyfinance

Length of output: 514


Include more change information in the idempotency key to avoid colliding metadata updates.

The current key only varies by aggregateId and status. Since PATCH /api/workflows/:id can update metadata while keeping the same status, multiple updates with different metadata produce identical idempotency keys. Under the upstream (source_system, idempotency_key) uniqueness constraint, subsequent metadata-only updates will be deduplicated and dropped, leaving the central state stale.

Consider including a hash of the changed fields or metadata in the key, or using a combination of status and a timestamp/occurrence to ensure each distinct change gets its own deduplication window.
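One possible shape for the finer-grained key (a sketch, not the project's code; note that JSON.stringify only yields a stable fingerprint when metadata is serialized with consistent key order):

```typescript
import { createHash } from 'node:crypto';

// Hypothetical finer-grained key: aggregateId + status + a short fingerprint
// of the metadata, so metadata-only updates are no longer deduplicated away
// by the upstream (source_system, idempotency_key) uniqueness constraint.
function idempotencyKey(
  aggregateId: string,
  status: string,
  metadata: Record<string, unknown>,
): string {
  const fingerprint = createHash('sha256')
    .update(JSON.stringify(metadata))
    .digest('hex')
    .slice(0, 12);
  return `workflow:${aggregateId}:${status}:${fingerprint}`;
}
```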

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/lib/central-workflows.ts` around lines 65 - 66, The idempotency_key
currently built as `workflow:${projection.aggregateId}:${projection.status}` is
too coarse and causes metadata-only PATCHes to collide; update the code that
sets idempotency_key to include a deterministic fingerprint of the change (e.g.,
a hash of projection.metadata or the changed fields) or append a high-resolution
occurrence/timestamp to guarantee uniqueness for distinct updates; locate where
idempotency_key is constructed (the idempotency_key assignment using
projection.aggregateId and projection.status) and replace it with a string that
combines those existing parts plus either a metadata hash or occurrence value so
metadata-only updates aren't deduplicated.

Comment thread server/routes/import.ts
Comment on lines +247 to +277
  for (const line of dataLines) {
    const fields = parseCSVLine(line);
    if (fields.length < 16) continue;

    const parseNum = (s: string) => {
      const cleaned = s.replace(/[$,]/g, '').trim();
      return cleaned ? parseFloat(cleaned) : 0;
    };

    rows.push({
      date: fields[0],
      storeNumber: fields[1],
      transactionId: fields[2],
      registerNumber: fields[3],
      jobName: fields[4],
      skuNumber: fields[5],
      skuDescription: fields[6],
      quantity: parseNum(fields[7]) || 1,
      unitPrice: parseNum(fields[8]),
      departmentName: fields[9] || '',
      className: fields[10] || '',
      subclassName: fields[11] || '',
      programDiscountAmount: parseNum(fields[12]),
      otherDiscountAmount: parseNum(fields[14]),
      extendedRetail: parseNum(fields[15]),
      netUnitPrice: parseNum(fields[16]),
      internetSku: fields[17] || '',
      purchaser: fields[18] || '',
      orderNumber: fields[19] || '',
      invoiceNumber: fields[20] || '',
    });

⚠️ Potential issue | 🟠 Major

Handle short HD Pro rows before numeric parsing.

The guard at Line 249 only requires 16 columns, but Lines 269-276 read up through fields[20] and call parseNum(fields[16]). A row with missing trailing columns will throw on s.replace(...) and abort the whole import instead of being skipped.

Suggested fix
-    if (fields.length < 16) continue;
+    if (fields.length < 21) continue;

-    const parseNum = (s: string) => {
-      const cleaned = s.replace(/[$,]/g, '').trim();
+    const parseNum = (s?: string) => {
+      const cleaned = (s ?? '').replace(/[$,]/g, '').trim();
       return cleaned ? parseFloat(cleaned) : 0;
     };
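For reference, the hardened parseNum from the suggested fix can be exercised standalone:

```typescript
// Tolerant of missing trailing CSV columns (undefined) as well as
// currency formatting ($ and thousands separators).
const parseNum = (s?: string): number => {
  const cleaned = (s ?? '').replace(/[$,]/g, '').trim();
  return cleaned ? parseFloat(cleaned) : 0;
};
```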
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/routes/import.ts` around lines 247 - 277, The current loop uses
parseNum(fields[x]) and accesses up to fields[20] even though it only guards for
fields.length < 16; this will throw when trailing columns are missing. Fix by
either expanding the row-length guard to require at least 21 columns before
pushing (e.g., check fields.length >= 21) or make parseNum robust to
undefined/null (handle non-string inputs by treating them as empty and returning
0) and then keep the existing guard; update references in this block (parseNum,
rows.push, fields, parseCSVLine) accordingly so missing trailing columns no
longer cause exceptions during import.

Comment thread server/routes/import.ts
Comment on lines +512 to +567
```typescript
for (const row of parsed) {
  // Skip zero-amount or return-only rows
  const amount = row.netUnitPrice || row.extendedRetail || row.unitPrice;
  if (!amount && amount !== 0) continue;

  const externalId = await hdproDeduplicationHash(row.transactionId, row.skuNumber, row.date);
  const existing = await storage.getTransactionByExternalId(externalId, tenantId);
  if (existing) {
    skipped++;
    continue;
  }

  const entityKey = normalizeHDJobName(row.jobName);
  const classification = classifyHDProItem(row.departmentName, row.className, row.subclassName, row.skuDescription);
  const isSuspense = entityKey === 'suspense' || classification.code === '9010';
  if (isSuspense) suspenseCount++;

  // Track job name stats for the response
  const rawJob = row.jobName.trim() || '(blank)';
  if (!jobNameStats[rawJob]) jobNameStats[rawJob] = { count: 0, entity: entityKey };
  jobNameStats[rawJob].count++;

  // Resolve property if applicable
  const prop = propertyMap[entityKey];

  try {
    await storage.createTransaction({
      tenantId,
      accountId,
      amount: String(Math.abs(amount)),
      type: amount >= 0 ? 'expense' : 'income', // HD Pro purchases are expenses; negative = returns
      category: row.departmentName || 'home_depot',
      description: `HD Pro: ${row.skuDescription}`,
      date: new Date(row.date),
      payee: 'Home Depot',
      propertyId: prop?.id || null,
      externalId,
      suggestedCoaCode: classification.code,
      classificationConfidence: String(isSuspense ? 0.100 : classification.confidence),
      metadata: {
        source: 'hdpro',
        storeNumber: row.storeNumber,
        transactionId: row.transactionId,
        skuNumber: row.skuNumber,
        department: row.departmentName,
        class: row.className,
        subclass: row.subclassName,
        quantity: row.quantity,
        unitPrice: row.unitPrice,
        discounts: row.programDiscountAmount + row.otherDiscountAmount,
        purchaser: row.purchaser,
        invoiceNumber: row.invoiceNumber,
        jobName: row.jobName,
        normalizedEntity: entityKey,
      },
    });
```
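The dedup key above comes from `hdproDeduplicationHash`, whose body isn't shown in this diff. One plausible shape, assuming the SHA-256 dedup called out in the PR summary is taken over the fields that uniquely identify one SKU line within a transaction (the real helper is awaited, so the Workers build likely uses async Web Crypto; this Node sketch is synchronous for clarity):

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch of hdproDeduplicationHash: a stable SHA-256 over the
// (transactionId, skuNumber, date) triple. The "hdpro:" prefix and ":"
// delimiters are assumptions that keep distinct field splits from colliding.
function hdproDeduplicationHash(transactionId: string, skuNumber: string, date: string): string {
  return createHash("sha256")
    .update(`hdpro:${transactionId}:${skuNumber}:${date}`)
    .digest("hex"); // 64-char hex string used as externalId
}
```

Any stable serialization works, as long as re-importing the same CSV row always reproduces the same `externalId`.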
⚠️ Potential issue | 🟠 Major

This 3K-row import path is still fully serialized.

Inside the loop each row does one getTransactionByExternalId(...) and usually one createTransaction(...), both awaited sequentially. For the 3K+ HD Pro exports called out in the PR, that becomes thousands of network round-trips on a single request and is likely to hit Workers/Neon latency limits. Please batch duplicate detection and insert work in chunks instead of processing the entire file row-by-row.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/routes/import.ts` around lines 512-567: the loop over parsed rows makes per-row network calls (hdproDeduplicationHash → storage.getTransactionByExternalId and storage.createTransaction), serializing thousands of round trips. Refactor to process rows in chunks (e.g., 100-500):

1. Map the chunk to externalIds via hdproDeduplicationHash.
2. Call a batch lookup such as storage.getTransactionsByExternalIds(externalIds) (add this method if missing) to find existing externalIds in one call.
3. Filter out duplicates in the chunk and prepare create payloads.
4. Perform bulk inserts or parallelize createTransaction calls for the filtered chunk with Promise.all / Promise.allSettled (or add a storage.createTransactions batch API), still updating jobNameStats, suspenseCount, and the propertyMap lookups (entityKey, classification via classifyHDProItem) before the batch insert.

Replace the per-row awaits of storage.getTransactionByExternalId and storage.createTransaction with this chunked flow.
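The chunked flow described above can be sketched as follows. The `hashRow`, `lookupExisting`, and `insertBatch` callbacks are hypothetical injected dependencies standing in for `hdproDeduplicationHash` and the batch storage APIs the prompt suggests adding; the real route would also thread jobNameStats and suspenseCount through step (3):

```typescript
type Row = { transactionId: string; skuNumber: string; date: string };

// Sketch: replace per-row awaits with chunked hash → batch lookup → bulk insert.
async function importInChunks(
  rows: Row[],
  chunkSize: number,
  hashRow: (r: Row) => Promise<string>,                      // stands in for hdproDeduplicationHash
  lookupExisting: (ids: string[]) => Promise<Set<string>>,   // assumed batch dedup lookup
  insertBatch: (batch: { row: Row; externalId: string }[]) => Promise<void>, // assumed bulk insert
): Promise<{ imported: number; skipped: number }> {
  let imported = 0;
  let skipped = 0;
  for (let i = 0; i < rows.length; i += chunkSize) {
    const chunk = rows.slice(i, i + chunkSize);
    // (1) hash the whole chunk in parallel
    const ids = await Promise.all(chunk.map(hashRow));
    // (2) one round trip to find which externalIds already exist
    const existing = await lookupExisting(ids);
    // (3) drop duplicates and pair survivors with their externalId
    const fresh = chunk
      .map((row, j) => ({ row, externalId: ids[j] }))
      .filter((x) => !existing.has(x.externalId));
    skipped += chunk.length - fresh.length;
    // (4) one bulk insert per chunk instead of one insert per row
    if (fresh.length) await insertBatch(fresh);
    imported += fresh.length;
  }
  return { imported, skipped };
}
```

A 3K-row file at a chunk size of 250 then costs roughly 24 round trips instead of ~6,000, and a second upload of the same file skips every row.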

Comment thread: `server/routes/import.ts`, lines +588 to +595
```typescript
return c.json({
  parsed: parsed.length,
  imported,
  skipped,
  suspenseCount,
  errors: errors.slice(0, 50),
  jobNameMapping: jobNameStats,
}, imported > 0 ? 200 : 422);
```
⚠️ Potential issue | 🟠 Major

Duplicate-only reimports should not return 422.

When dedup works and every row is skipped, imported stays 0, so this responds with 422 even though the import succeeded as a no-op. That makes idempotent re-uploads look like failures and breaks the dedup verification flow in the PR.

Suggested fix

```diff
-  return c.json({
+  const status = imported > 0 || skipped > 0 ? 200 : 422;
+  return c.json({
     parsed: parsed.length,
     imported,
     skipped,
     suspenseCount,
     errors: errors.slice(0, 50),
     jobNameMapping: jobNameStats,
-  }, imported > 0 ? 200 : 422);
+  }, status);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/routes/import.ts` around lines 588-595: the response currently returns 422 when imported === 0, even for a successful dedupe-only import. Update the status logic in the return c.json({...}, imported > 0 ? 200 : 422) block to return 200 not only when imported > 0 but also when rows were skipped and no real errors occurred (imported === 0 && skipped > 0 && errors.length === 0). Base the status on the imported, skipped, and errors variables so idempotent re-uploads return 200.
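That status decision is small enough to isolate into a pure helper. A sketch, assuming the `imported`, `skipped`, and `errors` counters from the route (the stricter `errorCount === 0` condition follows the AI-agent prompt rather than the committable diff, which accepts any `skipped > 0`):

```typescript
// Sketch: a dedupe-only re-upload (nothing imported, rows skipped, no errors)
// is a successful no-op and should return 200, not 422.
function importStatus(imported: number, skipped: number, errorCount: number): 200 | 422 {
  if (imported > 0) return 200;
  if (skipped > 0 && errorCount === 0) return 200; // idempotent re-upload
  return 422;                                      // nothing usable in the file
}
```

The route would then end with `return c.json({...}, importStatus(imported, skipped, errors.length));`, keeping the response body unchanged.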
