diff --git a/.agents/skills/automatic-code-review/SKILL.md b/.agents/skills/automatic-code-review/SKILL.md new file mode 100644 index 0000000..3ca2453 --- /dev/null +++ b/.agents/skills/automatic-code-review/SKILL.md @@ -0,0 +1,106 @@ +--- +name: automatic-code-review +description: How to conduct and receive effective code reviews. Use when reviewing a PR, requesting review, or improving review quality. +authors: + - Automatic +--- + +# Code Review + +Code review is a quality gate and a knowledge-sharing mechanism. Its purpose is to catch defects, surface design problems, and spread understanding — not to enforce style preferences or demonstrate superiority. + +## Mindset + +Review the code, not the person. Every comment should serve the goal of shipping better software. Assume good intent; ask questions before making accusations. + +--- + +## What to Look For + +### Correctness (highest priority) +- Does the code do what it claims to do? +- Are there edge cases that are not handled? (null, empty, overflow, concurrency) +- Are error states handled explicitly, or silently swallowed? +- Does it match the requirements or specification? + +### Security +- Is user input validated and sanitised before use? +- Are secrets or credentials ever logged or exposed? +- Are permissions and authorisation checks in the right place? +- See the `security-review` skill for a full checklist + +### Design +- Does the change fit the existing architecture, or does it introduce inconsistency? +- Is responsibility correctly assigned — does each function/class do one thing? +- Are dependencies pointing in the right direction? +- Would a future maintainer understand this without asking the author? + +### Testability and tests +- Is the new code covered by tests? +- Do the tests verify behaviour, not implementation? +- Would a failing test give a clear signal about what broke? + +### Performance +- Are there obvious algorithmic issues (N+1 queries, O(n²) in a hot path)? +- Are expensive operations cached where appropriate? +- For critical paths, is there evidence of measurement rather than assumption? + +### Maintainability +- Are names clear and specific? +- Is duplication introduced that could be extracted? +- Are there comments that explain *why*, not just *what*? +- Is dead code removed? + +--- + +## How to Comment + +**Be specific.** Vague comments waste everyone's time. + +``` +// Bad: "This could be better" +// Good: "This will execute a database query for every item in the list. +// Consider fetching all items in a single query before the loop." +``` + +**Label the severity.** Not all comments are blockers. + +- `[blocking]` — must be resolved before merge; correctness or security issue +- `[suggestion]` — improvement worth discussing; not required +- `[nit]` — minor style or preference; take it or leave it +- `[question]` — asking for understanding, not requesting a change + +**Suggest, do not just criticise.** If you identify a problem, offer a direction for fixing it. + +--- + +## Scope + +Review what was changed. Do not block a PR because of pre-existing issues in surrounding code — create a separate ticket for those. Do not add scope to a PR during review. + +--- + +## For Authors + +Before requesting review: + +- [ ] Self-review the diff first — catch your own obvious mistakes +- [ ] The PR description explains *why*, not just *what* changed +- [ ] Tests are included and passing +- [ ] No debug code, commented-out blocks, or TODOs left in without a ticket reference +- [ ] The change is as small as possible — large PRs produce low-quality reviews + +When receiving feedback: + +- Respond to every comment — either with a change or a rationale for not changing +- Do not silently resolve comments you disagree with; discuss them +- `[nit]` comments are optional — you do not need to justify ignoring them + +--- + +## What Code Review Is Not + +- A style guide enforcement tool (use a linter for that) +- A place to redesign the entire system +- A gatekeeping exercise +- A substitute for automated testing diff --git a/.agents/skills/automatic-debugging/SKILL.md b/.agents/skills/automatic-debugging/SKILL.md new file mode 100644 index 0000000..2dec565 --- /dev/null +++ b/.agents/skills/automatic-debugging/SKILL.md @@ -0,0 +1,92 @@ +--- +name: automatic-debugging +description: Systematic process for diagnosing and resolving defects. Use when debugging failures, investigating errors, or reproducing issues. +authors: + - Automatic +--- + +# Systematic Debugging + +A disciplined process for diagnosing and resolving defects. Guessing wastes time. Systematic investigation finds root causes. + +## The Process + +### 1. Reproduce first +Before touching code, confirm you can reproduce the failure consistently. A defect you cannot reproduce reliably cannot be safely fixed. + +- Identify the exact inputs, environment, and sequence that trigger the issue +- Determine if it is deterministic or intermittent +- Record the actual vs. expected behaviour precisely + +### 2. Understand before fixing +Read the error message and stack trace fully. Do not skip lines. The first error is usually the cause; later errors are often consequences. + +- Locate the exact file, line, and call that failed +- Read the surrounding code — not just the failing line +- Check recent changes: `git log --oneline -20`, `git diff HEAD~5` + +### 3. Form a hypothesis +State a specific, testable hypothesis: *"I believe X is happening because Y."* Do not proceed without one. + +- Hypotheses must be falsifiable — if you cannot test it, it is not a hypothesis +- Start with the simplest explanation consistent with the evidence +- Avoid hypothesising about multiple unrelated causes at once + +### 4. Gather evidence +Prove or disprove the hypothesis with minimal, targeted changes. + +```bash +# Add temporary logging at the point of failure +# Check actual values, not assumed values +# Use the debugger — step through, don't assume execution flow +``` + +- Add logging that reveals state, not just "got here" messages +- Use the smallest possible test case that reproduces the issue +- Check: inputs going in, outputs coming out, state at failure point + +### 5. Fix the cause, not the symptom +Once the root cause is confirmed, fix it directly. + +- Removing an assertion to stop a test failing is not a fix +- Adding a `null` guard around a crash is not a fix if the null should never occur +- If the fix feels like a workaround, it probably is — keep investigating + +### 6. Verify and prevent regression +- Confirm the fix resolves the original reproduction case +- Add a test that would have caught this defect +- Consider whether the same class of bug exists elsewhere + +--- + +## Common Patterns + +### Intermittent failures +- Likely causes: race conditions, timing dependencies, uninitialized state, external service variability +- Strategy: add logging to capture state at the moment of failure; increase test runs; check for shared mutable state + +### "It works on my machine" +- Likely causes: environment differences (OS, language version, dependencies, env vars, file paths, timezone) +- Strategy: diff the environments explicitly; check `.env`, lock files, system dependencies; reproduce in a container + +### Regression (worked before, broken now) +- Start with `git bisect` to isolate the breaking commit +- Read that commit's diff with fresh eyes + +### Null / undefined errors +- Find where the value is set, not where it is read +- Ask: should this value ever be null? If not, find why it is + +### Performance degradation +- Measure before you optimise — identify the actual bottleneck with profiling data, not intuition +- See the `performance-profiling` skill + +--- + +## What Not to Do + +- **Do not comment out failing code** to make tests pass +- **Do not add `sleep` or retry loops** to hide timing issues +- **Do not ignore warnings** — they are often early indicators of the real problem +- **Do not fix multiple things at once** — you will not know which change resolved the issue +- **Do not assume** — verify every assumption with evidence diff --git a/.agents/skills/automatic-documentation/SKILL.md b/.agents/skills/automatic-documentation/SKILL.md new file mode 100644 index 0000000..045a738 --- /dev/null +++ b/.agents/skills/automatic-documentation/SKILL.md @@ -0,0 +1,108 @@ +--- +name: automatic-documentation +description: Principles for writing READMEs, API docs, ADRs, code comments, and changelogs. Use when creating or improving any technical documentation. +authors: + - Automatic +--- + +# Technical Documentation + +Documentation is a product. It requires the same deliberateness as code. Outdated, incomplete, or misleading documentation is worse than none — it wastes time and creates false confidence. + +## Principles + +**Write for the reader, not the writer.** The author already knows how the system works. Documentation exists to transfer that understanding to someone who does not. Every sentence should serve that goal. + +**Document decisions, not just descriptions.** Code describes what the system does. Documentation should explain *why* it does it that way. The most valuable documentation captures constraints, trade-offs, and rejected alternatives. + +**Treat documentation as a deliverable.** A feature is not complete until it is documented. Documentation that is written later rarely gets written. + +--- + +## Types of Documentation + +### README +The entry point for any project. Must answer: +- What does this project do? +- How do I get it running locally? +- How do I run the tests? +- Where do I go for more information? + +Keep it short. Link to deeper documentation. It should take under 5 minutes to read. + +### API documentation +Document every public interface: functions, REST endpoints, event schemas. + +For each item, include: +- **Purpose** — what it does in one sentence +- **Parameters / fields** — name, type, whether required, valid values or range, default +- **Return value / response** — what is returned and under what conditions +- **Errors** — what error conditions can occur and what they mean +- **Example** — a concrete, working example + +### Architecture decision records (ADRs) +Record significant technical decisions as short, dated documents: + +```markdown +# ADR-042: Use PostgreSQL for primary data store + +**Status**: Accepted +**Date**: 2025-03-02 + +## Context +We needed a relational database that supports ... + +## Decision +We will use PostgreSQL 16 because ... + +## Consequences +- We gain: JSONB support, strong ACID guarantees, mature tooling +- We accept: operational complexity vs. a managed service +- We reject: MySQL (weaker JSON support), SQLite (no concurrent writes) +``` + +ADRs are permanent. Even superseded decisions should be marked "Superseded" and kept — the reasoning matters. + +### Code comments +Comment *why*, not *what*. The code says what is happening; comments explain why it happens that way. + +``` +// Good: "Skip validation here — this path is only reachable from the +// internal job queue, which has already validated the payload." + +// Bad: "Skip validation" +// Bad: "Call the validate function" (the next line says that) +``` + +Comment anything surprising, non-obvious, or that required a decision that is not evident from the code itself. + +### Changelogs +Record what changed for users, not for developers. + +- Group by release version and date +- Categorise: Added, Changed, Deprecated, Removed, Fixed, Security +- Link to issues or PRs for context +- Never use "Various bug fixes" — be specific about what was fixed + +--- + +## Writing Style + +**Use plain English.** Avoid jargon where a simpler word exists. If jargon is necessary, define it on first use. + +**Short sentences.** Break long sentences into two. If a sentence requires re-reading, rewrite it. + +**Active voice.** "The server validates the token" not "The token is validated by the server." + +**Present tense.** "Returns the user object" not "Will return the user object." + +**Examples are mandatory.** Abstract descriptions without examples make readers do extra work. Show, then tell. + +--- + +## Maintenance + +- Documentation lives alongside the code it describes — in the same repository +- Documentation changes are part of the same PR as the code changes +- Broken documentation is a bug — file it and fix it +- Deprecate documented features explicitly; do not silently remove documentation diff --git a/.agents/skills/automatic-features/SKILL.md b/.agents/skills/automatic-features/SKILL.md new file mode 100644 index 0000000..1de9164 --- /dev/null +++ b/.agents/skills/automatic-features/SKILL.md @@ -0,0 +1,236 @@ +--- +name: automatic-features +description: How to use Automatic's feature tracking system — creating, progressing, and updating per-project work items via the Automatic MCP service. Activate when working on a project managed by Automatic that has feature tracking enabled. +authors: + - Automatic +--- + +# Automatic Features — Agent Guide + +Automatic provides per-project **feature tracking**: a structured backlog of work items visible to the user in the Automatic UI and readable/writable by agents via MCP tools. + +Features are the shared source of truth for what needs to be built. Users create and prioritise features; agents pick them up, progress them, and log updates as work proceeds. + +## Feature Model + +Each feature has: + +| Field | Values | +|---|---| +| `id` | UUID (returned on create — store it for subsequent calls) | +| `title` | Short description of the work | +| `description` | Full markdown specification | +| `state` | `backlog` · `todo` · `in_progress` · `review` · `complete` | +| `priority` | `low` · `medium` · `high` | +| `effort` | `xs` · `s` · `m` · `l` · `xl` | +| `assignee` | Agent id or name (free text) | +| `tags` | List of strings for filtering | +| `linked_files` | File paths in the project this feature relates to | +| `updates` | Append-only log of markdown progress notes | + +## MCP Tools + +All tools require a `project` parameter matching the project name registered in Automatic. Use `automatic_list_projects` if you are unsure of the correct name. + +--- + +### `automatic_list_features` + +List all features for a project, grouped by state. + +``` +project: string — project name +state: string (opt) — filter to one state: backlog | todo | in_progress | review | complete +``` + +Returns titles, IDs, priorities, effort, and assignees. **Always call this at session start** with `state: "todo"` to see what is planned for you. + +--- + +### `automatic_get_feature` + +Get full detail for a single feature, including its description and complete update history. + +``` +project: string — project name +feature_id: string — UUID from list_features +``` + +Read this before starting work on a feature so you understand the full specification and any prior progress notes. + +--- + +### `automatic_create_feature` + +Create a new feature in the project backlog. + +``` +project: string — project name (required) +title: string — short title (required) +description: string (opt) — markdown specification +priority: string (opt) — low | medium | high (default: medium) +assignee: string (opt) — agent id or name +tags: string[] (opt) — searchable labels +linked_files: string[] (opt) — relevant file paths +effort: string (opt) — xs | s | m | l | xl +created_by: string (opt) — identifies which agent created it +``` + +Returns the created feature including its `id`. **Save the id** — you will need it for all subsequent calls. + +**When to use:** When you discover work during implementation that is not yet tracked. Capture it rather than doing it silently. + +--- + +### `automatic_update_feature` + +Update a feature's metadata fields. Omit any field to leave it unchanged. + +``` +project: string — project name +feature_id: string — UUID +title: string (opt) +description: string (opt) +priority: string (opt) +assignee: string (opt) +tags: string[] (opt) +linked_files: string[] (opt) +effort: string (opt) +``` + +**When to use:** To refine the description after investigating the codebase, or to correct priority/effort estimates. + +--- + +### `automatic_set_feature_state` + +Transition a feature to a new lifecycle state. + +``` +project: string — project name +feature_id: string — UUID +state: string — backlog | todo | in_progress | review | complete +``` + +**When to use:** Call this at the key transition points in your workflow (see below). Do not skip states — move through them in order so the user can follow progress in the UI. + +--- + +### `automatic_add_feature_update` + +Append a markdown progress note to a feature's update log. + +``` +project: string — project name +feature_id: string — UUID +content: string — markdown text (required) +author: string (opt) — agent id or name +``` + +Updates are append-only and timestamped. They appear in the Automatic UI so the user can follow what you are doing without asking. + +**When to use:** After each significant unit of work — a decision made, a blocker found, a sub-task completed. Be specific. Bad: *"Made progress."* Good: *"Implemented JWT validation in `src/auth/middleware.ts`. Chose HS256 over RS256 because no external verifier is needed."* + +--- + +### `automatic_delete_feature` + +Permanently delete a feature and all its updates. + +``` +project: string — project name +feature_id: string — UUID +``` + +Use sparingly. Prefer moving a feature to `backlog` over deleting it. + +--- + +## Standard Agent Workflow + +Follow this sequence for every feature-driven session: + +### 1. Orient + +``` +automatic_list_features(project: "my-project", state: "todo") +``` + +Identify the highest-priority feature to work on. If nothing is in `todo`, check `backlog` and ask the user which to start. + +### 2. Read the specification + +``` +automatic_get_feature(project: "my-project", feature_id: "") +``` + +Read the full description and all prior updates. Do not start work until you understand the full scope. + +### 3. Claim it + +``` +automatic_set_feature_state(project: "my-project", feature_id: "", state: "in_progress") +automatic_update_feature(project: "my-project", feature_id: "", assignee: "claude-code") +``` + +Move to `in_progress` and set `assignee` to your agent identity before touching any code. This signals to the user (and other agents) that the work is active and who is responsible for it. Use the same identifier consistently across all calls in the session (e.g. `"claude-code"`, `"cursor"`, `"gpt-4o"`). + +### 4. Work and log + +As you work, append updates after each meaningful step: + +``` +automatic_add_feature_update( + project: "my-project", + feature_id: "", + content: "Investigated the auth module. The existing `TokenService` at `src/services/token.ts` handles issuance but not validation. Will extend it rather than creating a new class.", + author: "claude-code" +) +``` + +Log at minimum: +- After investigating the codebase (what you found, what approach you chose) +- When you hit a blocker +- After completing a significant sub-task +- Before any major decision point + +### 5. Request review + +``` +automatic_set_feature_state(project: "my-project", feature_id: "", state: "review") +automatic_add_feature_update( + project: "my-project", + feature_id: "", + content: "Implementation complete. Changes: `src/auth/middleware.ts` (new validation), `src/routes/api.ts` (middleware applied), `tests/auth.test.ts` (6 new tests, all passing). Ready for review.", + author: "claude-code" +) +``` + +Always move to `review`, never to `complete`. The user marks features complete after they verify the work. + +### 6. Capture discovered work + +If you find additional work that was not part of the original feature: + +``` +automatic_create_feature( + project: "my-project", + title: "Refresh token expiry not enforced", + description: "Found during auth implementation. Refresh tokens are issued without an expiry check on use...", + priority: "high", + created_by: "claude-code" +) +``` + +Do not silently fix unscoped work. Create a feature for it so the user is aware. + +--- + +## Rules + +- **Always set assignee** — when claiming a feature, immediately call `automatic_update_feature` with your agent identity in `assignee`. Never leave a feature `in_progress` without an assignee. +- **Never mark a feature `complete`** — that is the user's call after reviewing your work. +- **Always log updates** — the user should be able to read the update history and understand exactly what you did and why, without asking. +- **One feature at a time** — move a feature to `in_progress` before starting it. Do not have multiple features `in_progress` simultaneously. +- **Read before starting** — always call `automatic_get_feature` before beginning work so you have the full specification and prior context. +- **Capture, don't silently fix** — if you discover unscoped work, create a backlog feature for it rather than doing it without the user's knowledge. diff --git a/.agents/skills/automatic-performance/SKILL.md b/.agents/skills/automatic-performance/SKILL.md new file mode 100644 index 0000000..94e1966 --- /dev/null +++ b/.agents/skills/automatic-performance/SKILL.md @@ -0,0 +1,104 @@ +--- +name: automatic-performance +description: Data-driven approach to identifying and resolving performance bottlenecks. Use when investigating slow queries, high latency, or memory growth. +authors: + - Automatic +--- + +# Performance Profiling + +Optimise only what you measure. Intuition about performance bottlenecks is frequently wrong. Every performance change must be preceded by data that identifies the actual constraint. + +## The Process + +### 1. Establish a baseline +Before changing anything, measure the current performance under realistic conditions. + +- Use production-representative data volumes and request patterns +- Record the metric you care about: p50, p95, p99 latency; throughput; memory usage; CPU time +- Run measurements multiple times and take medians — single measurements are noisy + +### 2. Profile, do not guess +Use a profiler to identify where time is actually spent. The results will surprise you. + +**CPU profiling** reveals which functions consume the most CPU time. Look for: +- Hot functions called far more often than expected +- Algorithmic complexity problems (O(n²) loops, exponential recursion) +- Unnecessary serialisation or deserialisation in hot paths + +**Memory profiling** reveals allocation patterns. Look for: +- Objects allocated and immediately discarded in tight loops +- Memory that grows without bound (leaks) +- Large allocations that could be pooled or reused + +**I/O and query profiling** — for most web applications, the bottleneck is database or network, not CPU. Look for: +- N+1 query patterns (one query per item in a loop) +- Missing indexes on frequently filtered or sorted columns +- Unbounded queries (no `LIMIT`, fetching far more data than needed) +- Synchronous blocking calls that could be concurrent + +### 3. Fix one thing at a time +Change one thing, re-measure, and compare against the baseline. If you change multiple things simultaneously, you cannot attribute the improvement. + +### 4. Verify the improvement +The improvement must be measurable in the same metric you established in step 1. If you cannot measure it, the optimisation did not work. + +--- + +## Common Bottlenecks + +### N+1 queries +Symptom: query count scales linearly with the number of records processed. + +``` +// N+1: one query to get orders, then one query per order to get the user +orders = db.query("SELECT * FROM orders") +for order in orders: + user = db.query("SELECT * FROM users WHERE id = ?", order.user_id) +``` + +Fix: fetch related data in a single join, or use batch loading. + +### Missing database indexes +Symptom: full table scans on large tables; slow queries on filtered or sorted columns. + +Diagnosis: run `EXPLAIN` on slow queries; look for sequential scans on large tables. + +Fix: add indexes on columns used in `WHERE`, `ORDER BY`, and `JOIN` conditions. Note that indexes have a write cost — add them deliberately. + +### Synchronous I/O in hot paths +Symptom: high latency, low CPU utilisation — the process is waiting. + +Fix: use async I/O; process independent requests concurrently; cache results of expensive I/O. + +### Unnecessary serialisation +Symptom: high CPU time in JSON encode/decode, XML parsing, or protocol buffer serialisation. + +Fix: cache serialised representations; reduce the size of serialised payloads; avoid serialising in tight loops. + +### Unbounded memory growth +Symptom: memory usage grows linearly with time or request count without stabilising. + +Fix: identify what is accumulating and why it is not being released. Common causes: event listeners not removed, caches without eviction policies, accumulating state in long-lived processes. + +--- + +## Caching + +Cache at the right level: +- **In-process** — fastest; lost on restart; not shared across instances +- **Distributed** (Redis, Memcached) — shared across instances; adds network latency +- **HTTP** (CDN, browser) — most scalable; only for public, cacheable responses + +Cache invalidation must be explicit. Stale data is a correctness problem, not just a performance one. Define TTLs deliberately; shorter TTLs are safer. + +Only cache what you have measured to be worth caching. Caches add complexity; they are not free. + +--- + +## What Not to Do + +- **Do not optimise prematurely** — write correct, clear code first; profile when there is a measured problem +- **Do not micro-optimise** — shaving 10ns from a function called 100 times/second saves 1µs/s; it is not worth the complexity +- **Do not cache without measuring** — if the operation is not a bottleneck, the cache buys nothing +- **Do not guess** — form a hypothesis, measure, act on evidence diff --git a/.agents/skills/automatic-refactoring/SKILL.md b/.agents/skills/automatic-refactoring/SKILL.md new file mode 100644 index 0000000..db90256 --- /dev/null +++ b/.agents/skills/automatic-refactoring/SKILL.md @@ -0,0 +1,108 @@ +--- +name: automatic-refactoring +description: Techniques for improving code structure without changing behaviour. Use when cleaning up code, reducing complexity, or addressing technical debt. +authors: + - Automatic +--- + +# Refactoring + +Refactoring is the process of changing code structure without changing its observable behaviour. The goal is to make the code easier to understand and modify. Refactoring is not rewriting, and it is not adding features. + +## The Rule + +**Never refactor and change behaviour at the same time.** If you are changing what code does and how it is structured in the same commit, you cannot tell which change introduced a regression. Do one thing at a time. + +--- + +## When to Refactor + +Refactor when: +- You are about to add a feature and the existing structure makes it harder than it should be +- You have just fixed a bug and the code that caused it is unclear +- Code review reveals that a change is hard to understand +- You are working in an area and find it difficult to follow + +Do not refactor when: +- There are no tests — add tests first, or you cannot verify you preserved behaviour +- You are under time pressure to ship a fix — stabilise first, improve later +- You do not understand what the code does — read it first + +--- + +## Recognising the Need + +### Code smells that warrant refactoring + +**Long function** — a function that does more than one thing. Extract the parts into named functions. + +**Long parameter list** — more than 3–4 parameters usually means the function is doing too much, or the parameters belong to an object. + +**Duplicated code** — the same logic appearing in two or more places. Extract and reuse. + +**Deep nesting** — more than 2–3 levels of indentation signals branching complexity. Use early returns, extract conditions, or restructure. + +**Inconsistent naming** — variables or functions named differently for the same concept. Pick one name and use it everywhere. + +**Magic numbers and strings** — unexplained literals. Extract to named constants. + +**Large class** — a class that has too many responsibilities. Split into focused classes. + +**Feature envy** — a function that uses more data from another class than its own. Move it. + +--- + +## Common Refactoring Moves + +### Extract function +Take a block of code with a clear purpose and move it into a named function. + +``` +// Before: comment explains what the next 10 lines do +// After: a function whose name replaces the comment +``` + +### Inline function +When a function's body is as clear as its name, remove the indirection. + +### Rename +Rename when the name does not accurately describe what the thing is or does. Good names eliminate the need for comments. + +### Extract variable +Replace a complex expression with a named variable that explains what it represents. + +### Replace condition with early return +Invert nested conditions to reduce indentation. + +``` +// Before: if (valid) { ... long block ... } +// After: if (!valid) return; ... logic at top level ... +``` + +### Replace magic number with constant +``` +// Before: if (retries > 3) +// After: if (retries > MAX_RETRY_ATTEMPTS) +``` + +### Move function or data +When a function uses more context from another module than its own, move it there. + +--- + +## Process + +1. **Verify tests pass** before starting +2. **Make one change at a time** — the smallest possible structural move +3. **Run tests after each change** — confirm behaviour is preserved +4. **Commit each change separately** — keeps the history readable and reversible +5. **Stop when the code is clear enough** for the next task — do not over-engineer + +--- + +## What Refactoring Is Not + +- **Rewriting** — if you are replacing the algorithm, that is a behaviour change +- **Adding features** — if you are adding new capability while restructuring, separate them +- **Optimising** — performance changes alter observable behaviour (timing); measure separately +- **Cosmetic formatting** — use an auto-formatter for that; do not mix with structural changes diff --git a/.agents/skills/automatic-security-review/SKILL.md b/.agents/skills/automatic-security-review/SKILL.md new file mode 100644 index 0000000..2a81165 --- /dev/null +++ b/.agents/skills/automatic-security-review/SKILL.md @@ -0,0 +1,108 @@ +--- +name: automatic-security-review +description: Security review checklist and threat mindset for any codebase. Use when reviewing code for vulnerabilities, implementing auth, or handling user input. +authors: + - Automatic +--- + +# Security Review + +Security defects are expensive to fix after deployment and can cause irreversible harm. Review for security at every stage — during design, code review, and before release. + +## Threat Mindset + +When reviewing code, ask: *what happens if an attacker controls this input?* Assume all external data is hostile until proven otherwise. External data includes: HTTP request parameters, headers, cookies, file uploads, database values, environment variables, inter-service messages, and anything read from the network or file system. + +--- + +## Input Validation + +- **Validate at the boundary** — check and sanitise all external input as soon as it enters the system +- **Allowlist, not blocklist** — define what is permitted; reject everything else +- **Validate type, length, format, and range** — all four, not just one +- **Never trust client-side validation alone** — it can be bypassed; replicate checks server-side + +--- + +## Injection + +Injection vulnerabilities occur when untrusted data is interpreted as code. + +**SQL injection** +- Use parameterised queries or prepared statements; never concatenate user input into SQL +- ORMs reduce risk but do not eliminate it — watch for raw query escape hatches + +**Command injection** +- Avoid passing user data to shell commands +- If unavoidable, use argument arrays (not string interpolation) and allowlist permitted values + +**Template injection** +- Do not render user-supplied strings as templates +- Escape output in templates; separate data from structure + +**Path traversal** +- Resolve and validate file paths against an allowlisted base directory before use +- Reject paths containing `..` or absolute path components + +--- + +## Authentication and Authorisation + +- **Authenticate** — verify *who* is making the request +- **Authorise** — verify they are *permitted* to perform the action on *that specific resource* +- Both checks must happen on the server side on every request +- Check authorisation at the resource level, not just the route level (e.g. user A must not be able to access user B's record by guessing an ID) +- Prefer short-lived tokens; rotate secrets on compromise + +--- + +## Secrets Management + +- **Never hardcode secrets** — no API keys, passwords, or tokens in source code +- **Never log secrets** — check that logging statements do not capture auth headers, tokens, or credentials +- **Use environment variables or a secrets manager** — not config files committed to the repo +- **Rotate secrets regularly** and immediately on suspected exposure +- **Audit `.gitignore`** — ensure `.env` and credential files are excluded + +--- + +## Cryptography + +- Do not implement your own cryptography — use established, audited libraries +- Use current recommended algorithms: AES-256-GCM for symmetric encryption, RSA-4096 or Ed25519 for asymmetric, bcrypt/scrypt/Argon2 for password hashing +- Never use MD5 or SHA-1 for security purposes +- Generate random values using a cryptographically secure RNG + +--- + +## Dependencies + +- Keep dependencies up to date — most CVEs have patches available on the day of disclosure +- Run a dependency vulnerability scanner regularly (`npm audit`, `cargo audit`, `pip-audit`, etc.) +- Minimise the number of dependencies — each one is an attack surface +- Pin versions in production; review changelogs on updates + +--- + +## Error Handling + +- Return generic error messages to external callers — do not expose stack traces, internal paths, or system information +- Log full details internally for diagnosis +- Treat failed security checks as hard stops — do not attempt to recover and continue + +--- + +## Common Vulnerabilities Checklist + +| Category | Check | +|---|---| +| Input validation | All external inputs validated and sanitised | +| SQL | Parameterised queries used throughout | +| Auth | Authentication and authorisation on every protected endpoint | +| Authorisation | Resource-level access checks (not just route-level) | +| Secrets | No secrets in source, logs, or error responses | +| Cryptography | Industry-standard algorithms; no homebrew crypto | +| Dependencies | No known CVEs in dependency tree | +| Error handling | Internal errors do not leak to external callers | +| File paths | Paths resolved and validated against an allowlist | +| Rate limiting | Sensitive endpoints (login, signup, password reset) rate-limited | diff --git a/.agents/skills/automatic-testing/SKILL.md b/.agents/skills/automatic-testing/SKILL.md new file mode 100644 index 0000000..c4775c2 --- /dev/null +++ b/.agents/skills/automatic-testing/SKILL.md @@ -0,0 +1,116 @@ +--- +name: automatic-testing +description: Principles and patterns for writing effective tests. Use when writing unit, integration, or end-to-end tests, or reviewing test quality. +authors: + - Automatic +--- + +# Writing Tests + +Tests are executable specifications. They prove that code does what it claims, and protect against future regressions. A test suite you do not trust is worse than none — it creates false confidence. + +## Principles + +**Test behaviour, not implementation.** Tests should describe what the code does, not how it does it. If refactoring internals breaks a test without changing observable behaviour, the test is wrong. + +**One reason to fail.** Each test should assert one logical outcome. Tests that check multiple unrelated things are hard to diagnose when they fail. + +**Tests must be deterministic.** A test that sometimes passes and sometimes fails is not a test — it is noise. Eliminate all sources of non-determinism: random seeds, timestamps, network calls, file system state. + +**Fast tests get run.** Slow tests get skipped. Keep unit tests under 50ms each. Use integration and end-to-end tests sparingly and deliberately. + +--- + +## Test Types + +### Unit tests +Test a single function or class in isolation. All dependencies are replaced with fakes, stubs, or mocks. + +- Aim: verify logic, edge cases, and error handling +- Speed: milliseconds +- Coverage target: all public functions, all branches + +### Integration tests +Test two or more real components working together. May use a real database, real file system, or real HTTP client against a test server. + +- Aim: verify that components communicate correctly +- Speed: seconds +- Coverage target: critical paths and data flows + +### End-to-end tests +Test the full system from the outside — as a user would interact with it. + +- Aim: verify that key user journeys work +- Speed: slow (seconds to minutes) +- Coverage target: happy paths and critical error paths only + +--- + +## Structure + +Follow the **Arrange / Act / Assert** pattern in every test: + +``` +// Arrange — set up the state required for the test +// Act — perform the single action being tested +// Assert — verify the outcome +``` + +Name tests as full sentences describing behaviour: + +``` +// Good: "returns an empty list when no items match the filter" +// Bad: "test_filter" or "testFilter" +``` + +--- + +## What to Test + +- **Happy path** — the expected behaviour with valid inputs +- **Edge cases** — empty collections, zero values, maximum values, boundary conditions +- **Error cases** — invalid inputs, missing dependencies, network failure, permission denied +- **Invariants** — properties that must always hold regardless of input + +## What Not to Test + +- Private implementation details (test the public API) +- Third-party library behaviour +- Framework internals +- Trivial getters and setters with no logic + +--- + +## Test Doubles + +Use the simplest double that meets your needs: + +| Type | Use when | +|------|----------| +| **Stub** | You need a dependency to return a specific value | +| **Fake** | You need a working but simplified implementation (e.g. in-memory database) | +| **Mock** | You need to verify that a specific call was made with specific arguments | +| **Spy** | You need to record calls without changing behaviour | + +Avoid mocking what you do not own. Mock at the boundary of your system. + +--- + +## Test Quality Checks + +Before committing a test, verify: + +- [ ] The test fails before the feature is implemented (if writing TDD) +- [ ] The test fails for the right reason when it fails +- [ ] Removing the production code makes the test fail +- [ ] The test name describes the behaviour, not the method +- [ ] There are no hardcoded sleeps or retries hiding flakiness +- [ ] External dependencies (network, time, randomness) are controlled + +--- + +## Maintaining Tests + +- Delete tests that no longer reflect the current design — do not comment them out +- When a bug is found, write a failing test that reproduces it before fixing it +- If a test is frequently skipped or disabled, treat that as a code smell diff --git a/.agents/skills/automatic/SKILL.md b/.agents/skills/automatic/SKILL.md new file mode 100644 index 0000000..e0addbc --- /dev/null +++ b/.agents/skills/automatic/SKILL.md @@ -0,0 +1,200 @@ +--- +name: automatic +description: How to use the Automatic MCP service — skill discovery, memory, project config, and session management. Activate at the start of every session in a project managed by Automatic. +authors: + - Automatic +--- + +# Automatic — Using the MCP Service + +Automatic is a desktop application that acts as a **skill registry, memory store, and MCP configuration hub** for AI agent tooling. It does not run agents itself. Instead, it serves data to agents on request via an MCP server interface. + +## When to Use Automatic + +Use the Automatic MCP tools when you need to: + +- Discover which skills are available in the user's skill registry +- Load a skill's instructions and discover its companion resources +- Search the community skills.sh registry for relevant skills +- Find MCP server configurations to suggest or apply +- Inspect or list the user's registered projects +- Check which Claude Code sessions are currently active +- Sync a project's configurations to its directory +- **Store, retrieve, or search long-term memory across sessions for a specific project** + +## Available MCP Tools + +Automatic exposes the following tools via the `nexus` MCP server (configured as `nexus mcp-serve`): + +### `automatic_list_skills` + +List all skill names currently registered in the user's skill registry (`~/.agents/skills/` and `~/.claude/skills/`). + +**When to use:** At the start of a session or task to discover what specialised instructions are available. If you find a relevant skill, read it with `automatic_read_skill`. + +--- + +### `automatic_read_skill` + +Read the full `SKILL.md` content of a specific skill. This also automatically discovers and returns a list of any companion resources (scripts, templates, examples, etc.) bundled in the skill directory. + +``` +name: string — the skill directory name, e.g. "laravel-specialist" +``` + +**When to use:** After identifying a relevant skill via `automatic_list_skills`. Load and follow the skill's instructions for the current task. + +--- + +### `automatic_search_skills` + +Search the [skills.sh](https://skills.sh) community registry for skills matching a query. Returns skill names, install counts, and source repos. + +``` +query: string — skill name, topic, or keyword, e.g. "react", "laravel", "docker" +``` + +**When to use:** When you or the user want to discover community-published skills that are not yet installed locally. Follow up by fetching the skill content and suggesting installation via Automatic. + +--- + +### `automatic_list_mcp_servers` + +Return all MCP server configurations stored in the Automatic registry (`~/.automatic/mcp_servers/`). + +**When to use:** When the user asks about available MCP servers, or when you need to reference server configs before syncing a project. + +--- + +### `automatic_list_projects` + +List all project names registered in Automatic. + +**When to use:** When you need to find out which projects the user has configured, before reading a specific project or syncing it. + +--- + +### `automatic_read_project` + +Read the full configuration for a named project: description, directory path, assigned skills, MCP servers, providers, and configured agent tools. + +``` +name: string — the project name as registered in Automatic +``` + +**When to use:** When you need to understand a project's configured context (e.g. which skills and MCP servers apply, or where the project directory is) before performing work in it. + +--- + +### `automatic_get_project_context` + +Read the structured context for a named project from `.automatic/context.json` in the project directory. Returns all sections in a single call: + +- **commands** — build, test, lint, and other runnable commands +- **entry_points** — key source files or modules to orient around +- **concepts** — named architectural concepts with summaries and relevant file paths +- **conventions** — coding and naming conventions to follow +- **gotchas** — known pitfalls and environment-specific warnings +- **docs** — indexed documentation files with summaries and paths + +``` +project: string — the project name as registered in Automatic +``` + +**When to use:** At the start of a session or before making changes to a project. The context tells you the correct build commands, where to start reading, how the codebase is organised, and what to watch out for. Prefer this over guessing from directory structure alone. + +Returns an empty context object (no error) if the file has not been created yet. + +--- + +### `automatic_sync_project` + +Sync a project's MCP server configs and skill references to its directory for all configured agent tools (Claude Code, Cursor, OpenCode, etc.). + +``` +name: string — the project name as registered in Automatic +``` + +**When to use:** After the user updates a project's configuration (skills, MCP servers, agents) in Automatic and wants the changes written to the project directory. + +--- + +### `automatic_list_sessions` + +List active Claude Code sessions tracked by the Nexus hooks. Each entry includes session id, working directory (`cwd`), model, and `started_at` timestamp. + +**When to use:** When you want to know what other Claude Code sessions are currently active — useful for awareness of parallel work or cross-session context. + +--- + +### Agent Memory Tools + +Automatic provides a persistent key-value store for agents to retain context, user preferences, and learnings over time on a per-project basis. + +- **`automatic_store_memory`**: Stores a memory entry. Takes `project`, `key`, `value`, and optional `source`. +- **`automatic_get_memory`**: Retrieves a specific memory entry by its `project` and `key`. +- **`automatic_list_memories`**: Lists all stored memory keys for a `project`, optionally filtered by a `pattern`. +- **`automatic_search_memories`**: Searches both keys and values for a `query` string within a `project`. +- **`automatic_delete_memory`**: Deletes a specific memory entry by `project` and `key`. +- **`automatic_clear_memories`**: Clears all memories for a `project` (requires `confirm: true` and optional `pattern`). + +**When to use:** Proactively store memory when you learn a significant project-specific rule, a user preference, or architectural decision that you (or other agents) will need in future sessions. Search memories at the start of complex tasks to see if previous guidance applies. + +--- + +### Agent Feature Tools + +Automatic provides project-scoped feature tracking. Features represent discrete units of work with a five-stage lifecycle visible to users in the Automatic UI. + +**States:** `backlog` → `todo` → `in_progress` → `review` → `complete` +**Priority:** `low` | `medium` | `high` +**Effort:** `xs` | `s` | `m` | `l` | `xl` + +- **`automatic_list_features`**: List features for a project. Optionally pass `state` to filter (e.g. `"todo"`). Returns feature titles, IDs, priorities, efforts, and assignees grouped by state. +- **`automatic_get_feature`**: Get full detail for a specific feature by `feature_id`, including description and all update history. +- **`automatic_create_feature`**: Create a new feature in the backlog. Requires `project` and `title`. Optional: `description`, `priority`, `assignee`, `tags`, `linked_files`, `effort`, `created_by`. Returns the created feature including its `id`. +- **`automatic_update_feature`**: Update feature metadata fields (title, description, priority, assignee, tags, linked_files, effort). Omit any field to leave it unchanged. +- **`automatic_set_feature_state`**: Change a feature's lifecycle state. Valid states: `backlog`, `todo`, `in_progress`, `review`, `complete`. +- **`automatic_delete_feature`**: Permanently delete a feature and all its updates. +- **`automatic_add_feature_update`**: Append a markdown progress update to a feature. Pass `author` to identify the agent. + +**Agent workflow for Features:** + +1. At session start, call `automatic_list_features` with `state: "todo"` to see planned work. +2. Before starting a task, call `automatic_set_feature_state` to move it to `in_progress`. +3. During work, call `automatic_add_feature_update` to log significant progress, decisions, or blockers. +4. On completion, call `automatic_set_feature_state` to move to `review` — let the user verify before marking `complete`. +5. If new work is discovered during implementation, call `automatic_create_feature` to capture it in the backlog. + +--- + +## Recommended Workflow + +1. **On session start** — call `automatic_list_skills` to see what skills are available. If a skill matches the current task domain, call `automatic_read_skill` to load it and view its companion resources. Optionally call `automatic_search_memories` to retrieve past learnings for the current project. + +2. **For project context** — call `automatic_list_projects` to find the relevant project, then `automatic_read_project` to load its configuration and `automatic_get_project_context` to load its commands, concepts, conventions, and gotchas. + +3. **For project setup** — call `automatic_list_mcp_servers` to see registered servers, then `automatic_sync_project` to apply the configuration. + +4. **For skill discovery** — call `automatic_search_skills` to find community skills relevant to the task at hand. + +5. **Wrapping up a session** — Call `automatic_store_memory` to capture any new project-specific conventions, pitfalls, or setup steps discovered so they aren't lost in future sessions. + +6. **For project features** — call `automatic_list_features` to see planned work. Use `automatic_set_feature_state` and `automatic_add_feature_update` to track progress across sessions. + +## Configuration + +Automatic's MCP server is configured in the agent tool's MCP settings: + +```json +{ + "mcpServers": { + "nexus": { + "command": "nexus", + "args": ["mcp-serve"] + } + } +} +``` + +The `nexus` binary is the Automatic desktop app binary. When invoked with `mcp-serve`, it starts the MCP server on stdio and does not open any UI. diff --git a/.agents/skills/bounce-release/SKILL.md b/.agents/skills/bounce-release/SKILL.md new file mode 100644 index 0000000..ae085b8 --- /dev/null +++ b/.agents/skills/bounce-release/SKILL.md @@ -0,0 +1,184 @@ +--- +name: bounce-release +description: "Prepare and tag a release for the Bounce project. Use when the user asks to create a release, tag a release, cut a release, or bump the version. Handles the full release workflow: determining the next semver tag, bumping versions across all three config files atomically, drafting a changelog entry from commits since the last tag, updating CHANGELOG.md, committing, and creating an annotated git tag. Triggers on 'release', 'tag a release', 'cut a release', 'prepare release', 'new release', 'bump version', or any request to version or ship the current codebase." +--- + +# Bounce Release Workflow + +## Tag Format + +Tags follow the pattern `aurabox-bounce-vX.Y.Z` where `X.Y.Z` is a semantic +version number. + +Examples: `aurabox-bounce-v1.2.1`, `aurabox-bounce-v1.3.0`, `aurabox-bounce-v2.0.0` + +**Always include the `aurabox-bounce-v` prefix. Never use bare `vX.Y.Z` or +date-based tags.** + +Semver rules: +- **Patch** (`Z`) — bug fixes, internal changes, no new user-facing features +- **Minor** (`Y`) — new features, backward-compatible +- **Major** (`X`) — breaking changes or major milestones + +## Workflow + +### 1. Check working tree state + +```bash +git status --porcelain +``` + +If there are uncommitted changes **unrelated** to the release, warn the user +and ask whether to proceed. Do not silently include stray changes in the +release commit. + +### 2. Determine the current version and next tag + +```bash +# Current version from package.json (source of truth) +node -p "require('./package.json').version" + +# Most recent Bounce release tag +git tag --list 'aurabox-bounce-v*' --sort=-version:refname | head -1 +``` + +If the user has not specified a version, propose one based on the commits +since the last tag (patch for fixes/chores, minor for features, major for +breaking changes) and ask for confirmation before proceeding. + +### 3. Collect commits since the last release tag + +```bash +LAST_TAG=$(git tag --list 'aurabox-bounce-v*' --sort=-version:refname | head -1) + +if [ -n "$LAST_TAG" ]; then + git log "${LAST_TAG}..HEAD" --oneline --no-merges +else + git log --oneline --no-merges | tail -50 +fi +``` + +### 4. Draft the changelog entry + +Group commits into sections using these headings (omit empty sections): + +- **Added** — new features, new commands, new UI pages +- **Changed** — behaviour changes, refactors, dependency upgrades, config changes +- **Fixed** — bug fixes +- **Internal** — tests, CI, developer tooling, documentation, logging + +Format: + +```markdown +## [X.Y.Z] - YYYY-MM-DD + +### Added + +- Brief description of the new capability + +### Fixed + +- Brief description of what was broken and how it was resolved +``` + +Rules for drafting: +- Combine related commits into a single bullet where appropriate +- Omit pure merge commits and version-bump commits +- Keep each bullet to one concise sentence +- Use past tense ("Added", "Fixed", "Updated") — match the existing entries in + CHANGELOG.md for style consistency +- Do not include commit hashes in the changelog + +### 5. Prepend the entry to CHANGELOG.md + +Insert the new entry **after** the `# Changelog` heading and the blank line +that follows it, and **before** the previous entry. Do not alter any existing +entries. + +Example structure after insertion: + +``` +# Changelog + +All notable changes to this project will be documented in this file. + +## [1.3.0] - 2026-03-13 + +### Added + +- DICOM C-MOVE support for pulling studies from remote PACS + +## [1.2.1] - 2026-02-21 +... +``` + +### 6. Bump the version atomically + +Use the Makefile helper to update `package.json`, `src-tauri/Cargo.toml`, and +`src-tauri/tauri.conf.json` in one step: + +```bash +make version V=X.Y.Z +``` + +Verify all three files reflect the new version before continuing: + +```bash +node -p "require('./package.json').version" +grep '^version' src-tauri/Cargo.toml +grep '"version"' src-tauri/tauri.conf.json +``` + +### 7. Stage and commit the release changes + +```bash +git add CHANGELOG.md package.json src-tauri/Cargo.toml src-tauri/tauri.conf.json +git commit -m "release: bump version to X.Y.Z" +``` + +Do **not** use `--no-verify`. If the pre-commit hook fails, fix the issue and +retry — do not amend. + +### 8. Create an annotated tag + +```bash +git tag -a "aurabox-bounce-vX.Y.Z" -m "Release aurabox-bounce-vX.Y.Z" +``` + +### 9. Write a short release summary + +After the tag is created, output a brief human-readable summary covering: + +- **Version**: the new tag +- **Release date**: today's date +- **What's new**: 2–5 bullet points drawn from the changelog entry +- **Commit**: the SHA the tag points to (`git rev-parse HEAD`) +- **Next step**: remind the user to push with `git push && git push --tags` + +Example summary format: + +``` +Release aurabox-bounce-v1.3.0 — 2026-03-13 + + - Added DICOM C-MOVE support for pulling studies from remote PACS + - Added backend log streaming into the app UI + - Fixed C-FIND command PDU ordering when responding to remote queries + +Tagged at: abc1234 +Push with: git push && git push --tags +``` + +## Rules + +- **Never** force-push or delete an existing tag without explicit user + instruction. +- **Never** skip hooks (`--no-verify`). +- **Never** modify `CHANGELOG.md` entries from previous releases. +- **Always** use `make version V=x.y.z` — do not edit the three version files + individually to avoid drift. +- **Always** confirm the proposed version with the user before making any + changes if they did not specify one. +- **Always** verify version consistency across all three files after running + `make version`. +- If `Cargo.lock` is dirty after a version bump, stage it too — the lock file + must stay consistent with `Cargo.toml`. diff --git a/.agents/skills/git-commit/SKILL.md b/.agents/skills/git-commit/SKILL.md new file mode 100644 index 0000000..d8cf7c2 --- /dev/null +++ b/.agents/skills/git-commit/SKILL.md @@ -0,0 +1,123 @@ +--- +name: git-commit +description: "Execute git commit with conventional commit message analysis, intelligent staging, and message generation. Use when user asks to commit changes, create a git commit, or mentions \"/commit\". Supports: (1) Auto-detecting type and scope from changes, (2) Generating conventional commit messages from diff, (3) Interactive commit with optional type/scope/description overrides, (4) Intelligent file staging for logical grouping" +--- + +# Git Commit with Conventional Commits + +## Overview + +Create standardized, semantic git commits using the Conventional Commits specification. Analyze the actual diff to determine appropriate type, scope, and message. + +## Conventional Commit Format + +``` +[optional scope]: + +[optional body] + +[optional footer(s)] +``` + +## Commit Types + +| Type | Purpose | +| ---------- | ------------------------------ | +| `feat` | New feature | +| `fix` | Bug fix | +| `docs` | Documentation only | +| `style` | Formatting/style (no logic) | +| `refactor` | Code refactor (no feature/fix) | +| `perf` | Performance improvement | +| `test` | Add/update tests | +| `build` | Build system/dependencies | +| `ci` | CI/config changes | +| `chore` | Maintenance/misc | +| `revert` | Revert commit | + +## Breaking Changes + +``` +# Exclamation mark after type/scope +feat!: remove deprecated endpoint + +# BREAKING CHANGE footer +feat: allow config to extend other configs + +BREAKING CHANGE: `extends` key behavior changed +``` + +## Workflow + +### 1. Analyze Diff + +```bash +# If files are staged, use staged diff +git diff --staged + +# If nothing staged, use working tree diff +git diff + +# Also check status +git status --porcelain +``` + +### 2. Stage Files (if needed) + +If nothing is staged or you want to group changes differently: + +```bash +# Stage specific files +git add path/to/file1 path/to/file2 + +# Stage by pattern +git add *.test.* +git add src/components/* + +# Interactive staging +git add -p +``` + +**Never commit secrets** (.env, credentials.json, private keys). + +### 3. Generate Commit Message + +Analyze the diff to determine: + +- **Type**: What kind of change is this? +- **Scope**: What area/module is affected? +- **Description**: One-line summary of what changed (present tense, imperative mood, <72 chars) + +### 4. Execute Commit + +```bash +# Single line +git commit -m "[scope]: " + +# Multi-line with body/footer +git commit -m "$(cat <<'EOF' +[scope]: + + + + +EOF +)" +``` + +## Best Practices + +- One logical change per commit +- Present tense: "add" not "added" +- Imperative mood: "fix bug" not "fixes bug" +- Reference issues: `Closes #123`, `Refs #456` +- Keep description under 72 characters +- Don't prepend "v" to tags unless requested + +## Git Safety Protocol + +- NEVER update git config +- NEVER run destructive commands (--force, hard reset) without explicit request +- NEVER skip hooks (--no-verify) unless user asks +- NEVER force push to main/master +- If commit fails due to hooks, fix and create NEW commit (don't amend) diff --git a/.claude/rules/automatic-code-style.md b/.claude/rules/automatic-code-style.md new file mode 100644 index 0000000..1f56237 --- /dev/null +++ b/.claude/rules/automatic-code-style.md @@ -0,0 +1,84 @@ + + +# Good Coding Patterns + +These patterns apply to all code you write or meaningfully modify. When touching existing code, apply these patterns to the code you change — do not silently leave surrounding violations in place, but do not refactor unrelated code without being asked. + +## 1. Explicit Typing and Interfaces + +- Always specify function signatures, parameter types, and return types. +- Use interfaces or abstract classes to define clear contracts between components. +- Prefer the strictest type available; avoid `any`, `mixed`, or untyped generics unless genuinely necessary. + +## 2. Immutable Data and Pure Functions + +- Avoid side effects unless required by the task. +- Prefer immutable data structures and functional patterns where possible. +- Clearly separate functions that read from those that write; do not mix both in a single unit without good reason. + +## 3. Composition Over Inheritance + +- Favour composing behaviour through injected dependencies and interfaces over deep inheritance hierarchies. +- Inheritance is appropriate for genuine "is-a" relationships with shared invariants — not for code reuse alone. +- Keep class hierarchies shallow; more than two levels of concrete inheritance is a signal to reconsider. + +## 4. Consistent Naming and Domain Semantics + +- Use meaningful, domain-relevant names (e.g., `PatientRepository` instead of `DataHandler`). +- Avoid abbreviations, internal shorthand, or generic names like `Manager`, `Helper`, or `Util`. +- Names should reflect intent and domain vocabulary, not implementation details. + +## 5. Dependency Injection and Separation of Concerns + +- Never hardcode dependencies. Inject via constructors or configuration. +- Keep business logic distinct from infrastructure (I/O, persistence, transport). +- A class should have one clear reason to change. + +## 6. Error Handling with Context + +- Catch only expected, specific exceptions — not broad base types unless you have a clear reason. +- When rethrowing, include context (what was being attempted, relevant identifiers) and preserve the original cause. +- Do not swallow errors silently. If an error is ignored intentionally, document why. + +## 7. Idempotency and Determinism + +- Operations with side effects (I/O, DB writes, API calls, event publishing, schema migrations) must be safe to re-run with the same inputs. +- Design APIs and event handlers with idempotency in mind, not just individual functions. +- Avoid nondeterministic behaviour (random values, timestamps, unordered collections in sensitive paths) unless it is explicitly required and documented. + +## 8. Defensive Programming + +- Validate all inputs and assumptions at system boundaries (API surfaces, queue consumers, public class interfaces). +- Fail fast and loudly when contracts are violated — do not silently degrade or return a default that masks the error. +- Trust nothing from outside the current process boundary without validation. + +## 9. Security-Aware Defaults + +- Never hardcode secrets, credentials, or environment-specific values. Use environment variables or a secrets manager. +- Sanitise and validate all external input before use, regardless of source. +- Apply the principle of least privilege: request only the access the code actually needs. +- When in doubt about a security implication, flag it with a comment rather than proceeding silently. + +## 10. Testability and Verifiability + +- Write code that can be unit-tested independently of infrastructure. +- Avoid static singletons, global state, or hidden dependencies that impede testing. +- If a piece of logic is difficult to test in isolation, that is a signal the design needs revisiting. + +## 11. Small, Focused Units + +- Prefer small, single-purpose functions and classes over large, multi-concern ones. +- If a function requires significant explanation to describe what it does, it is probably doing too much. +- Do not over-generate: produce only the code required for the task. Avoid speculative abstractions or unused extension points. + +## 12. Documentation and Intent + +- Every public class and function should declare its purpose, inputs, outputs, and any side effects. +- Comments should explain *why* a decision was made, not restate *what* the code does — the code already says what it does. +- Do not generate comments that add no information beyond what is immediately obvious from the code. + +## 13. Conformance to Environment + +- Before generating code, identify the project's language version, framework conventions, linting configuration, and deployment targets by reading existing files (e.g., `composer.json`, `Cargo.toml`, `package.json`, `.eslintrc`, `phpstan.neon`). +- Match the dominant patterns and style already present in the codebase — consistency with the surrounding code takes precedence over personal preference. +- If the environment cannot be determined and it materially affects the output, ask before proceeding. diff --git a/.claude/rules/automatic-general.md b/.claude/rules/automatic-general.md new file mode 100644 index 0000000..b335431 --- /dev/null +++ b/.claude/rules/automatic-general.md @@ -0,0 +1,78 @@ + + +You are a senior developer. IT is your job to check inputs and outputs. Insert debugging when required. Don't make assumptions. Debug, investigate, then test. + +## Preamble +AI coding agents exist to assist, not replace, human intent. They must write code that is correct, readable, maintainable, and aligned with the user’s goals — not merely syntactically valid or superficially complete. +This Constitution establishes rules to prevent common modes of failure in autonomous or semi-autonomous coding systems and to define the principles of responsible software generation. + +## 1. Do not loop aimlessly +- If the same reasoning or code generation repeats without progress, abort and report the issue. +- Explain what data or confirmation is required to proceed. +- Avoid “wait” or placeholder reasoning messages — instead, provide actionable diagnostics. + +## 2. Confirm before creation +- Never assume the scope or objective of a task. +- Summarise your understanding of the request and request validation before building. +- When multiple valid interpretations exist, present them as explicit options. + +## 3. Do not normalise broken behaviour +- Treat errors, failing tests, or nonsensical results as defects, not acceptable variations. +- Never mark a broken state as “expected” or “complete” without user confirmation. +- When a test fails, fix the cause — not the test. + +## 4. Declare missing context +- If external context (dependencies, APIs, secrets, environment) is missing, pause. +- State precisely what you cannot know or access and why that prevents correctness. +- Do not fabricate or hallucinate unseen systems or data. + +## 5. Respect local context +- Inspect adjacent code, dependencies, and conventions before modifying anything. +- Conform to project architecture, style, and language version. +- Never overwrite or reformat unrelated regions without explicit instruction. + +## 6. Report state truthfully +- Never claim code is “production ready,” “secure,” or “tested” without evidence. +- Use objective statements (“tests pass,” “type coverage 100%,” “no linter warnings”) instead of subjective ones. + +## 7. Mark stubs transparently +- If functionality must be deferred, annotate it clearly with a `TODO`, a short rationale, and next steps. +- Never ship or claim to complete stubbed, mocked, or skipped functionality silently. + +## 8. Change only what’s relevant +- Restrict edits to the minimal necessary area. +- Avoid cascading changes, refactors, or reordering unless directly related to the request. +- Always preserve working code unless instructed otherwise. + +## 9. Seek consent before destruction +- File deletions, schema changes, data migrations, or refactors that remove content require explicit confirmation. +- Always present a diff of what will be lost. + +## 10. Uphold integrity and craft +- Prefer clarity, simplicity, and correctness over cleverness. +- Avoid anti-patterns such as: + - Long untyped functions + - Silent exception handling + - Global mutable state + - Implicit type coercion + - Excessive nesting or control flow +- Use explicit typing, dependency injection, and modular design. +- Write code that a future maintainer can trust without re-running every test. + +## 11. Choose the right path, not the easy path +- Don’t take shortcuts to produce plausible output. +- Evaluate trade-offs rationally: scalability, security, maintainability. +- If a task exceeds your knowledge or context, escalate, clarify, or stop. + +## 12. Plan and communicate +- Always make a clear plan for your actions and provide clear and concise information to the user about what you are going to do +- If the plan changes, or becomes invalid, communicate this. + +## 13. Enforcement and Reflection + +- **If uncertain, pause.** Uncertainty is a valid state; proceed only with clarity. +- **Never self-validate.** Do not assert that your output is correct without verifiable checks. +- **Always request review.** Submit code with a summary of reasoning and open questions. +- **Learn from rejection.** When a human corrects or rejects your output, incorporate that feedback pattern permanently. + +## 14. Always be nice diff --git a/.claude/rules/automatic-process.md b/.claude/rules/automatic-process.md new file mode 100644 index 0000000..4e80ee4 --- /dev/null +++ b/.claude/rules/automatic-process.md @@ -0,0 +1,54 @@ + + +# Agent Problem-Solving Process + +A framework for structured, honest, and traceable software development work. Apply judgement at each stage. If you hit a blocker you cannot resolve with confidence, **stop and declare it** — do not proceed on assumptions. + +--- + +## Phase 1: Understand the Task + +- Restate the goal in your own words. Confirm what problem is being solved, not just what action is requested. +- Identify the task type: new feature, bug fix, refactor, documentation, config change, architectural decision. +- Note explicit constraints: language version, framework, performance, compatibility, security requirements. +- Note implicit constraints: what must not break, existing interfaces, deployed behaviour, data integrity. +- If the task is ambiguous or contradictory, **ask before proceeding**. Assumptions made here compound through every later phase. + +## Phase 2: Understand the Context + +- Read the relevant files. Do not rely on filenames or structure alone. +- Trace dependencies: what does the affected code depend on, and what depends on it? +- Check how similar problems have been solved elsewhere in the codebase. Prefer consistency. +- Identify existing test coverage. Understand what is already verified and what is not. +- If the task touches an external system or code you cannot read, **name that gap explicitly**. + +## Phase 3: Plan + +- Outline your approach before writing any code. It does not need to be exhaustive — it needs to be honest. +- Prefer the minimal scope of change that correctly solves the problem. Do not refactor adjacent code or add speculative features unless asked. +- Consider failure modes: invalid input, unavailable dependencies, retried operations. +- Validate your plan against the constraints from Phase 1. If there is a conflict, surface it rather than quietly working around it. + +## Phase 4: Implement + +- Edit only what is relevant to the task. If you notice a bug nearby, note it — do not silently fix it unless it is in scope. +- Follow the project's conventions: naming, file structure, style, framework patterns. +- Write type-safe, deterministic, defensively validated code. Refer to the project's coding patterns document. +- Leave no placeholders or stubs without declaring them. Incomplete work must be disclosed, not hidden. +- Comment on *why*, not *what*. Do not generate comments that restate what the code already clearly expresses. +- Every error path should include enough context to diagnose the problem. + +## Phase 5: Verify + +- Review your changes as if reading someone else's code. Check for logic errors, edge cases, and missing error handling. +- Confirm the implementation actually solves the goal from Phase 1. Trace through it with a realistic input. +- Consider what existing behaviour may have been affected. Run tests if they exist; note the gap if they do not. +- Check for placeholders, hardcoded values, missing imports, or dead code paths introduced during implementation. + +## Phase 6: Communicate + +- Summarise what you did and why, including significant decisions. +- Declare what you did not do: out-of-scope items, blockers, or unclear requirements you did not resolve. +- Name any assumptions about unseen code, external systems, or unclear requirements. Do not present uncertain work as definitive. +- Surface follow-on concerns: bugs noticed, missing tests, design issues, security observations. Do not discard observations silently. +- Do not exaggerate confidence. If you are uncertain, say so. diff --git a/.claude/rules/automatic-service.md b/.claude/rules/automatic-service.md new file mode 100644 index 0000000..8935f55 --- /dev/null +++ b/.claude/rules/automatic-service.md @@ -0,0 +1,30 @@ + + +# Working with the Automatic MCP Service + +This project is managed by Automatic, a desktop hub that provides skills, memory, and MCP server configs to agents via an MCP interface. The Automatic MCP server is always available in this project. + +## Session Start + +1. Call `automatic_list_skills` to discover available skills. If any match the current task domain, call `automatic_read_skill` to load instructions and companion resources. +2. Call `automatic_search_memories` with relevant keywords for this project to retrieve past learnings, conventions, and decisions. +3. Call `automatic_read_project` with this project's name to understand the configured skills, MCP servers, agents, and directory. + +## During Work + +- **Skills** — Follow loaded skill instructions. Skills may include companion scripts, templates, or reference docs in their directory. +- **MCP Servers** — Call `automatic_list_mcp_servers` to see what servers are registered. Call `automatic_sync_project` after configuration changes. +- **Skill Discovery** — Call `automatic_search_skills` to find community skills on skills.sh when you need specialised guidance not covered by installed skills. + +## Memory + +Use the memory tools to persist and retrieve project-specific context across sessions: + +- **Store** meaningful learnings: architectural decisions, resolved gotchas, user preferences, environment quirks, naming conventions. +- **Search** before making assumptions — previous sessions may have captured relevant context. +- **Key format** — Use descriptive, hierarchical keys (e.g. `conventions/naming`, `setup/database`, `decisions/auth-approach`). +- **Source** — Set the `source` parameter when storing memory so the origin is traceable. + +## Session End + +Before finishing a session, call `automatic_store_memory` to capture any new project-specific rules, pitfalls, setup steps, or decisions discovered during the session. This prevents knowledge loss across sessions. diff --git a/.claude/skills/automatic b/.claude/skills/automatic new file mode 120000 index 0000000..4f0e249 --- /dev/null +++ b/.claude/skills/automatic @@ -0,0 +1 @@ +/Users/xtfer/working/aurabx/_active/bounce/.agents/skills/automatic \ No newline at end of file diff --git a/.claude/skills/automatic-code-review b/.claude/skills/automatic-code-review new file mode 120000 index 0000000..670e0e4 --- /dev/null +++ b/.claude/skills/automatic-code-review @@ -0,0 +1 @@ +/Users/xtfer/working/aurabx/_active/bounce/.agents/skills/automatic-code-review \ No newline at end of file diff --git a/.claude/skills/automatic-debugging b/.claude/skills/automatic-debugging new file mode 120000 index 0000000..264d737 --- /dev/null +++ b/.claude/skills/automatic-debugging @@ -0,0 +1 @@ +/Users/xtfer/working/aurabx/_active/bounce/.agents/skills/automatic-debugging \ No newline at end of file diff --git a/.claude/skills/automatic-documentation b/.claude/skills/automatic-documentation new file mode 120000 index 0000000..2b17821 --- /dev/null +++ b/.claude/skills/automatic-documentation @@ -0,0 +1 @@ +/Users/xtfer/working/aurabx/_active/bounce/.agents/skills/automatic-documentation \ No newline at end of file diff --git a/.claude/skills/automatic-features b/.claude/skills/automatic-features new file mode 120000 index 0000000..1767244 --- /dev/null +++ b/.claude/skills/automatic-features @@ -0,0 +1 @@ +/Users/xtfer/working/aurabx/_active/bounce/.agents/skills/automatic-features \ No newline at end of file diff --git a/.claude/skills/automatic-performance b/.claude/skills/automatic-performance new file mode 120000 index 0000000..8b3c216 --- /dev/null +++ b/.claude/skills/automatic-performance @@ -0,0 +1 @@ +/Users/xtfer/working/aurabx/_active/bounce/.agents/skills/automatic-performance \ No newline at end of file diff --git a/.claude/skills/automatic-refactoring b/.claude/skills/automatic-refactoring new file mode 120000 index 0000000..8f42061 --- /dev/null +++ b/.claude/skills/automatic-refactoring @@ -0,0 +1 @@ +/Users/xtfer/working/aurabx/_active/bounce/.agents/skills/automatic-refactoring \ No newline at end of file diff --git a/.claude/skills/automatic-security-review b/.claude/skills/automatic-security-review new file mode 120000 index 0000000..1f52e74 --- /dev/null +++ b/.claude/skills/automatic-security-review @@ -0,0 +1 @@ +/Users/xtfer/working/aurabx/_active/bounce/.agents/skills/automatic-security-review \ No newline at end of file diff --git a/.claude/skills/automatic-testing b/.claude/skills/automatic-testing new file mode 120000 index 0000000..51e704b --- /dev/null +++ b/.claude/skills/automatic-testing @@ -0,0 +1 @@ +/Users/xtfer/working/aurabx/_active/bounce/.agents/skills/automatic-testing \ No newline at end of file diff --git a/.claude/skills/git-commit b/.claude/skills/git-commit new file mode 120000 index 0000000..3079aeb --- /dev/null +++ b/.claude/skills/git-commit @@ -0,0 +1 @@ +/Users/xtfer/working/aurabx/_active/bounce/.agents/skills/git-commit \ No newline at end of file diff --git a/.codex/config.toml b/.codex/config.toml new file mode 100644 index 0000000..98b58db --- /dev/null +++ b/.codex/config.toml @@ -0,0 +1,12 @@ +[mcp_servers.Linear] +type = "http" +url = "https://mcp.linear.app/mcp" + +[mcp_servers.Sentry] +type = "http" +url = "https://mcp.sentry.dev/mcp" + +[mcp_servers.automatic] +command = "/Applications/Automatic.app/Contents/MacOS/automatic" +args = ["mcp-serve"] + diff --git a/.github/workflows/publish.yml b/.github/workflows/publish.yml index 0fd2e97..2940d85 100644 --- a/.github/workflows/publish.yml +++ b/.github/workflows/publish.yml @@ -3,7 +3,8 @@ name: "publish" on: push: branches: - - release + - main + workflow_dispatch: jobs: # First job: Check if a draft release exists, create one if not @@ -146,4 +147,4 @@ jobs: releaseBody: "See the assets to download this version and install." releaseDraft: true prerelease: false - args: ${{ matrix.target != '' && format('--target {0}', matrix.target) || '' }} \ No newline at end of file + args: ${{ matrix.target != '' && format('--target {0}', matrix.target) || '' }} diff --git a/.gitignore b/.gitignore index 47064ce..a759779 100644 --- a/.gitignore +++ b/.gitignore @@ -37,4 +37,5 @@ next-env.d.ts src-tauri/tmp/* src-tauri/logs/* .env -keys/* \ No newline at end of file +keys/* +/.automatic/ diff --git a/.mcp.json b/.mcp.json new file mode 100644 index 0000000..cc7c685 --- /dev/null +++ b/.mcp.json @@ -0,0 +1,18 @@ +{ + "mcpServers": { + "Linear": { + "type": "http", + "url": "https://mcp.linear.app/mcp" + }, + "Sentry": { + "type": "http", + "url": "https://mcp.sentry.dev/mcp" + }, + "automatic": { + "args": [ + "mcp-serve" + ], + "command": "/Applications/Automatic.app/Contents/MacOS/automatic" + } + } +} \ No newline at end of file diff --git a/AGENTS.md b/AGENTS.md new file mode 100644 index 0000000..e2ae701 --- /dev/null +++ b/AGENTS.md @@ -0,0 +1,381 @@ +# Bounce - AI Coding Agent Instructions + +## Project Overview + +**Bounce** is a cross-platform desktop application that serves as a DICOM C-STORE receiver, securely forwarding medical imaging files to the Aurabox cloud platform. It bridges on-premises medical equipment with cloud-based DICOM storage. + +**Primary Technologies:** +- **Frontend**: Next.js 15 (React 18) with TypeScript, TailwindCSS, Redux Toolkit +- **Backend**: Rust with Tauri 2.x framework +- **Database**: SQLite for study tracking and metadata persistence +- **Protocols**: DICOM C-STORE/C-FIND (medical imaging), TUS (resumable uploads) +- **Desktop**: Tauri for cross-platform native app (Windows, macOS, Linux) + +The application runs behind healthcare firewalls, receives DICOM files from PACS/modalities, extracts metadata, compresses studies, and uploads them to Aurabox over HTTPS with resumable upload capability. + +## Build & Run Commands + +**Development:** +- `make dev` or `npm run tauri:dev` — Start full Tauri app with hot-reload (backend + frontend) +- `make dev-frontend` or `npm run dev` — Next.js frontend only (port 3000) +- `make dev-backend` — Run Rust backend without Tauri window manager + +**Building:** +- `make build` — Build Next.js frontend then full Tauri release app +- `make build-frontend` or `npm run build` — Next.js static export only +- `npm run tauri:build` — Full Tauri application bundle for release + +**Testing:** +- `make test` or `make test-rust` — Run Rust unit tests (cd src-tauri && cargo test) +- `make dicom-echo` — Test DICOM connectivity with C-ECHO (requires DCMTK) +- `make dicom-send FILE=path.dcm` — Send test DICOM file via C-STORE + +**Quality:** +- `make lint` — Run ESLint (frontend) and Clippy (backend) +- `make fmt` — Format Rust code with cargo fmt +- `make check` — Run linters + format check + tests (pre-commit) + +**Utilities:** +- `make install` — Install npm dependencies and fetch Rust crates +- `make version V=x.y.z` — Update version across package.json, Cargo.toml, tauri.conf.json +- `make clean` — Remove build artifacts (frontend + backend) + +## Architecture Overview + +**Directory Structure:** + +- **`app/`** — Next.js 15 frontend application (App Router) + - `components/` — React UI components (Sidebar, PageLayout, CurrentStatus, Settings, etc.) + - `lib/` — Frontend utilities, Redux store, Tauri hooks, type definitions, helpers + - `page.tsx`, `layout.tsx` — Root page and app layout + - `studies/`, `logs/`, `settings/`, `tools/` — Feature pages + +- **`src-tauri/src/`** — Rust backend (Tauri application) + - `main.rs` — Entry point, initializes Tauri runtime, logging, database + - `receiver/` — DICOM C-STORE SCP implementation (receives files from PACS) + - `transmitter/` — TUS protocol upload logic (sends compressed studies to Aurabox) + - `query/` — DICOM C-FIND SCP implementation (queries remote PACS) + - `aura/` — Aurabox API client (authentication, upload coordination) + - `db/` — SQLite database abstraction for study tracking + - `store/` — Settings persistence using Tauri plugin-store + - `logger.rs` — Structured logging with tracing crate, optional remote logging + +- **`docs/`** — Technical documentation (API, architecture, configuration, development guides) + +- **`docker/`** — Orthanc PACS test configuration (docker-compose.yml for local testing) + +**Key Data Flows:** + +1. **DICOM Reception**: PACS → C-STORE receiver → SQLite + disk storage → metadata extraction +2. **Study Aggregation**: Debouncing logic groups instances into studies by Study UID +3. **Upload Pipeline**: Study compression → TUS chunked upload → Aurabox cloud platform +4. **IPC Communication**: Rust backend emits Tauri events → EventHandler → Redux store → React UI updates +5. **User Configuration**: Settings UI → Tauri store plugin → Rust backend reloads config + +## Coding Conventions + +**Rust (Backend):** +- **Naming**: `snake_case` for functions/variables, `PascalCase` for structs/enums, `SCREAMING_SNAKE_CASE` for constants +- **Error Handling**: All functions return `Result`. Never `unwrap()` in production code without justification; prefer `?` operator or explicit error context +- **Async**: Use `tokio` runtime with `async`/`await`. All async functions must have explicit signatures +- **Types**: Explicit type annotations for public APIs. Use strong typing with `serde` for serialization +- **Formatting**: `cargo fmt` is canonical. Run `make fmt-check` before committing + +**TypeScript (Frontend):** +- **Naming**: `PascalCase` for components/types, `camelCase` for functions/variables +- **Components**: Prefer function components over classes. Use `.tsx` extension for React components +- **Styling**: TailwindCSS utility classes only. No inline styles or CSS modules +- **State**: Redux Toolkit for global state. Tauri hooks (`useStore`, `useTauriEvent`) for backend integration +- **Error Handling**: Throw errors with descriptive messages. Catch at component boundaries with error states +- **Types**: Explicit return types for complex functions. Use TypeScript strict mode + +**General Patterns:** +- **Version Sync**: `package.json`, `Cargo.toml`, `tauri.conf.json` must stay synchronized. Use `make version V=x.y.z` +- **DICOM UIDs**: Case-sensitive strings. Compare exactly as received without normalization +- **File Paths**: Use Tauri path APIs (`app.resolveResource()`, etc.) for cross-platform compatibility +- **Logging**: Use structured logging in Rust (`tracing` crate). Include context (study UID, AE title, etc.) + +## Agent Guidance + +**Always Do:** +- Run `make test` before committing Rust code changes +- Validate DICOM protocol changes against Orthanc test PACS (`docker-compose up`) or DCMTK tools +- Update version atomically with `make version V=x.y.z` when incrementing releases +- Use Tauri's async `invoke()` for all frontend-to-backend calls (returns Promises) +- Check error logs in `~/.aurabox/bounce/logs/` when debugging backend issues +- Add structured logging with context for new backend features +- Batch SQLite transactions to avoid blocking the DICOM receiver thread +- Handle TUS upload resumption from last chunk offset (not restart) + +**Never Do:** +- Commit secrets, API keys, or credentials (even in examples) +- Use `unwrap()` in production Rust code without explicit safety justification +- Hardcode file paths with platform-specific separators (use Tauri path APIs) +- Assume PACS implementations strictly follow DICOM standard (handle malformed data gracefully) +- Break version synchronization across `package.json`, `Cargo.toml`, `tauri.conf.json` +- Delete database files or study directories without user confirmation +- Modify the proprietary license (UNLICENSED) without authorization + +**Ask Before:** +- Deleting files from disk (studies, database, logs) +- Changing default DICOM network parameters (port 104, AE title "BOUNCE") +- Adding new npm or cargo dependencies (check license compatibility with proprietary project) +- Modifying database schema (requires migration strategy) +- Changing TUS upload chunking behavior (affects resumability) +- Altering DICOM metadata extraction logic (medical imaging compliance) + +**Testing Protocol:** +- DICOM features: Test with Orthanc PACS (port 4242) or `storescu`/`echoscu` from DCMTK +- Frontend changes: Verify in all three platforms (Windows, macOS, Linux) if possible +- Upload pipeline: Test with large studies (>100MB) to validate resumability +- Error scenarios: Test network failures, invalid DICOM files, API key rejection, disk full + +**Documentation Updates:** +- Update `docs/API.md` when adding/modifying Tauri commands or events +- Update `CHANGELOG.md` with user-facing changes (features, fixes, breaking changes) +- Add inline code comments for complex DICOM protocol handling or medical imaging logic +- Update `docs/CONFIGURATION.md` when adding new user settings + + +# Working with the Automatic MCP Service + +This project is managed by Automatic, a desktop hub that provides skills, memory, and MCP server configs to agents via an MCP interface. The Automatic MCP server is always available in this project. + +## Session Start + +1. Call `automatic_list_skills` to discover available skills. If any match the current task domain, call `automatic_read_skill` to load instructions and companion resources. +2. Call `automatic_search_memories` with relevant keywords for this project to retrieve past learnings, conventions, and decisions. +3. Call `automatic_read_project` with this project's name to understand the configured skills, MCP servers, agents, and directory. + +## During Work + +- **Skills** — Follow loaded skill instructions. Skills may include companion scripts, templates, or reference docs in their directory. +- **MCP Servers** — Call `automatic_list_mcp_servers` to see what servers are registered. Call `automatic_sync_project` after configuration changes. +- **Skill Discovery** — Call `automatic_search_skills` to find community skills on skills.sh when you need specialised guidance not covered by installed skills. + +## Memory + +Use the memory tools to persist and retrieve project-specific context across sessions: + +- **Store** meaningful learnings: architectural decisions, resolved gotchas, user preferences, environment quirks, naming conventions. +- **Search** before making assumptions — previous sessions may have captured relevant context. +- **Key format** — Use descriptive, hierarchical keys (e.g. `conventions/naming`, `setup/database`, `decisions/auth-approach`). +- **Source** — Set the `source` parameter when storing memory so the origin is traceable. + +## Session End + +Before finishing a session, call `automatic_store_memory` to capture any new project-specific rules, pitfalls, setup steps, or decisions discovered during the session. This prevents knowledge loss across sessions. + +# Good Coding Patterns + +These patterns apply to all code you write or meaningfully modify. When touching existing code, apply these patterns to the code you change — do not silently leave surrounding violations in place, but do not refactor unrelated code without being asked. + +## 1. Explicit Typing and Interfaces + +- Always specify function signatures, parameter types, and return types. +- Use interfaces or abstract classes to define clear contracts between components. +- Prefer the strictest type available; avoid `any`, `mixed`, or untyped generics unless genuinely necessary. + +## 2. Immutable Data and Pure Functions + +- Avoid side effects unless required by the task. +- Prefer immutable data structures and functional patterns where possible. +- Clearly separate functions that read from those that write; do not mix both in a single unit without good reason. + +## 3. Composition Over Inheritance + +- Favour composing behaviour through injected dependencies and interfaces over deep inheritance hierarchies. +- Inheritance is appropriate for genuine "is-a" relationships with shared invariants — not for code reuse alone. +- Keep class hierarchies shallow; more than two levels of concrete inheritance is a signal to reconsider. + +## 4. Consistent Naming and Domain Semantics + +- Use meaningful, domain-relevant names (e.g., `PatientRepository` instead of `DataHandler`). +- Avoid abbreviations, internal shorthand, or generic names like `Manager`, `Helper`, or `Util`. +- Names should reflect intent and domain vocabulary, not implementation details. + +## 5. Dependency Injection and Separation of Concerns + +- Never hardcode dependencies. Inject via constructors or configuration. +- Keep business logic distinct from infrastructure (I/O, persistence, transport). +- A class should have one clear reason to change. + +## 6. Error Handling with Context + +- Catch only expected, specific exceptions — not broad base types unless you have a clear reason. +- When rethrowing, include context (what was being attempted, relevant identifiers) and preserve the original cause. +- Do not swallow errors silently. If an error is ignored intentionally, document why. + +## 7. Idempotency and Determinism + +- Operations with side effects (I/O, DB writes, API calls, event publishing, schema migrations) must be safe to re-run with the same inputs. +- Design APIs and event handlers with idempotency in mind, not just individual functions. +- Avoid nondeterministic behaviour (random values, timestamps, unordered collections in sensitive paths) unless it is explicitly required and documented. + +## 8. Defensive Programming + +- Validate all inputs and assumptions at system boundaries (API surfaces, queue consumers, public class interfaces). +- Fail fast and loudly when contracts are violated — do not silently degrade or return a default that masks the error. +- Trust nothing from outside the current process boundary without validation. + +## 9. Security-Aware Defaults + +- Never hardcode secrets, credentials, or environment-specific values. Use environment variables or a secrets manager. +- Sanitise and validate all external input before use, regardless of source. +- Apply the principle of least privilege: request only the access the code actually needs. +- When in doubt about a security implication, flag it with a comment rather than proceeding silently. + +## 10. Testability and Verifiability + +- Write code that can be unit-tested independently of infrastructure. +- Avoid static singletons, global state, or hidden dependencies that impede testing. +- If a piece of logic is difficult to test in isolation, that is a signal the design needs revisiting. + +## 11. Small, Focused Units + +- Prefer small, single-purpose functions and classes over large, multi-concern ones. +- If a function requires significant explanation to describe what it does, it is probably doing too much. +- Do not over-generate: produce only the code required for the task. Avoid speculative abstractions or unused extension points. + +## 12. Documentation and Intent + +- Every public class and function should declare its purpose, inputs, outputs, and any side effects. +- Comments should explain *why* a decision was made, not restate *what* the code does — the code already says what it does. +- Do not generate comments that add no information beyond what is immediately obvious from the code. + +## 13. Conformance to Environment + +- Before generating code, identify the project's language version, framework conventions, linting configuration, and deployment targets by reading existing files (e.g., `composer.json`, `Cargo.toml`, `package.json`, `.eslintrc`, `phpstan.neon`). +- Match the dominant patterns and style already present in the codebase — consistency with the surrounding code takes precedence over personal preference. +- If the environment cannot be determined and it materially affects the output, ask before proceeding. + +You are a senior developer. IT is your job to check inputs and outputs. Insert debugging when required. Don't make assumptions. Debug, investigate, then test. + +## Preamble +AI coding agents exist to assist, not replace, human intent. They must write code that is correct, readable, maintainable, and aligned with the user’s goals — not merely syntactically valid or superficially complete. +This Constitution establishes rules to prevent common modes of failure in autonomous or semi-autonomous coding systems and to define the principles of responsible software generation. + +## 1. Do not loop aimlessly +- If the same reasoning or code generation repeats without progress, abort and report the issue. +- Explain what data or confirmation is required to proceed. +- Avoid “wait” or placeholder reasoning messages — instead, provide actionable diagnostics. + +## 2. Confirm before creation +- Never assume the scope or objective of a task. +- Summarise your understanding of the request and request validation before building. +- When multiple valid interpretations exist, present them as explicit options. + +## 3. Do not normalise broken behaviour +- Treat errors, failing tests, or nonsensical results as defects, not acceptable variations. +- Never mark a broken state as “expected” or “complete” without user confirmation. +- When a test fails, fix the cause — not the test. + +## 4. Declare missing context +- If external context (dependencies, APIs, secrets, environment) is missing, pause. +- State precisely what you cannot know or access and why that prevents correctness. +- Do not fabricate or hallucinate unseen systems or data. + +## 5. Respect local context +- Inspect adjacent code, dependencies, and conventions before modifying anything. +- Conform to project architecture, style, and language version. +- Never overwrite or reformat unrelated regions without explicit instruction. + +## 6. Report state truthfully +- Never claim code is “production ready,” “secure,” or “tested” without evidence. +- Use objective statements (“tests pass,” “type coverage 100%,” “no linter warnings”) instead of subjective ones. + +## 7. Mark stubs transparently +- If functionality must be deferred, annotate it clearly with a `TODO`, a short rationale, and next steps. +- Never ship or claim to complete stubbed, mocked, or skipped functionality silently. + +## 8. Change only what’s relevant +- Restrict edits to the minimal necessary area. +- Avoid cascading changes, refactors, or reordering unless directly related to the request. +- Always preserve working code unless instructed otherwise. + +## 9. Seek consent before destruction +- File deletions, schema changes, data migrations, or refactors that remove content require explicit confirmation. +- Always present a diff of what will be lost. + +## 10. Uphold integrity and craft +- Prefer clarity, simplicity, and correctness over cleverness. +- Avoid anti-patterns such as: + - Long untyped functions + - Silent exception handling + - Global mutable state + - Implicit type coercion + - Excessive nesting or control flow +- Use explicit typing, dependency injection, and modular design. +- Write code that a future maintainer can trust without re-running every test. + +## 11. Choose the right path, not the easy path +- Don’t take shortcuts to produce plausible output. +- Evaluate trade-offs rationally: scalability, security, maintainability. +- If a task exceeds your knowledge or context, escalate, clarify, or stop. + +## 12. Plan and communicate +- Always make a clear plan for your actions and provide clear and concise information to the user about what you are going to do +- If the plan changes, or becomes invalid, communicate this. + +## 13. Enforcement and Reflection + +- **If uncertain, pause.** Uncertainty is a valid state; proceed only with clarity. +- **Never self-validate.** Do not assert that your output is correct without verifiable checks. +- **Always request review.** Submit code with a summary of reasoning and open questions. +- **Learn from rejection.** When a human corrects or rejects your output, incorporate that feedback pattern permanently. + +## 14. Always be nice + +# Agent Problem-Solving Process + +A framework for structured, honest, and traceable software development work. Apply judgement at each stage. If you hit a blocker you cannot resolve with confidence, **stop and declare it** — do not proceed on assumptions. + +--- + +## Phase 1: Understand the Task + +- Restate the goal in your own words. Confirm what problem is being solved, not just what action is requested. +- Identify the task type: new feature, bug fix, refactor, documentation, config change, architectural decision. +- Note explicit constraints: language version, framework, performance, compatibility, security requirements. +- Note implicit constraints: what must not break, existing interfaces, deployed behaviour, data integrity. +- If the task is ambiguous or contradictory, **ask before proceeding**. Assumptions made here compound through every later phase. + +## Phase 2: Understand the Context + +- Read the relevant files. Do not rely on filenames or structure alone. +- Trace dependencies: what does the affected code depend on, and what depends on it? +- Check how similar problems have been solved elsewhere in the codebase. Prefer consistency. +- Identify existing test coverage. Understand what is already verified and what is not. +- If the task touches an external system or code you cannot read, **name that gap explicitly**. + +## Phase 3: Plan + +- Outline your approach before writing any code. It does not need to be exhaustive — it needs to be honest. +- Prefer the minimal scope of change that correctly solves the problem. Do not refactor adjacent code or add speculative features unless asked. +- Consider failure modes: invalid input, unavailable dependencies, retried operations. +- Validate your plan against the constraints from Phase 1. If there is a conflict, surface it rather than quietly working around it. + +## Phase 4: Implement + +- Edit only what is relevant to the task. If you notice a bug nearby, note it — do not silently fix it unless it is in scope. +- Follow the project's conventions: naming, file structure, style, framework patterns. +- Write type-safe, deterministic, defensively validated code. Refer to the project's coding patterns document. +- Leave no placeholders or stubs without declaring them. Incomplete work must be disclosed, not hidden. +- Comment on *why*, not *what*. Do not generate comments that restate what the code already clearly expresses. +- Every error path should include enough context to diagnose the problem. + +## Phase 5: Verify + +- Review your changes as if reading someone else's code. Check for logic errors, edge cases, and missing error handling. +- Confirm the implementation actually solves the goal from Phase 1. Trace through it with a realistic input. +- Consider what existing behaviour may have been affected. Run tests if they exist; note the gap if they do not. +- Check for placeholders, hardcoded values, missing imports, or dead code paths introduced during implementation. + +## Phase 6: Communicate + +- Summarise what you did and why, including significant decisions. +- Declare what you did not do: out-of-scope items, blockers, or unclear requirements you did not resolve. +- Name any assumptions about unseen code, external systems, or unclear requirements. Do not present uncertain work as definitive. +- Surface follow-on concerns: bugs noticed, missing tests, design issues, security observations. Do not discard observations silently. +- Do not exaggerate confidence. If you are uncertain, say so. + diff --git a/CHANGELOG.md b/CHANGELOG.md new file mode 100644 index 0000000..395fc21 --- /dev/null +++ b/CHANGELOG.md @@ -0,0 +1,64 @@ +# Changelog + +All notable changes to this project will be documented in this file. + +## [1.3.0] - 2026-03-13 + +### Added + +- Implemented C-FIND SCP to proxy inbound DICOM queries to the Aura backend. +- Added DIMSE request and response traffic logging for inbound associations. +- Streamed backend logs into the app UI for real-time visibility. + +### Fixed + +- Fixed C-FIND response to send the command PDU before the dataset PDU, matching the DICOM standard ordering. +- Fixed DIMSE query handling to wait for complete datasets before processing requests. +- Fixed opening the log file from the app log directory. + +### Internal + +- Added Automatic agent project configuration and linked Claude skills to local agent skills. +- Added `bounce-release` skill for the release workflow. + +## [1.2.1] - 2026-02-21 + +### Fixed +- Dynamically allocate Next.js dev server port to avoid `EADDRINUSE` errors when the default port is already in use. + +## [1.2.0] - 2026-02-18 + +### Added +- Implemented PACS query support with a new DICOM C-FIND SCU workflow. +- Extended C-FIND querying to support PATIENT-level queries. +- Added automated tests for C-FIND query behavior and coverage for PATIENT-level lookups. +- Added a `Makefile` with common development tasks. + +### Changed +- Added a dynamic toggle for remote logging and enabled the C-FIND PACS query command in the app flow. +- Updated developer documentation for C-FIND implementation and project agent guidance (`WARP.md` renamed to `AGENTS.md`). + +### Fixed +- Corrected the log file path and aligned documentation with the actual platform log locations. + +## [1.1.0] - 2025-11-21 + +### Added +- Integrated **shadcn/ui** component library for a consistent and modern design system. +- Added new UI components: `Button`, `Card`, `Input`, `Label`, `Badge`, `Alert`. +- Added **Heroicons** to the sidebar navigation. +- Reintroduced **Indigo** as the primary brand color across the application (buttons, active states, focus rings). + +### Changed +- **UI Overhaul**: + - Redesigned **Sidebar** with a dark theme (`slate-900`) and improved typography. + - Updated **Dashboard** to use card-based layout for status and stats. + - Refactored **Studies** list to use clean, card-based items with badge status indicators. + - Improved **Settings** page layout: API Key and Storage Directory now span full width for better readability. + - Modernized **Logs** and **Tools** pages with consistent styling. +- **Theming**: + - Implemented CSS variables for theme tokens (background, foreground, primary, muted, etc.). + - Switched global background to a clean slate tone. +- **Backend**: + - Updated Rust version to `1.80`. + - Fixed various clippy lints and warnings for better code quality. diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 0000000..572d2f7 --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,137 @@ +# Bounce - AI Coding Agent Instructions + +## Project Overview + +**Bounce** is a cross-platform desktop application that serves as a DICOM C-STORE receiver, securely forwarding medical imaging files to the Aurabox cloud platform. It bridges on-premises medical equipment with cloud-based DICOM storage. + +**Primary Technologies:** +- **Frontend**: Next.js 15 (React 18) with TypeScript, TailwindCSS, Redux Toolkit +- **Backend**: Rust with Tauri 2.x framework +- **Database**: SQLite for study tracking and metadata persistence +- **Protocols**: DICOM C-STORE/C-FIND (medical imaging), TUS (resumable uploads) +- **Desktop**: Tauri for cross-platform native app (Windows, macOS, Linux) + +The application runs behind healthcare firewalls, receives DICOM files from PACS/modalities, extracts metadata, compresses studies, and uploads them to Aurabox over HTTPS with resumable upload capability. + +## Build & Run Commands + +**Development:** +- `make dev` or `npm run tauri:dev` — Start full Tauri app with hot-reload (backend + frontend) +- `make dev-frontend` or `npm run dev` — Next.js frontend only (port 3000) +- `make dev-backend` — Run Rust backend without Tauri window manager + +**Building:** +- `make build` — Build Next.js frontend then full Tauri release app +- `make build-frontend` or `npm run build` — Next.js static export only +- `npm run tauri:build` — Full Tauri application bundle for release + +**Testing:** +- `make test` or `make test-rust` — Run Rust unit tests (cd src-tauri && cargo test) +- `make dicom-echo` — Test DICOM connectivity with C-ECHO (requires DCMTK) +- `make dicom-send FILE=path.dcm` — Send test DICOM file via C-STORE + +**Quality:** +- `make lint` — Run ESLint (frontend) and Clippy (backend) +- `make fmt` — Format Rust code with cargo fmt +- `make check` — Run linters + format check + tests (pre-commit) + +**Utilities:** +- `make install` — Install npm dependencies and fetch Rust crates +- `make version V=x.y.z` — Update version across package.json, Cargo.toml, tauri.conf.json +- `make clean` — Remove build artifacts (frontend + backend) + +## Architecture Overview + +**Directory Structure:** + +- **`app/`** — Next.js 15 frontend application (App Router) + - `components/` — React UI components (Sidebar, PageLayout, CurrentStatus, Settings, etc.) + - `lib/` — Frontend utilities, Redux store, Tauri hooks, type definitions, helpers + - `page.tsx`, `layout.tsx` — Root page and app layout + - `studies/`, `logs/`, `settings/`, `tools/` — Feature pages + +- **`src-tauri/src/`** — Rust backend (Tauri application) + - `main.rs` — Entry point, initializes Tauri runtime, logging, database + - `receiver/` — DICOM C-STORE SCP implementation (receives files from PACS) + - `transmitter/` — TUS protocol upload logic (sends compressed studies to Aurabox) + - `query/` — DICOM C-FIND SCP implementation (queries remote PACS) + - `aura/` — Aurabox API client (authentication, upload coordination) + - `db/` — SQLite database abstraction for study tracking + - `store/` — Settings persistence using Tauri plugin-store + - `logger.rs` — Structured logging with tracing crate, optional remote logging + +- **`docs/`** — Technical documentation (API, architecture, configuration, development guides) + +- **`docker/`** — Orthanc PACS test configuration (docker-compose.yml for local testing) + +**Key Data Flows:** + +1. **DICOM Reception**: PACS → C-STORE receiver → SQLite + disk storage → metadata extraction +2. **Study Aggregation**: Debouncing logic groups instances into studies by Study UID +3. **Upload Pipeline**: Study compression → TUS chunked upload → Aurabox cloud platform +4. **IPC Communication**: Rust backend emits Tauri events → EventHandler → Redux store → React UI updates +5. **User Configuration**: Settings UI → Tauri store plugin → Rust backend reloads config + +## Coding Conventions + +**Rust (Backend):** +- **Naming**: `snake_case` for functions/variables, `PascalCase` for structs/enums, `SCREAMING_SNAKE_CASE` for constants +- **Error Handling**: All functions return `Result`. Never `unwrap()` in production code without justification; prefer `?` operator or explicit error context +- **Async**: Use `tokio` runtime with `async`/`await`. All async functions must have explicit signatures +- **Types**: Explicit type annotations for public APIs. Use strong typing with `serde` for serialization +- **Formatting**: `cargo fmt` is canonical. Run `make fmt-check` before committing + +**TypeScript (Frontend):** +- **Naming**: `PascalCase` for components/types, `camelCase` for functions/variables +- **Components**: Prefer function components over classes. Use `.tsx` extension for React components +- **Styling**: TailwindCSS utility classes only. No inline styles or CSS modules +- **State**: Redux Toolkit for global state. Tauri hooks (`useStore`, `useTauriEvent`) for backend integration +- **Error Handling**: Throw errors with descriptive messages. Catch at component boundaries with error states +- **Types**: Explicit return types for complex functions. Use TypeScript strict mode + +**General Patterns:** +- **Version Sync**: `package.json`, `Cargo.toml`, `tauri.conf.json` must stay synchronized. Use `make version V=x.y.z` +- **DICOM UIDs**: Case-sensitive strings. Compare exactly as received without normalization +- **File Paths**: Use Tauri path APIs (`app.resolveResource()`, etc.) for cross-platform compatibility +- **Logging**: Use structured logging in Rust (`tracing` crate). Include context (study UID, AE title, etc.) + +## Agent Guidance + +**Always Do:** +- Run `make test` before committing Rust code changes +- Validate DICOM protocol changes against Orthanc test PACS (`docker-compose up`) or DCMTK tools +- Update version atomically with `make version V=x.y.z` when incrementing releases +- Use Tauri's async `invoke()` for all frontend-to-backend calls (returns Promises) +- Check error logs in `~/.aurabox/bounce/logs/` when debugging backend issues +- Add structured logging with context for new backend features +- Batch SQLite transactions to avoid blocking the DICOM receiver thread +- Handle TUS upload resumption from last chunk offset (not restart) + +**Never Do:** +- Commit secrets, API keys, or credentials (even in examples) +- Use `unwrap()` in production Rust code without explicit safety justification +- Hardcode file paths with platform-specific separators (use Tauri path APIs) +- Assume PACS implementations strictly follow DICOM standard (handle malformed data gracefully) +- Break version synchronization across `package.json`, `Cargo.toml`, `tauri.conf.json` +- Delete database files or study directories without user confirmation +- Modify the proprietary license (UNLICENSED) without authorization + +**Ask Before:** +- Deleting files from disk (studies, database, logs) +- Changing default DICOM network parameters (port 104, AE title "BOUNCE") +- Adding new npm or cargo dependencies (check license compatibility with proprietary project) +- Modifying database schema (requires migration strategy) +- Changing TUS upload chunking behavior (affects resumability) +- Altering DICOM metadata extraction logic (medical imaging compliance) + +**Testing Protocol:** +- DICOM features: Test with Orthanc PACS (port 4242) or `storescu`/`echoscu` from DCMTK +- Frontend changes: Verify in all three platforms (Windows, macOS, Linux) if possible +- Upload pipeline: Test with large studies (>100MB) to validate resumability +- Error scenarios: Test network failures, invalid DICOM files, API key rejection, disk full + +**Documentation Updates:** +- Update `docs/API.md` when adding/modifying Tauri commands or events +- Update `CHANGELOG.md` with user-facing changes (features, fixes, breaking changes) +- Add inline code comments for complex DICOM protocol handling or medical imaging logic +- Update `docs/CONFIGURATION.md` when adding new user settings \ No newline at end of file diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000..f380c8f --- /dev/null +++ b/LICENSE @@ -0,0 +1,12 @@ +BOUNCE END USER LICENSE AGREEMENT + +Copyright (c) 2025 Aurabox Pty Ltd + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to download, install, and use the Software for personal or internal business purposes. + +RESTRICTIONS: +1. You may NOT fork, modify, reverse engineer, or create derivative works of the Software. +2. You may NOT sell, resell, rent, lease, or distribute the Software for a fee. +3. You may NOT redistribute the source code of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/Makefile b/Makefile new file mode 100644 index 0000000..23d3617 --- /dev/null +++ b/Makefile @@ -0,0 +1,178 @@ +.PHONY: dev dev-frontend dev-backend build build-frontend build-release \ + test test-rust lint lint-frontend lint-rust fmt fmt-rust \ + clean clean-frontend clean-rust clean-all \ + install check version dicom-echo dicom-send help + +# Default target +.DEFAULT_GOAL := help + +# ------------------------------------------------------------------- +# Development +# ------------------------------------------------------------------- + +## Start the full Tauri application with hot-reload (frontend + backend) +dev: + npm run tauri:dev + +## Start the Next.js frontend dev server only +dev-frontend: + npm run dev + +## Run the Rust backend only (without Tauri window manager) +dev-backend: + cargo run --manifest-path=src-tauri/Cargo.toml + +# ------------------------------------------------------------------- +# Building +# ------------------------------------------------------------------- + +## Build the Next.js frontend to static files +build-frontend: + npm run build + +## Build the complete Tauri application for release +build-release: + npm run tauri:build + +## Alias: build frontend then full Tauri app +build: build-frontend build-release + +# ------------------------------------------------------------------- +# Testing +# ------------------------------------------------------------------- + +## Run Rust unit tests +test-rust: + cd src-tauri && cargo test + +## Run all tests (currently Rust only) +test: test-rust + +# ------------------------------------------------------------------- +# Linting & Formatting +# ------------------------------------------------------------------- + +## Lint the Next.js frontend (ESLint) +lint-frontend: + npm run lint + +## Lint the Rust backend (Clippy) +lint-rust: + cd src-tauri && cargo clippy -- -D warnings + +## Run all linters +lint: lint-frontend lint-rust + +## Format Rust code +fmt-rust: + cd src-tauri && cargo fmt + +## Check Rust formatting without modifying files +fmt-check: + cd src-tauri && cargo fmt -- --check + +## Format all code +fmt: fmt-rust + +# ------------------------------------------------------------------- +# Cleaning +# ------------------------------------------------------------------- + +## Remove Next.js build output and cache +clean-frontend: + rm -rf .next out + +## Remove Rust build artifacts +clean-rust: + cd src-tauri && cargo clean + +## Remove all build artifacts (frontend + backend + node_modules) +clean-all: clean-frontend clean-rust + rm -rf node_modules + +## Alias for cleaning frontend and backend (keeps node_modules) +clean: clean-frontend clean-rust + +# ------------------------------------------------------------------- +# Setup & Utilities +# ------------------------------------------------------------------- + +## Install all dependencies (Node + Rust) +install: + npm install + cd src-tauri && cargo fetch + +## Run pre-commit checks (lint + format check + tests) +check: lint fmt-check test + +## Update version across package.json, Cargo.toml, and tauri.conf.json +## Usage: make version V=1.2.3 +version: +ifndef V + $(error Usage: make version V=1.2.3) +endif + ./update-version.sh $(V) + +# ------------------------------------------------------------------- +# DICOM Testing +# ------------------------------------------------------------------- + +## Test DICOM connectivity with C-ECHO (requires DCMTK) +## Usage: make dicom-echo [PORT=104] [AE=BOUNCE] +dicom-echo: + echoscu -v -aec $(or $(AE),BOUNCE) localhost $(or $(PORT),104) + +## Send a DICOM file via C-STORE (requires DCMTK) +## Usage: make dicom-send FILE=/path/to/file.dcm [PORT=104] [AE=BOUNCE] +dicom-send: +ifndef FILE + $(error Usage: make dicom-send FILE=/path/to/file.dcm [PORT=104] [AE=BOUNCE]) +endif + storescu -v -aec $(or $(AE),BOUNCE) localhost $(or $(PORT),104) $(FILE) + +# ------------------------------------------------------------------- +# Help +# ------------------------------------------------------------------- + +## Show this help message +help: + @echo "Bounce - DICOM C-STORE Receiver" + @echo "" + @echo "Usage: make " + @echo "" + @echo "Development:" + @echo " dev Start full Tauri app with hot-reload" + @echo " dev-frontend Start Next.js frontend dev server only" + @echo " dev-backend Run Rust backend only" + @echo "" + @echo "Building:" + @echo " build Build frontend and full Tauri app" + @echo " build-frontend Build Next.js frontend only" + @echo " build-release Build complete Tauri app for release" + @echo "" + @echo "Testing:" + @echo " test Run all tests" + @echo " test-rust Run Rust unit tests" + @echo "" + @echo "Linting & Formatting:" + @echo " lint Run all linters" + @echo " lint-frontend Lint frontend (ESLint)" + @echo " lint-rust Lint backend (Clippy)" + @echo " fmt Format all code" + @echo " fmt-rust Format Rust code" + @echo " fmt-check Check Rust formatting" + @echo "" + @echo "Cleaning:" + @echo " clean Remove build artifacts" + @echo " clean-frontend Remove Next.js cache and output" + @echo " clean-rust Remove Rust build artifacts" + @echo " clean-all Remove everything (including node_modules)" + @echo "" + @echo "Utilities:" + @echo " install Install all dependencies" + @echo " check Run lint + format check + tests" + @echo " version V=x.y.z Update version across all configs" + @echo "" + @echo "DICOM Testing (requires DCMTK):" + @echo " dicom-echo Test connectivity" + @echo " dicom-send FILE=path.dcm Send a DICOM file" diff --git a/README.md b/README.md index 6fed7b8..225fb46 100644 --- a/README.md +++ b/README.md @@ -1,75 +1,240 @@ # Bounce -**Bounce** is a lightweight DICOM C-STORE receiver that forwards received files to Aurabox. It is designed to run behind a healthcare provider's firewall and securely forward medical imaging to [Aurabox](https://aurabox.cloud). +
-## Features +**A lightweight DICOM C-STORE receiver that securely forwards medical imaging to Aurabox** -- Accepts inbound DICOM C-STORE requests -- Forwards received DICOM files to the Aurabox backend over HTTPS -- Logs DICOM metadata as part of the forwarding process -- Designed for secure, internal deployments -- Minimal configuration required +[![License: BOUNCE EULA](https://img.shields.io/badge/License-Proprietary-red.svg)](LICENSE) +[![Platform: Windows | macOS | Linux](https://img.shields.io/badge/Platform-Windows%20%7C%20macOS%20%7C%20Linux-blue.svg)]() -## Installation +
-Visit the [Releases page](https://github.com/aurabx/bounce/releases) and download the latest `.tar.gz` or binary appropriate for your platform. +--- + +## 📖 Overview + +**Bounce** is a cross-platform desktop application designed to bridge the gap between on-premises medical imaging equipment and cloud-based DICOM storage. Built with Tauri and Rust, Bounce runs behind healthcare providers' firewalls to receive DICOM files via the C-STORE protocol and securely forward them to [Aurabox](https://aurabox.cloud) over HTTPS. + +### Key Capabilities + +- **DICOM C-STORE Receiver**: Accepts inbound DICOM C-STORE requests from PACS, modalities, and other DICOM sources +- **Secure Cloud Upload**: Forwards received DICOM files to Aurabox backend using TUS protocol over HTTPS +- **Metadata Extraction**: Automatically extracts and logs DICOM metadata (Study UID, Series UID, Patient info, etc.) +- **Local Storage Management**: Temporarily stores DICOM files locally with configurable retention policies +- **Study Aggregation**: Intelligently groups DICOM instances into studies with debouncing logic +- **Compression**: Automatically compresses studies into ZIP archives before upload +- **Desktop UI**: Modern web-based interface for monitoring, configuration, and study management +- **System Tray Integration**: Runs in the background with system tray icon for quick access +- **SQLite Database**: Tracks study status and transmission history +- **Logging**: Comprehensive logging with optional remote logging to Better Stack + +--- + +## 🚀 Quick Start -## Set up +### Installation -Follow the instructions at https://docs.aurabox.cloud/applications/bounce/ to complete the install. +1. Visit the [Releases page](https://github.com/aurabx/bounce/releases) +2. Download the latest installer for your platform: + - **Windows**: `.msi` or `.exe` installer + - **macOS**: `.dmg` disk image + - **Linux**: `.deb`, `.AppImage`, or `.tar.gz` +3. Run the installer and follow the setup wizard +### Configuration -Here is the raw Markdown version of the `README.md`: +1. Launch Bounce application +2. Navigate to Settings +3. Configure the following: + - **API Key**: Your Aurabox API key (required) + - **AE Title**: Application Entity title for DICOM (default: `BOUNCE`) + - **Port**: DICOM receiver port (default: `104`) + - **IP Address**: Network interface to bind to (default: `0.0.0.0`) + - **Storage Location**: Directory for temporary DICOM file storage + - **Delete After Send**: Automatically delete files after successful upload + +4. Click "Start Server" to begin receiving DICOM files + +For detailed setup instructions, visit: https://docs.aurabox.cloud/applications/bounce/ + +--- ## 🧪 Testing -To simulate a C-STORE transfer, use a tool like `storescu` from DCMTK: +### Send Test DICOM Files + +Use `storescu` from [DCMTK](https://dicom.offis.de/dcmtk.php.en) to send test DICOM files: ```bash +# Send a single DICOM file storescu -aec BOUNCE 127.0.0.1 104 /path/to/test.dcm + +# Send multiple files +storescu -aec BOUNCE 127.0.0.1 104 /path/to/dicom/folder/*.dcm + +# Send with verbose output +storescu -v -aec BOUNCE 127.0.0.1 104 /path/to/test.dcm +``` + +### Verify C-ECHO (Connection Test) + +```bash +echoscu -aec BOUNCE 127.0.0.1 104 ``` --- -## 🛠 Developer Information +## 🛠 Development ### Prerequisites -* Rust (latest stable) -* Cargo +- **Node.js** 18+ and npm +- **Rust** (latest stable) and Cargo +- **Tauri CLI** (installed via npm) +- **Platform-specific dependencies**: + - **Linux**: `libssl-dev`, `libsqlite3-dev`, `webkit2gtk-4.1-dev` + - **macOS**: Xcode Command Line Tools + - **Windows**: Visual Studio Build Tools -### Build +### Setup Development Environment -1. Install and run ```bash +# Clone the repository +git clone https://github.com/aurabx/bounce.git +cd bounce + +# Install Node dependencies npm install -npx tauri dev + +# Run in development mode +npm run tauri:dev ``` -### Build the UI +The application will launch with hot-reload enabled for both the frontend and backend. + +### Project Structure -1. Build the UI -```bash -npm run build +``` +bounce/ +├── app/ # Next.js frontend application +│ ├── components/ # React components +│ ├── lib/ # Frontend utilities and helpers +│ ├── logs/ # Logs page +│ ├── settings/ # Settings page +│ ├── studies/ # Studies management page +│ └── tools/ # Tools page +├── src-tauri/ # Rust backend +│ ├── src/ +│ │ ├── aura/ # Aurabox API client +│ │ ├── db/ # SQLite database layer +│ │ ├── receiver/ # DICOM C-STORE receiver +│ │ ├── transmitter/ # Upload/transmission logic +│ │ ├── store/ # Configuration management +│ │ ├── lib/ # Utility modules +│ │ └── main.rs # Application entry point +│ ├── Cargo.toml # Rust dependencies +│ └── tauri.conf.json # Tauri configuration +├── package.json # Node.js dependencies and scripts +└── README.md ``` -2. Run Tauri locally +### Available Commands ```bash -next dev +# Development +npm run dev # Run Next.js dev server only +npm run tauri:dev # Run full Tauri app in dev mode + +# Building +npm run build # Build Next.js frontend +npm run tauri:build # Build Tauri application for release + +# Linting +npm run lint # Run ESLint + +# Version Management +./update-version.sh 1.2.3 # Update version across all config files ``` -3. Or, build the app +### Building for Release ```bash -npm install -npm run tauri dev +# Update version number +./update-version.sh 1.2.3 + +# Build release binaries +npm run tauri:build ``` -### Build for release +Built applications will be in `src-tauri/target/release/bundle/` + +--- -1. Update the version number in package.json, Cargo.toml and tauri.conf.json, e.g. +## 📚 Documentation -```bash -./update-version.sh 1.0.0 -``` +- **[Architecture Overview](./docs/ARCHITECTURE.md)** - System design and component interaction +- **[Development Guide](./docs/DEVELOPMENT.md)** - Detailed development setup and guidelines +- **[Configuration Guide](./docs/CONFIGURATION.md)** - Configuration options and settings +- **[API Reference](./docs/API.md)** - Tauri commands and API documentation +- **[User Documentation](https://docs.aurabox.cloud/applications/bounce/)** - End-user guide + +--- + +## 🔐 Security + +Bounce is designed for secure deployments in healthcare environments: + +- All uploads to Aurabox use HTTPS with TLS 1.2+ +- API key authentication for all cloud communications +- Local storage uses filesystem permissions for access control +- No PHI (Protected Health Information) is logged in plain text +- Optional automatic deletion of files after successful upload + +--- + +## 🐛 Troubleshooting + +### DICOM Server Won't Start + +- Check if port 104 is available (may require admin/sudo privileges) +- Verify firewall rules allow inbound connections on configured port +- Check logs in the application's Logs tab + +### Files Not Uploading + +- Verify API key is correctly configured +- Check internet connectivity to Aurabox +- Review upload status in Studies tab +- Check logs for error messages + +### Application Won't Launch + +- Ensure all dependencies are installed +- Check system compatibility (Windows 10+, macOS 10.13+, recent Linux) +- Try running from terminal to see error messages + +--- + +## 📝 License + +See [BOUNCE EULA](LICENSE). + +--- + +## 🙋 Support + +For issues, questions, or feature requests: + +- **Email**: support@aurabox.cloud +- **Documentation**: https://docs.aurabox.cloud +- **GitHub Issues**: https://github.com/aurabx/bounce/issues (for bug reports) + +--- + +## 🙏 Acknowledgments + +Built with: +- [Tauri](https://tauri.app/) - Desktop application framework +- [Next.js](https://nextjs.org/) - React frontend framework +- [Rust DICOM](https://github.com/Enet4/dicom-rs) - DICOM protocol implementation +- [TUS Protocol](https://tus.io/) - Resumable file upload protocol diff --git a/app/components/CurrentStatus.tsx b/app/components/CurrentStatus.tsx index 7b48023..80de9ce 100644 --- a/app/components/CurrentStatus.tsx +++ b/app/components/CurrentStatus.tsx @@ -1,27 +1,25 @@ "use client" -import {classNames} from "@/app/lib/helpers"; -import {useAppSelector} from "@/app/lib/hook"; +import { cn } from "@/app/lib/utils"; +import { useAppSelector } from "@/app/lib/hook"; +import { Button } from "@/app/components/ui/button"; + export default function CurrentStatus() { const running = useAppSelector((state) => state.main.running) - return
- + {running ? "Running" : "Stopped"} +
; -} \ No newline at end of file +} diff --git a/app/components/EventHandler.tsx b/app/components/EventHandler.tsx index 5e816a5..779952f 100644 --- a/app/components/EventHandler.tsx +++ b/app/components/EventHandler.tsx @@ -1,10 +1,11 @@ 'use client' import { useEffect} from 'react' +import { attachLogger, LogLevel } from '@tauri-apps/plugin-log' import {logMessage, setRunning, setCurrentStudies, setRunningDetail, setErrors, setError} from '../lib/store' import {useAppDispatch} from "@/app/lib/hook"; import {listen} from "@tauri-apps/api/event"; -import {CurrentStudies, Study} from "@/app/lib/types"; +import {CurrentStudies} from "@/app/lib/types"; import {invoke} from "@tauri-apps/api/core"; @@ -12,51 +13,94 @@ export default function EventHandler({ children, }: { children: React.ReactNode const dispatch = useAppDispatch() - const bindEvents = async () => { - await listen("log", (event) => { - let action = logMessage(event.payload as string); - dispatch(action) - }) - - await listen("running", (event) => { - let action = setRunning(event.payload as boolean); - dispatch(action) - }) - - await listen("running-details", (event) => { - let action = setRunningDetail(event.payload as string); - dispatch(action) - }) - - await listen('current-studies', (event) => { - let action = setCurrentStudies(event.payload as CurrentStudies); - dispatch(action) - }); - - await listen('error', (event) => { - let action = setError(event.payload as string); - dispatch(action) - - setTimeout(() => { - let action = setErrors([]); + const normalizeLogLevel = (level: LogLevel): 'trace' | 'debug' | 'info' | 'warn' | 'error' => { + switch (level) { + case LogLevel.Trace: + return 'trace' + case LogLevel.Debug: + return 'debug' + case LogLevel.Warn: + return 'warn' + case LogLevel.Error: + return 'error' + case LogLevel.Info: + default: + return 'info' + } + } + + const createLogEntry = ( + message: string, + source: 'event' | 'system', + level: 'trace' | 'debug' | 'info' | 'warn' | 'error' = 'info', + ) => ({ + id: `${source}-${Date.now()}-${Math.random().toString(36).slice(2, 8)}`, + level, + message, + source, + timestamp: new Date().toISOString(), + }) + + useEffect(() => { + let cleanup: (() => void) | undefined + + const bindEvents = async () => { + // Manual emit("log", ...) calls from Rust (server start/stop, association events) + const unlistenLogEvent = await listen("log", (event) => { + dispatch(logMessage(createLogEntry(event.payload as string, 'event', 'info'))) + }) + + // tauri_plugin_log webview target — receives all log::info!/log::error! records + const detachLogger = await attachLogger((entry) => { + dispatch(logMessage(createLogEntry(entry.message, 'system', normalizeLogLevel(entry.level)))) + }) + + const unlistenRunning = await listen("running", (event) => { + let action = setRunning(event.payload as boolean); + dispatch(action) + }) + + const unlistenRunningDetails = await listen("running-details", (event) => { + let action = setRunningDetail(event.payload as string); dispatch(action) - }, 5000) - }); + }) + const unlistenCurrentStudies = await listen('current-studies', (event) => { + let action = setCurrentStudies(event.payload as CurrentStudies); + dispatch(action) + }); - } + const unlistenError = await listen('error', (event) => { + let action = setError(event.payload as string); + dispatch(action) - const initialEvents = async () => { - await invoke('current_studies'); - } + setTimeout(() => { + let action = setErrors([]); + dispatch(action) + }, 5000) + }); + + return () => { + unlistenLogEvent() + detachLogger() + unlistenRunning() + unlistenRunningDetails() + unlistenCurrentStudies() + unlistenError() + } + } - useEffect(() => { bindEvents() - .then(() => { - initialEvents().catch(console.error) + .then(async (dispose) => { + cleanup = dispose + await invoke('current_studies').catch(console.error) }) .catch(console.error); + + return () => { + cleanup?.() + } }, []) return <>{children} -} \ No newline at end of file +} diff --git a/app/components/Fields/SelectInput.tsx b/app/components/Fields/SelectInput.tsx index 21f2b77..76f3e36 100644 --- a/app/components/Fields/SelectInput.tsx +++ b/app/components/Fields/SelectInput.tsx @@ -1,23 +1,27 @@ +import { Label } from "@/app/components/ui/label" +import { cn } from "@/app/lib/utils" + export default function SelectInput(props: { config: { options: { [key: string]: string }; label: string; key: string, help?: string }, settings?: { [p: string]: any } | undefined, value: any, onChange: (e: any) => void }) { - return
-