Signed evidence for tool-using AI.
Assay is an evidence compiler for AI execution. It turns AI runs into signed proof packs another team can verify offline — no vendor server required. Its job is narrow: make post-run tampering visible, preserve honest failures, and let verification happen without trusting the operator.
Agents talk via MCP. Agents prove via Assay.
```
pip install assay-ai
assay try-mcp
```

Three MCP tool calls. Receipted. Signed. Verified. 30 seconds. No API key.
Verify a packet in your browser — no install, no account. Drag in a proof pack, see the result.
See the before/after specimen — when a decision is disputed, the difference between reconstruction and verification.
| Exit | State | Meaning |
|---|---|---|
| 0 | pass | Authentic evidence, standards met |
| 1 | honest fail | Authentic evidence, standards not met |
| 2 | tampered | Evidence altered after signing |
A signed failure is stronger evidence than a vague pass.
- Tamper detection — every byte of evidence is fingerprinted and signed; post-run edits are visible
- Honest failure retention — a failed run stays failed; evidence cannot be quietly upgraded to a pass
- Offline verification — another team verifies without calling your server or trusting your logs
```
proof_pack/
  receipt_pack.jsonl    # One receipt per tool call, model invocation, or policy check
  pack_manifest.json    # SHA-256 hashes of every file + Ed25519 signature
  pack_signature.sig    # Raw signature bytes
  verify_report.json    # Machine-readable verification result
  verify_transcript.md  # Human-readable transcript
```
Change one byte in any file. Verification fails.
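The core of the tamper check can be sketched in a few lines of Python. This is an illustration of the mechanism, not Assay's implementation — real packs also carry an Ed25519 signature over the manifest:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # SHA-256 fingerprint, as recorded in the pack manifest
    return hashlib.sha256(data).hexdigest()

receipts = b'{"event": "model.invoked", "model": "gpt-4"}\n'
manifest = {"receipt_pack.jsonl": fingerprint(receipts)}

# A post-run edit: one byte changed ("gpt-4" -> "gpt-5")
tampered = receipts.replace(b"gpt-4", b"gpt-5")

assert fingerprint(receipts) == manifest["receipt_pack.jsonl"]  # untouched pack verifies
assert fingerprint(tampered) != manifest["receipt_pack.jsonl"]  # one-byte edit is visible
```

Because the manifest pins every file's digest, there is no edit small enough to slip through.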
```
assay try              # General proof pack demo (15 seconds)
assay demo-challenge   # Good pack + tampered pack side by side
assay start            # Guided setup for your project
assay scan . --report  # Find uninstrumented LLM call sites
assay patch .          # Auto-insert SDK integration
assay run -- python my.py          # Run + build signed proof pack
assay verify-pack ./proof_pack_*/  # Verify offline
```

Boundary: Assay proves the evidence artifact has not been altered after signing. It does not prove the signer was an authorized signer for this evidence — at T0, structural cryptographic validity is confirmed, not signer authority. Stronger signer-trust guarantees require a higher trust tier with externally controlled keys and/or external anchors. It does not prove every upstream input was authentic. Trust tiers · What Assay does today.
## Why this exists
We scanned 30 AI projects and found 231 high-confidence LLM call sites. None had Assay-compatible tamper-evident instrumentation. (Many have logging — this measures cryptographic evidence specifically.) Full results.
Adversarial testing: 16 deterministic tampering attacks, all caught, zero false passes. Full report.
Self-scan note: Running `assay score .` on this repo returns a low score. That is expected — Assay instruments AI workflows, not itself. This repo is the instrument, not the subject.
Why now: EU AI Act Article 12 requires automatic logging for high-risk AI systems; Article 19 requires providers to retain automatically generated logs for at least 6 months. High-risk obligations apply from 2 Aug 2026 (Annex III) and 2 Aug 2027 (regulated products). SOC 2 CC7.2 requires monitoring of system components and analysis of anomalies as security events. "We have logs on our server" is not independently verifiable evidence. Assay produces evidence that is. See compliance citations for exact references.
For the ecosystem map, see docs/REPO_MAP.md.
## See it — then understand it (demo-challenge, demo-incident, honest failure)
assay try (above) gives you the 15-second version. For the full specimen
with file output and manual verification, use the challenge demo:
```
assay demo-challenge   # creates challenge_pack/ with good + tampered packs
```

Two packs, one byte changed ("gpt-4" -> "gpt-5" in the receipts). Here's what happens (pack IDs and timestamps will differ on your machine):
```
$ assay verify-pack challenge_pack/good/
VERIFICATION PASSED
  Pack ID: pack_20260222_ca2bb665
  Integrity: PASS
  Claims: PASS
  Receipts: 3
  Signature: Ed25519 valid
Exit code: 0

$ assay verify-pack challenge_pack/tampered/
VERIFICATION FAILED
  Pack ID: pack_20260222_ca2bb665
  Integrity: FAIL
  Error: Hash mismatch for receipt_pack.jsonl
Exit code: 2
```
One byte changed. Verification fails. No server access needed. Verification is pure math — no accounts, no infrastructure.
Now try the policy violation demo:
```
assay demo-incident   # two-act scenario: honest PASS vs honest FAIL

Act 1: Agent uses gpt-4 with guardian check
  Integrity: PASS   Claims: PASS   Exit code: 0

Act 2: Someone swaps model to gpt-3.5-turbo, removes guardian
  Integrity: PASS   Claims: FAIL   Exit code: 1
```
Act 2 is an honest failure -- authentic evidence proving the run violated its declared standards. The evidence is real. The failure is real. Nobody can edit the history. Exit code 1.
Honest failure is a feature, not an embarrassment. Exit 1 is audit gold: a control failed, the failure is detectable and retained, and the evidence is authentic. A signed failure is stronger evidence than a vague pass. Auditors, regulators, and buyers trust systems that can show what went wrong -- not systems that only ever claim success.
Assay separates two questions on purpose:
- Integrity: "Were these bytes tampered with after creation?" (signatures, hashes, required files)
- Claims: "Does this evidence satisfy our declared governance checks?" (receipt types, counts, field values)
| Integrity | Claims | Exit | Meaning |
|---|---|---|---|
| PASS | PASS | 0 | Evidence checks out, declared standards pass |
| PASS | FAIL | 1 | Honest failure: authentic evidence of a standards violation |
| FAIL | -- | 2 | Tampered evidence |
| -- | -- | 3 | Bad input (missing files, invalid arguments) |
The split is the point. Systems that can prove they failed honestly are more trustworthy than systems that always claim to pass.
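The split reduces to a small decision function. This is a hypothetical sketch of the exit-code mapping above, not Assay's code (exit 3, bad input, is raised before verification runs and is out of scope here):

```python
def settle(integrity_ok: bool, claims_ok: bool) -> int:
    """Map the two independent questions to the documented exit codes."""
    if not integrity_ok:
        return 2               # tampered: claims are not even consulted
    return 0 if claims_ok else 1  # pass, or honest failure

assert settle(True, True) == 0    # evidence checks out, standards pass
assert settle(True, False) == 1   # honest failure: authentic evidence of a violation
assert settle(False, True) == 2   # tampered, regardless of what the claims say
```

The ordering matters: integrity is checked first, so a tampered pack can never be reported as an honest failure.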
With real calls: assay scan . finds your actual OpenAI / Anthropic / Gemini / LiteLLM / LangChain call sites. assay patch . instruments them. Every real LLM call emits a signed receipt. The demos above use synthetic data so you can see verification without configuring anything.
Installing Assay gives you the CLI, receipt store, and proof-pack builder. It does not automatically record your app.
Receipts are emitted only when your runtime is instrumented:
- `assay patch .` inserts the right Assay integration for supported SDKs
- `patch()` wrappers emit receipts when model calls happen
- `AssayCallbackHandler()` does the same for LangChain callback flows
- `emit_receipt(...)` lets you record events manually in any stack
`assay run -- <your command>` then does three things:

- creates a trace id
- runs your app with `ASSAY_TRACE_ID` in the environment
- packages any emitted receipts into `proof_pack_<trace_id>/`
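Those three steps can be sketched as follows — assumed behavior of the wrapper, shown with a stand-in subprocess rather than a real app:

```python
import os
import subprocess
import sys
import uuid

trace_id = uuid.uuid4().hex[:12]                 # 1. create a trace id
env = dict(os.environ, ASSAY_TRACE_ID=trace_id)  # 2. expose it to the wrapped app

# Stand-in for your app: prove the trace id is visible inside the subprocess
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['ASSAY_TRACE_ID'])"],
    env=env, capture_output=True, text=True,
)
assert result.stdout.strip() == trace_id
# 3. receipts emitted under this trace id are then packaged into proof_pack_<trace_id>/
```

The environment variable is what ties receipts emitted by instrumented SDK calls back to the run that produced them.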
The result is a signed, offline-verifiable artifact:
```
app execution
  -> instrumented SDK or emit_receipt(...)
  -> receipts written to ~/.assay/...
  -> assay run packages them into proof_pack_<trace_id>/
  -> assay verify-pack checks the artifact offline
```
## What becomes harder to fake
Assay is not a truth oracle. It is an evidence-hardening layer.
| If someone tries to... | Without Assay | With Assay |
|---|---|---|
| Edit evidence after a run | Hard to notice | Verification fails |
| Drop or weaken locked checks | Easy to hide | Lock mismatch exposes it |
| Omit covered call sites | Easy to hand-wave | Completeness checks catch it |
| Hand buyer internal logs, ask for trust | Buyer must trust the operator | Buyer verifies offline |
| Fabricate a complete run from scratch | Possible | Still possible at base tier; stronger deployment raises the cost |
Why there is no quiet edit. Every file in a proof pack is fingerprinted. The fingerprints are recorded in a manifest. The manifest is digitally signed. Change a file -- the fingerprint won't match. Fix the manifest to cover it -- the signature breaks. Re-sign the manifest -- the signer identity changes. Every path to tampering leaves a visible trace.
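That chain can be sketched with a keyed MAC standing in for the Ed25519 signature. Everything here is illustrative — the `sign` helper and the keys are hypothetical, not Assay's API:

```python
import hashlib
import hmac
import json

def sign(manifest: dict, key: bytes) -> str:
    # Stand-in for Ed25519: any change to the manifest bytes or the key changes the output
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

operator_key = b"operator-signing-key"
manifest = {"receipt_pack.jsonl": hashlib.sha256(b"original receipts").hexdigest()}
signature = sign(manifest, operator_key)

# Fix the manifest to cover an edited file -> the original signature breaks
edited = {"receipt_pack.jsonl": hashlib.sha256(b"edited receipts").hexdigest()}
assert sign(edited, operator_key) != signature

# Re-sign with a different key -> the signer identity changes
assert sign(edited, b"attacker-key") != sign(edited, operator_key)
```

Each escape route (edit file, patch manifest, re-sign) changes something a verifier can observe.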
Assay proves the evidence artifact has not been quietly changed after the fact. It does not, by itself, prove every upstream component was honest.
Deployment ladder -- start at Base, strengthen as your trust requirements grow:
- Base -- self-signed artifact, offline-verifiable, tamper-evident
- Hardened -- CI-held signing key + branch protection (separates signer from developer)
- Anchored -- transparency ledger + external timestamping (RFC 3161)
Completeness is enforced relative to call sites enumerated by the scanner and/or declared by policy. Undetected call sites are a known residual risk, reduced via multi-detector scanning and CI gating.
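At heart, the completeness check is a set difference. This is a sketch of the semantics with hypothetical call-site identifiers, not the scanner's actual data model:

```python
# Call sites the scanner found in the codebase (hypothetical identifiers)
scanned_sites = {"app.py:42", "agent.py:17", "tools.py:88"}

# Call sites that actually emitted receipts during the run
receipted_sites = {"app.py:42", "agent.py:17"}

uncovered = scanned_sites - receipted_sites
assert uncovered == {"tools.py:88"}  # the omitted site fails the check honestly (exit 1)
```

A call site the scanner never saw cannot appear in `scanned_sites` — which is exactly the residual risk noted above.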
Assay doesn't make fraud impossible -- it makes fraud expensive, fragile, and much easier to catch.
## Three operating modes (wrapper, runtime SDK, settlement)
Assay operates in three modes depending on your runtime shape.
For scripts, CI jobs, and bounded workflows. Lowest friction.
```
assay run -c receipt_completeness -- python my_app.py
assay verify-pack ./proof_pack_*/
```

One process, one proof pack, one verification.
For services, agents, and long-lived processes. Emit receipts during execution, seal checkpoints at review boundaries.
```python
import assay

with assay.open_episode(policy_version="v2.1") as episode:
    episode.emit("model.invoked", {"model": "gpt-4", "tokens": 800})
    episode.emit("tool.invoked", {"tool": "knowledge_base"})
    checkpoint = episode.seal_checkpoint(reason="before_send_reply")
    verdict = assay.verify_checkpoint(checkpoint)
    # verdict.ok / verdict.honest_fail / verdict.tampered
```

The episode is the primary unit, not the Unix process. See SDK docs for verdict handling.
For high-consequence actions. Evidence posture must verify before the world changes.
```python
checkpoint = episode.seal_checkpoint(reason="before_payout")
verdict = assay.verify_checkpoint(checkpoint)
if verdict.ok:
    execute_payout()
# If not ok: honest_fail → escalate; tampered → alert. Do not proceed.
```

`verify_checkpoint()` is the gate. See Decision Escrow for the protocol model.
## Full project setup (scan, patch, run, verify, CI gate)

```
# 1. Find uninstrumented LLM calls
assay scan . --report

# 2. Patch (one line per SDK, or auto-patch all)
assay patch .

# 3. Run + build a signed evidence pack
#    -c receipt_completeness runs the built-in completeness check (see `assay cards list` for all options)
#    everything after -- is your normal run command
assay run -c receipt_completeness -- python my_app.py

# 4. Verify (CLI or browser — no install needed for browser)
assay verify-pack ./proof_pack_*/
# Or verify in your browser: https://haserjian.github.io/assay-proof-gallery/verify.html

# 5. Generate report artifacts for security/compliance review
assay report . -o evidence_report.html --sarif

# 6. Optional: set and enforce score gates in CI
assay gate save-baseline
assay gate check . --min-score 60 --fail-on-regression
```

Command discovery: `assay --help` shows common entry points; `gate`, `report`, `ci`, `diff`, `analyze`, `lock`, and `vendorq` are available but unlisted by default. Run `assay <command> --help` for full options.
The scanner covers OpenAI, Anthropic, Gemini, LiteLLM, and LangChain via AST analysis — dynamic dispatch and raw HTTP calls are not detected. See SCANNER_LIMITATIONS.md.
Local models: Any OpenAI-compatible server (Ollama, LM Studio, vLLM,
llama.cpp) works automatically -- Assay patches the OpenAI SDK at the class
level, so OpenAI(base_url="http://localhost:11434/v1") emits receipts like
any other provider. LiteLLM users get the same coverage via the LiteLLM
integration (ollama/llama3, etc.).
Fastest path (recommended):
```
assay ci init github --run-command "python my_app.py" --min-score 60
```

This generates a 3-job GitHub Actions workflow:

- `assay-gate` (score enforcement, regression checks, JSON gate report artifact)
- `assay-verify` (proof pack generation + cryptographic verification)
- `assay-report` (HTML evidence report artifact + SARIF upload)
For the manual path (lockfile, gate flags, daily diff/analyze), see Start Here.
## Evidence compiler model

Assay is an evidence compiler for AI execution. If you've used a build system, you already know the mental model:

| Concept | Build System | Assay |
|---|---|---|
| Source | `.c` / `.ts` files | Receipts (one per LLM call) |
| Artifact | Binary / bundle | Evidence pack (5 files, 1 signature) |
| Tests | Unit / integration tests | Verification (integrity + claims) |
| Lock | `package-lock.json` | `assay.lock` |
| Gate | CI deploy check | CI evidence gate |
## Full command reference

The core path is 6 commands:

```
assay try                  # 60-second demo (sign, tamper, catch)
assay scan / assay patch   # instrument
assay run                  # produce evidence
assay verify-pack          # verify evidence
assay diff                 # catch regressions
assay score                # evidence readiness (0-100, A-F)
```
### Getting started

| Command | Purpose |
|---|---|
| `assay try` | 60-second demo: sign, tamper, catch |
| `assay status` | One-screen operational dashboard |
| `assay start demo\|ci\|mcp` | Guided entrypoints for trying, CI setup, or MCP auditing |
| `assay onboard` | Guided setup: doctor -> scan -> first run plan |
| `assay doctor` | Preflight check: is Assay ready here? |
### Instrument + produce evidence

| Command | Purpose |
|---|---|
| `assay scan` | Find uninstrumented LLM call sites (`--report` for HTML) |
| `assay patch` | Auto-insert SDK integration patches into your entrypoint |
| `assay run` | Wrap command, collect receipts, build signed evidence pack |
### Verify + analyze

| Command | Purpose |
|---|---|
| `assay verify-pack` | Verify integrity + claims (the 4 exit codes) |
| `assay explain` | Plain-English summary of an evidence pack |
| `assay analyze` | Cost, latency, error breakdown from pack or `--history` |
| `assay diff` | Compare packs: claims, cost, latency (`--against-previous`, `--why`) |
| `assay score` | Evidence Readiness Score (0-100, A-F) with anti-gaming caps |
### CI

| Command | Purpose |
|---|---|
| `assay ci init github` | Generate a GitHub Actions workflow |
| `assay flow try\|adopt\|ci\|mcp\|audit` | Guided workflow executor (`--apply` to execute) |
### MCP + policy

| Command | Purpose |
|---|---|
| `assay mcp-proxy` | Transparent MCP proxy: intercept tool calls, emit receipts |
| `assay mcp policy init` | Generate a starter MCP policy YAML file |
| `assay mcp policy validate` | Validate a policy file against the schema |
| `assay policy impact` | Analyze policy impact on existing evidence |
Full reference (key management, lockfile, pack management, incident forensics): docs/README_quickstart.md
## Advanced: VendorQ (verifiable vendor questionnaires)
Enterprise customers ask AI governance questions in security questionnaires. VendorQ compiles evidence-backed answer packets from Assay proof packs. Every answer traces to a signed receipt. Every modification is detectable.
For the buyer-facing wrapper around that proof material, see docs/reviewer-packets.md.
Quick path:

```
assay vendorq ingest --in questionnaire.csv --out .assay/vendorq/questions.json
assay vendorq compile --questions .assay/vendorq/questions.json --pack ./proof_pack_* --policy conservative --out .assay/vendorq/answers.json
assay vendorq export-reviewer --proof-pack ./proof_pack_* --out reviewer_packet
assay reviewer verify reviewer_packet
assay reviewer census reviewer_packet
```

Use VendorQ when the pain is: "we have to answer AI-governance questions and we cannot hand the reviewer a verifiable artifact."
```
# Ingest a questionnaire, compile answers against evidence, lock, verify
assay vendorq ingest --in questionnaire.csv --out questions.json
assay vendorq compile --questions questions.json --pack ./proof_pack --out answers.json
assay vendorq lock write --answers answers.json --pack ./proof_pack --out vendorq.lock
assay vendorq verify --answers answers.json --pack ./proof_pack --lock vendorq.lock --strict
```

10 deterministic verification rules. Tamper one answer and verification fails with exit code 2. The packet is forwardable to your customer's security team — they verify it offline with a public key.
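One way such a lock rule can work — each answer's canonical hash pinned in the lock file — can be sketched as follows. This is assumed semantics for illustration, not VendorQ's actual rule set:

```python
import hashlib
import json

def answer_hash(answer: dict) -> str:
    # Canonical JSON (sorted keys) so the hash is deterministic across machines
    return hashlib.sha256(json.dumps(answer, sort_keys=True).encode()).hexdigest()

answers = [{"id": "Q1", "answer": "All LLM calls emit signed receipts."}]
lock = {a["id"]: answer_hash(a) for a in answers}

answers[0]["answer"] = "Trust us."            # tamper one answer after locking
assert answer_hash(answers[0]) != lock["Q1"]  # strict verification fails (exit code 2)
```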
See it live: Proof Gallery — three real proof packs demonstrating pass, honest fail, and tamper detection. All three are independently verifiable without any account or API key.
Adversarial testing: 16 attack scenarios, 16 catches, 0 false passes.
## Advanced: Reviewer packets (cross-boundary evidence handoff)
A reviewer-ready evidence packet is the buyer-facing wrapper around a signed proof pack. Assay produces the proof pack. The evidence packet makes that proof usable across an organizational boundary: scope, coverage, review state, and the nested proof-pack verification path in one forwardable artifact.
```
# Compile a reviewer packet from a proof pack plus declarative packet inputs
assay vendorq export-reviewer \
  --proof-pack tests/fixtures/reviewer_packet/sample_proof_pack \
  --boundary tests/fixtures/reviewer_packet/sample_boundary.json \
  --mapping tests/fixtures/reviewer_packet/sample_mapping.json \
  --out reviewer_packet_demo

# Verify the reviewer packet and derive the settlement
assay reviewer verify reviewer_packet_demo
assay reviewer verify reviewer_packet_demo --json

# Generate a Decision Census report from the compiled reviewer packet
assay reviewer census reviewer_packet_demo
assay reviewer census reviewer_packet_demo --json
```

Canonical handoff flow:

```
proof pack -> reviewer packet -> assay reviewer verify -> browser verify
```
Buyer verdicts and CLI exit codes are different layers:
- Buyer verdicts: VERIFIED, VERIFIED_WITH_GAPS, INCOMPLETE_EVIDENCE, EVIDENCE_REGRESSION, TAMPERED, OUT_OF_SCOPE
- CLI exit codes: 0/1/2/3 for PASS, HONEST_FAIL, TAMPERED, and bad input
Use the proof pack when you need cryptographic verification. Use the evidence packet when another team needs a bounded artifact they can inspect, forward, and challenge.
Verify online: Browser verifier — drop in a proof pack or reviewer packet and check it client-side.
## Advanced: Passports (portable signed evidence credentials)
A passport is a signed, content-addressed JSON object that summarizes what was verified about an AI system: claims, coverage, reliance class, and a validity window. Built from proof pack evidence, not asserted by hand.
Try the seeded lifecycle demo (no API key, no repo context needed):
```
pip install assay-ai
assay passport demo
```

The demo intentionally starts with a weak passport, then challenges and supersedes it. The initial X-Ray grade (D) is part of the lifecycle, not a product failure.
12 commands (`assay passport --help`). The 6 you'll use most:

| Command | Question |
|---|---|
| `verify` | Is this artifact authentic and untampered? |
| `status` | Should I rely on it under my policy? (PASS/WARN/FAIL) |
| `xray` | How strong is the evidence posture? (A-F grade) |
| `challenge` | Record a governance objection against a passport |
| `supersede` | Link the old passport to an improved successor |
| `diff` | What changed between two passport versions? |
Also: `mint`, `sign`, `show`, `render`, `revoke`, `demo`.
Full command set:
```
# Mint a passport from a proof pack, sign it, verify it
assay passport mint --pack ./proof_pack/ --subject-name "MyApp" \
  --system-id "my.app.v1" --owner "My Org" --output passport.json
assay passport sign passport.json
assay passport verify passport.json

# Check reliance posture under a policy mode
assay passport status passport.json --mode buyer-safe --json

# X-Ray diagnostic: structural grade (A-F) and improvement path
assay passport xray passport.json --report xray.html

# Lifecycle governance (all cryptographically signed)
assay passport challenge passport.json --reason "Missing coverage"
assay passport supersede old.json new.json --reason "Addressed gap"
assay passport diff old.json new.json --report diff.html
```

Worked example: Seeded referee gallery — pre-built signed passports, governance receipts, X-Ray diagnostic, and trust diff. All artifacts are regenerable via `python3 docs/passport/generate_gallery.py`.
Deeper docs: Passport guide | Verification ritual | Gallery manifest
What this proves today:
- Signed, content-addressed passport artifacts with Ed25519 signatures
- Deterministic lifecycle governance: challenge, supersede, revoke, diff
- Reproducible worked examples on seeded reference artifacts
- Offline verification without network access
What is future scope:
- Arbitrary external trust-surface scanning (URLs, PDFs, vendor pages)
- Minting from external vendor documents (currently proof-pack only)
- Generalized trust analysis across messy real-world inputs
- Enterprise diff workflows (primitive exists, product does not)
## Advanced: AI Decision Credentials (ADC)
ADC is a structured schema for packaging AI decision evidence into verifiable, time-bounded credentials. An ADC wraps the proof pack with decision metadata: what was decided, by whom, under what policy, with what evidence, and how long the credential remains valid.
```
# Verify a pack with expiry enforcement
assay verify-pack ./proof_pack_*/ --check-expiry

# ADC v0.1 schema: 35 properties, 17 required, additionalProperties: false
# Schema: src/assay/schemas/adc_v0.1.schema.json
```

The conformance corpus includes 10 canonical packs (including `stale_01` for expired credentials and `superseded_01` for replaced decisions).
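A time-bounded validity check can be sketched like this — the field names here are hypothetical, not the ADC v0.1 schema's actual property names:

```python
from datetime import datetime, timezone

def credential_valid(adc: dict, now: datetime) -> bool:
    # Hypothetical fields; an expired credential behaves like the stale_01 fixture
    not_before = datetime.fromisoformat(adc["valid_from"])
    not_after = datetime.fromisoformat(adc["valid_until"])
    return not_before <= now <= not_after

adc = {"valid_from": "2026-01-01T00:00:00+00:00",
       "valid_until": "2026-07-01T00:00:00+00:00"}
assert credential_valid(adc, datetime(2026, 3, 1, tzinfo=timezone.utc))       # in window
assert not credential_valid(adc, datetime(2027, 1, 1, tzinfo=timezone.utc))   # stale
```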
## Install details (Windows, PATH issues, deterministic setup)

```
# Windows
py -m pip install assay-ai
```

Assay requires Python 3.9+. If pip is not on your PATH, use `python3 -m pip` on macOS/Linux or `py -m pip` on Windows.
Validation status:
- CI smoke-tests the first CLI path on Linux, macOS, and Windows using `assay version` and `assay try`.
- The deeper SDK compatibility suite currently runs on Ubuntu.
If `assay` is not recognized after install, open a new terminal first. On Windows, the usual fix is adding Python's `Scripts` directory to PATH.
For deterministic environment setup, see docs/START_HERE.md.
Shell completions (bash/zsh/fish/PowerShell):

```
assay --install-completion
```

Restart your shell after installing. Tab completion works for all commands and options.
- Developers — scan existing code, instrument LLM calls, get a signed artifact per run
- Security and CI teams — gate on evidence quality, fail builds without tamper-evident proof packs
- MCP and agent operators — `assay try-mcp`, transparent MCP proxy, per-tool-call receipts
- Auditors, compliance teams, and reviewers — offline verification, reviewer packets, HTML evidence reports
- Start Here -- 6 steps from install to evidence in CI
- Evidence Packets -- compile, verify, and hand off reviewer-ready evidence packets
- What Assay Does Today -- the plain-language founder memo
- Boundary Map -- Assay vs VendorQ vs AgentMesh vs Loom/CCIO
- Full Picture -- architecture, trust tiers, repo boundaries, release history
- Quickstart -- install, golden path, command reference
- For Compliance Teams -- what auditors see, evidence artifacts, framework alignment
- Compliance Citations -- exact regulatory references (EU AI Act, SOC 2, ISO 42001)
- Decision Escrow -- protocol model: agent actions don't settle until verified
- Roadmap -- phases, product boundary, execution stack
- Repo Map -- what lives where across the Assay ecosystem
- Pilot Program -- early adopter program details
- "No receipts emitted" after `assay run`: First, check whether your code has call sites: `assay scan .` -- if scan finds 0 sites, you may not be using a supported SDK yet. Installing Assay alone does not emit receipts; your runtime must be instrumented. If scan finds sites, check: (1) Is `# assay:patched` in the file, or did you add `patch()` / a callback? Run `assay scan . --report` to see patch status per file. (2) Did you install the SDK extra (`python3 -m pip install "assay-ai[openai]"`)? (3) Did `patch()` execute before the first model call? (4) Did you use `--` before your command (`assay run -- python app.py`)? Run `assay doctor` for a full diagnostic.
- LangChain projects: `assay patch` auto-instruments OpenAI and Anthropic SDKs but not LangChain (which uses callbacks, not monkey-patching). For LangChain, add `AssayCallbackHandler()` to your chain's `callbacks` parameter manually. See `src/assay/integrations/langchain.py` for the handler.
- `assay run python app.py` gives "No command provided": You need the `--` separator: `assay run -c receipt_completeness -- python app.py`. Everything after `--` is passed to the subprocess.
- Quickstart blocked on large directories: `assay quickstart` guards against scanning system directories (>10K Python files). Use `--force` to bypass: `assay quickstart --force`.
- macOS: `ModuleNotFoundError` inside `assay run` but works outside it: On macOS, `python3` on PATH may point to a different Python version than where assay and your SDK are installed (e.g. `python3` → 3.14, but packages are in 3.11). Use a virtual environment (recommended), or specify the exact interpreter: `assay run -- python3.11 app.py`. Check with `python3 --version` and compare to the Python where you installed Assay.
- Try it: `python3 -m pip install assay-ai && assay try`
- Questions / feedback: GitHub Discussions
- Bug reports: Issues
- Want this in your stack in 2 weeks? Pilot program -- we instrument your AI workflows, set up CI gates, and hand you a working evidence pipeline. Open a pilot inquiry.
| Repo | Purpose |
|---|---|
| assay | Core CLI, SDK, conformance corpus (this repo) |
| assay-verify-action | GitHub Action for CI verification |
| assay-ledger | Public transparency ledger |
| assay-proof-gallery | Live demo packs (PASS / HONEST FAIL / TAMPERED) |
Apache-2.0