Search and ask questions about your meetings using AI.
After a video call, aftercall automatically saves the summary, the full transcript, and any action items. Then you can ask Claude things like "What did I agree to do for Sarah?" or "What meetings mentioned the budget?" — and get real answers back.
Every to-do from your calls drops into a Notion checklist so nothing falls through the cracks.
Ongoing cost: ~$5/mo for hosting + a fraction of a cent per call for AI processing. Needs a Bluedot account (free trial) to record your calls.
```
Bluedot call ──▶ Worker ──▶ D1 + Vectorize + Notion ──▶ Claude.ai (MCP)
   webhook       extract      indexed storage           you ask questions
```
```mermaid
flowchart LR
    B[Bluedot]
    W{{Cloudflare Worker}}
    D[(D1)]
    V[(Vectorize)]
    N[Notion Followups]
    C{{Claude.ai}}
    U((You))
    B -->|webhook| W
    W -->|upsert| D
    W -->|embed + upsert| V
    W -->|create rows| N
    U -->|"what did I promise Pierce?"| C
    C -->|MCP tools/call| W
    W -->|read| D
    W -->|semantic search| V
    W -->|filter rows| N
    W -->|answer| C
    C -->|response| U
```
Every Bluedot recording becomes:
- A D1 row — `transcripts` table, idempotent on `video_id`, holding raw text + structured summary + participants + action items.
- Embedded chunks in Cloudflare Vectorize (1536d, cosine) for semantic search.
- A Notion Transcripts row — metadata hub (Date, Participants, Recording URL, link to Bluedot's native summary page). No duplicated summary content — Bluedot's own Notion sync owns that.
- One Followup row per action item in a Notion inbox — Status = Inbox, linked via a `Meeting` relation back to the Transcripts row, ready to triage.
- MCP access from Claude.ai — six tools for semantic search, per-call Q&A (RAG), detail lookup, followups, owner-scoped action items, and recent calls.
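The chunk vectors carry deterministic IDs, which is what lets re-ingestion overwrite vectors instead of duplicating them. A minimal sketch of the idea, assuming the `{video_id}-{index}` ID scheme; the chunk size and overlap here are illustrative, not the Worker's actual values:

```typescript
// Sketch: split a transcript into overlapping fixed-size chunks with
// deterministic Vectorize IDs ("{video_id}-{index}"), so re-ingesting the
// same call upserts over the same vector IDs. Sizes are illustrative.
export interface Chunk {
  id: string;
  text: string;
}

export function chunkTranscript(
  videoId: string,
  transcript: string,
  chunkSize = 1000,
  overlap = 200,
): Chunk[] {
  const chunks: Chunk[] = [];
  const step = chunkSize - overlap;
  for (let start = 0, i = 0; start < transcript.length; start += step, i++) {
    chunks.push({ id: `${videoId}-${i}`, text: transcript.slice(start, start + chunkSize) });
  }
  return chunks;
}
```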
Then you ask Claude.ai things like:
- "What did I commit to do for Pierce in the last week?"
- "In my IT hiring call, what starting compensation did we agree on?"
- "Find every action item assigned to Andy since March."
- "Summarize my open Bluedot followups."
- "What calls mention IronRidge?"
| Service | Tier | Purpose |
|---|---|---|
| Cloudflare | Workers free + Vectorize paid (~$5/mo) | Hosting + D1 + Vectorize + KV |
| OpenAI | Pay-as-you-go (~$0.001/call) | Extraction (gpt-5-mini) + embeddings (text-embedding-3-small) |
| Notion + integration | Free | Transcript pages + Followups inbox |
| Bluedot | Trial | Source of meeting recordings |
| GitHub OAuth App | Free | MCP auth (optional — only if connecting Claude.ai) |
| Claude.ai | Free/paid | MCP client (optional) |
```bash
git clone https://github.com/jchu96/aftercall.git
cd aftercall
npm install
npx wrangler login
npm run setup
```

The script walks 10 interactive steps:
1. Verify `wrangler` auth
2. Provision D1 database (idempotent — reuses existing)
3. Provision Vectorize index
4. Write bindings to `wrangler.toml` + apply D1 migrations
5. Create (or reuse) Notion Followups + Call Transcripts databases
6. Validate OpenAI API key
7. Write `.dev.vars` + update `wrangler.toml` vars
8. Register GitHub OAuth App (opens browser, auto-detects your worker URL) + configure MCP KV + allowlist
9. Push every secret to Cloudflare (no manual `wrangler secret put` dance)
10. `wrangler deploy`
Re-runs are safe — the script detects existing values and offers to reuse them instead of recreating.
After deploy, the script prints your worker URL. In Bluedot:
- Settings → Webhooks → Add endpoint
- URL: `https://<your-worker>.workers.dev/`
- Events: `meeting.transcript.created`, `meeting.summary.created`

Bluedot shows a signing secret. Save it:

```bash
npx wrangler secret put BLUEDOT_WEBHOOK_SECRET
```

Then connect Claude.ai:

- Claude.ai → Settings → Connectors → add a custom MCP server
- URL: `https://<your-worker>.workers.dev/mcp`
- Sign in with GitHub; the Worker checks your username against `ALLOWED_USERS` and hands Claude a bearer token
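The allowlist check itself is just a normalized username comparison against the comma-separated `ALLOWED_USERS` value. A sketch of the logic (the real implementation lives in `src/mcp/auth/allowlist.ts`; this is an illustrative equivalent, not a copy):

```typescript
// Illustrative case-insensitive allowlist check. ALLOWED_USERS is a
// comma-separated list of GitHub usernames; an unset value allows nobody.
export function isAllowed(allowedUsers: string | undefined, login: string): boolean {
  if (!allowedUsers) return false;
  const normalized = login.trim().toLowerCase();
  return allowedUsers
    .split(",")
    .map((u) => u.trim().toLowerCase())
    .filter((u) => u.length > 0)
    .includes(normalized);
}
```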
Done. Ask Claude about your calls.
Full OAuth + Bluedot + debug walkthrough: docs/auth.md.
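For the curious: the saved webhook secret drives standard Svix signature verification, an HMAC-SHA256 over `<svix-id>.<svix-timestamp>.<body>` keyed with the base64-decoded secret. The Worker uses the svix package for this; the sketch below only illustrates the scheme:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative Svix-style verification. The secret arrives as
// "whsec_<base64>"; the svix-signature header holds space-separated
// "v1,<base64 signature>" entries, any one of which may match.
export function verifySvix(
  secret: string,
  msgId: string,
  timestamp: string,
  payload: string,
  signatureHeader: string,
): boolean {
  const key = Buffer.from(secret.replace(/^whsec_/, ""), "base64");
  const expected = createHmac("sha256", key)
    .update(`${msgId}.${timestamp}.${payload}`)
    .digest();
  return signatureHeader.split(" ").some((part) => {
    const [version, sig] = part.split(",");
    if (version !== "v1" || !sig) return false;
    const given = Buffer.from(sig, "base64");
    // timingSafeEqual throws on length mismatch, so guard first
    return given.length === expected.length && timingSafeEqual(given, expected);
  });
}
```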
| Tool | What it does |
|---|---|
| `search_calls(query, limit?)` | Semantic search over all transcripts via OpenAI embeddings + Vectorize |
| `get_call(video_id)` | Full details of one call: summary, participants, action items |
| `answer_from_transcript(video_id, question)` | RAG over a single call — drill-down Q&A grounded in that meeting's transcript |
| `list_followups(status?, source?, limit?)` | Query the Notion Followups DB with select filters |
| `find_action_items_for(person, since?)` | All action items assigned to a person (substring match on owner) |
| `recent_calls(days?)` | Last N days of calls, newest first |
Sample prompts per tool: docs/tools.md.
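The owner filter in `find_action_items_for` is described as a substring match, so "Pierce" also matches items assigned to "Pierce Taylor". A sketch of what that matching amounts to (illustrative only; the `ActionItem` shape here is not the Worker's actual schema):

```typescript
// Illustrative case-insensitive substring match on the action item owner.
interface ActionItem {
  owner: string;
  text: string;
}

export function matchOwner(items: ActionItem[], person: string): ActionItem[] {
  const needle = person.trim().toLowerCase();
  return items.filter((item) => item.owner.toLowerCase().includes(needle));
}
```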
Three flows compose the system. Each has its own diagram and runbook:
- Ingestion — Bluedot webhook → OpenAI extraction + embeddings → D1 + Vectorize + Notion (diagram)
- OAuth — Claude.ai → Worker `/authorize` → GitHub → Worker `/auth/github/callback` → Claude.ai (diagram)
- Query — Claude.ai → bearer-protected `/mcp` → tool dispatch → storage → response (diagram)
Deep-dive: docs/architecture.md.
| Layer | Tech |
|---|---|
| Webhook + MCP | Cloudflare Workers + Hono |
| Transcript store | Cloudflare D1 (SQLite, Drizzle schema) |
| Embeddings | Cloudflare Vectorize (1536d, cosine, HNSW) |
| LLM | OpenAI — gpt-5-mini (extraction, json_schema) + text-embedding-3-small |
| Output | Notion API (direct fetch — not @notionhq/client) |
| MCP transport | @modelcontextprotocol/sdk Streamable HTTP, stateless mode |
| MCP auth | @cloudflare/workers-oauth-provider + GitHub OAuth |
Not in the stack: Anthropic, Neon, Postgres, Redis, SendGrid, Express. One LLM provider, one Notion integration, one GitHub OAuth App.
```bash
npx vitest run      # 110 tests, real D1 via miniflare
npx vitest          # watch mode
npx tsc --noEmit    # typecheck
```

Tests cover Svix verification, payload normalization, OpenAI extraction (mocked), embeddings chunking, D1 idempotency with a concurrent-retry race test, Vectorize upsert, Notion page builders, the GitHub OAuth handler + allowlist, the full OAuth provider integration (well-known metadata, WWW-Authenticate on 401, bearer revocation), and every MCP tool's business logic against real D1 + mocked OpenAI/Vectorize/Notion.
End-to-end MCP transport (tools/list, tools/call through Streamable HTTP) is validated by live Claude.ai smoke tests — vitest-pool-workers' ESM shim can't resolve the SDK's ajv JSON import.
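For reference, a `tools/call` request over the Streamable HTTP transport is ordinary JSON-RPC 2.0; the smoke tests send bodies shaped like this (tool name and arguments are example values):

```typescript
// Illustrative MCP tools/call payload (JSON-RPC 2.0) as posted to /mcp.
const toolsCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "search_calls",
    arguments: { query: "budget discussion", limit: 5 },
  },
};

export const requestBody = JSON.stringify(toolsCall);
```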
| Task | Command |
|---|---|
| Tail live logs | `npx wrangler tail` |
| Deploy | `npx wrangler deploy` |
| Inspect D1 | `npx wrangler d1 execute aftercall-db --remote --command "SELECT ..."` |
| Query Vectorize | `npx wrangler vectorize get-by-ids aftercall-vectors --ids "1-0,1-1"` |
| Reprocess a call | `DELETE FROM transcripts WHERE video_id = '...'` then refire Bluedot webhook |
| List KV (OAuth) | `npx wrangler kv key list --binding OAUTH_KV` |
| Revoke your bearer | `curl -X POST https://.../auth/revoke -H "Authorization: Bearer <token>"` |
Full runbook: docs/architecture.md#operational-runbook.
Full reference (click to expand)
Set via `wrangler.toml` `[vars]` for non-secret config and `wrangler secret put` for secrets. `.dev.vars` mirrors secrets for local `wrangler dev`.
| Variable | Required | Notes |
|---|---|---|
| `OPENAI_API_KEY` | yes | Extraction + embeddings |
| `NOTION_INTEGRATION_KEY` | yes | Notion integration token (`ntn_...`) |
| `BLUEDOT_WEBHOOK_SECRET` | yes | Svix signing secret from Bluedot's webhook config |
| `GITHUB_CLIENT_ID` | MCP only | GitHub OAuth App client id |
| `GITHUB_CLIENT_SECRET` | MCP only | GitHub OAuth App client secret |
| `SENTRY_DSN` | optional | Enables error tracking + pipeline tracing. Leave unset to disable Sentry entirely. |
| Variable | Default | Notes |
|---|---|---|
| `OPENAI_EXTRACTION_MODEL` | `gpt-5-mini` | Override to upgrade |
| `NOTION_TRANSCRIPTS_DATA_SOURCE_ID` | — | Set by setup script |
| `NOTION_FOLLOWUPS_DATA_SOURCE_ID` | — | Set by setup script |
| `BASE_URL` | — | Public worker origin; used for OAuth callback URL construction |
| `ALLOWED_USERS` | — | Comma-separated GitHub usernames allowed via MCP |
| `SENTRY_ENVIRONMENT` | `production` | Only read when `SENTRY_DSN` is set |
| `SENTRY_RELEASE` | auto | Injected by `npm run deploy` when Sentry is configured |
| Binding | Type | Purpose |
|---|---|---|
| `DB` | D1 | Transcripts |
| `VECTORIZE` | Vectorize | Chunk embeddings |
| `OAUTH_KV` | KV | OAuth state + tokens (MCP only) |
| Flag | Why |
|---|---|
| `nodejs_compat` | Bluedot's Svix package needs Node APIs |
| `global_fetch_strictly_public` | Required by @cloudflare/workers-oauth-provider |
File-by-file (click to expand)
```
src/
├── index.ts              # Worker entry — re-exports OAuthProvider
├── handler.ts            # ingestion pipeline orchestration
├── env.ts                # typed Env (D1, Vectorize, KV, secrets)
├── webhook-verify.ts     # Svix signature verification
├── bluedot.ts            # Bluedot payload normalization
├── extract.ts            # OpenAI structured extraction
├── embeddings.ts         # OpenAI embeddings + chunking
├── d1.ts                 # transcripts table writes
├── vectorize.ts          # Vectorize upserts
├── notion.ts             # Notion API (transcript pages, followups)
├── schema.ts             # Drizzle SQLite schema
├── logger.ts             # structured JSON logging
└── mcp/
    ├── index.ts          # OAuthProvider wiring + Hono default app
    ├── handler.ts        # /mcp API handler (bearer required)
    ├── tools.ts          # McpServer + Streamable HTTP transport
    ├── auth/
    │   ├── github.ts     # /authorize + /auth/github/callback
    │   └── allowlist.ts  # case-insensitive username check
    └── tools/
        ├── search_calls.ts
        ├── get_call.ts
        ├── list_followups.ts
        ├── find_action_items_for.ts
        └── recent_calls.ts
scripts/
├── setup.ts              # 10-step interactive provisioning
├── smoke-vectorize.ts    # Vectorize round-trip smoke test
└── migrate-from-neon.ts  # historical Neon → D1 migration
drizzle/                  # numbered SQL migrations
docs/
├── architecture.md       # three flows + data model + decisions + runbook
├── tools.md              # MCP tool reference with sample prompts
└── auth.md               # GitHub OAuth setup + troubleshooting
test/                     # vitest helpers + ProvidedEnv typing
```
The Worker ships with @sentry/cloudflare integration for error tracking + pipeline performance tracing. It's fully optional — leave SENTRY_DSN unset and the SDK is a no-op.
Enable it (your own fork, your own Sentry project):
```bash
# 1. Add DSN as a secret
npx wrangler secret put SENTRY_DSN

# 2. (optional) enable source map uploads on deploy
#    - export SENTRY_AUTH_TOKEN from https://sentry.io/settings/account/api/auth-tokens/
#    - override defaults in scripts/deploy.mjs (SENTRY_ORG, SENTRY_PROJECT envs)
SENTRY_ORG=my-org SENTRY_PROJECT=my-project SENTRY_AUTH_TOKEN=sntrys_... npm run deploy
```

What's instrumented:
- Pipeline spans — `bluedot.pipeline.{transcript,summary}` wrap the webhook handlers, with child spans for `openai.extract`, `openai.embed`, `d1.upsert_*`, `vectorize.upsert`, `notion.create_transcript_page`, `notion.create_followup`.
- Error capture — every pipeline catch site calls `Sentry.captureException` with `video_id` + `svix_id` tags. Notion failures are non-fatal but still reported.
- MCP + OAuth errors — `withSentry` wraps the worker entrypoint so any uncaught error in the OAuth or MCP paths auto-reports.
- Source maps — `npm run deploy` uploads them to Sentry automatically when `.sentryclirc` or `SENTRY_AUTH_TOKEN` is present; otherwise the step is skipped with a warning.
Without Sentry configured, `npm run deploy` just runs `wrangler deploy` as before.
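The optional-Sentry behavior boils down to a pass-through pattern: with no DSN, instrumentation helpers degrade to calling the wrapped work directly. A toy sketch of that pattern, not the @sentry/cloudflare API (`record` stands in for real SDK reporting):

```typescript
// Toy "unset DSN = no-op" pattern. With no DSN the helper just runs the
// work untraced; with a DSN it records the span name first.
type SpanFn = <T>(name: string, fn: () => T) => T;

export function makeSpan(dsn: string | undefined, record: (name: string) => void): SpanFn {
  if (!dsn) {
    return (_name, fn) => fn(); // no DSN: run the work untraced
  }
  return (name, fn) => {
    record(name); // DSN present: report the span, then run the work
    return fn();
  };
}
```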
| Doc | Read when |
|---|---|
| `docs/architecture.md` | Understanding the system, adding features, debugging |
| `docs/tools.md` | Using MCP tools from Claude.ai, adding a new tool |
| `docs/auth.md` | First-time OAuth setup, troubleshooting 401/403/redirect_uri errors |
| `CHANGELOG.md` | What changed between versions |
| `CLAUDE.md` | Conventions + do-nots when using Claude Code in this repo |
| `conductor/` | Product definition, tech stack, workflow, and track specs — the "why" behind the code |
Not promises — just ideas on the shortlist. File an issue if you want to prioritize one.
- `delete_call(video_id)` MCP tool — let Claude.ai prune unwanted calls end-to-end:
  - Delete the D1 `transcripts` row
  - Delete the matching Vectorize chunks (by deterministic `{id}-{chunk}` IDs)
  - Archive the aftercall Transcripts page + every Followup row related via the `Meeting` relation (unlink before archive, or Notion will orphan the relation)
  - Leave Bluedot's native Notion summary page alone — it's Bluedot's surface, not ours
  - Needs a confirmation / dry-run guard before landing, since the tool is destructive and Claude can call it without a human in the loop.
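The Vectorize half of that deletion is mechanical because the IDs are deterministic. A sketch, where `chunkCount` is a hypothetical field that would have to come from stored transcript metadata:

```typescript
// Rebuild the deterministic "{id}-{chunk}" Vectorize IDs for one call so
// its vectors can be deleted alongside the D1 row. chunkCount is a
// hypothetical stored field, not an existing column.
export function vectorIdsFor(videoId: string, chunkCount: number): string[] {
  return Array.from({ length: chunkCount }, (_, i) => `${videoId}-${i}`);
}
```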
MIT — see LICENSE.