A quality-gated V4V music index with a source-first v1 data model.
- quick start and architecture: this README
- narrative introduction: docs/user-guide.md
- operator deployment and maintenance: docs/operations.md
- HTTP API reference: docs/API.md
- runtime API explorer: `GET /api` (backed by `GET /openapi.json`)
- schema reference: docs/schema-reference.md
- verifier behavior: docs/verifier-guide.md
- source-first music schema ADR: docs/adr/0034-adopt-rebuild-first-source-first-v1-music-schema.md
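Once a node is running, the explorer is backed by a plain HTTP read. A minimal sketch (the base URL is a placeholder for your own deployment; `jq` is only used for pretty-printing):

```sh
# Fetch the machine-readable schema behind the /api explorer.
NODE=http://localhost:8080   # placeholder base URL
curl -s "$NODE/openapi.json" | jq '.info'
```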
Stophammer accepts and indexes RSS feeds that:
- declare `podcast:medium=music`
- carry at least one structurally valid `podcast:value` route
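As a quick way to eyeball both conditions before handing a feed to the indexer, a rough shell check can help. This is only a sketch of the two gates, not the verifier logic, and the feed URL is a placeholder:

```sh
FEED=https://example.com/feed.xml   # placeholder feed URL
curl -s "$FEED" -o /tmp/feed.xml
# The medium declaration, if present:
grep -E -o '<podcast:medium>[^<]*</podcast:medium>' /tmp/feed.xml
# Count of podcast:value blocks; 0 means no value route at all:
grep -E -c '<podcast:value[ >]' /tmp/feed.xml
```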
It also preserves two container/source-layer feed kinds:
- `publisher` feeds
- `music` feeds
The public v1 model is source-first:
- `feeds` are the release-shaped rows
- `tracks` are the track-shaped rows
- source claims, links, IDs, remote items, enclosure variants, and transcripts are preserved
- the old canonical release/recording public API is retired
Publisher handling is intentionally strict:
- `publisher` means publisher by default
- non-Wavlake publisher text is only promoted from `podcast:remoteItem` links when the publisher/music relation is reciprocal
- Wavlake is a narrow compatibility exception where the linked publisher feed may supply artist text for the music feed while the stored publisher remains "Wavlake"
- track rows inherit that same publisher truth in `tracks.publisher`
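The effect of this policy is visible through the publisher read endpoints. A sketch (the base URL is a placeholder; exact payloads are in docs/API.md):

```sh
NODE=http://localhost:8080   # placeholder base URL
# List publishers the index currently knows about.
curl -s "$NODE/v1/publishers" | jq .
# Inspect one publisher, e.g. the Wavlake compatibility case.
curl -s "$NODE/v1/publishers/Wavlake" | jq .
```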
```
[crawler] -> POST /ingest/feed -> [primary] -> POST /sync/push -> [community nodes]
                                      ^
                               verifier chain
                                signs events
```
- Primary nodes ingest feeds, run verifiers, write SQLite, and sign events.
- Community nodes replicate the signed event log and serve read APIs.
- Crawlers are external untrusted processes. This repo does not schedule them.
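A sketch of the first hop, crawler to primary. The request body shape, auth header, and host below are assumptions for illustration; docs/API.md defines the actual contract:

```sh
PRIMARY=https://primary.example.com   # placeholder primary host
# Hand a discovered feed to the primary for ingestion.
# Assumed: bearer auth with CRAWL_TOKEN and a JSON body carrying the feed URL.
curl -s -X POST "$PRIMARY/ingest/feed" \
  -H "Authorization: Bearer $CRAWL_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/feed.xml"}'
# The primary itself pushes the resulting signed events to community nodes
# via POST /sync/push (authenticated with SYNC_TOKEN); crawlers never call it.
```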
The crawler runtime and importer live in the separate stophammer-crawler package directory, which is also published as its own release artifact.
cargo build --release

Common local runs:
./target/release/stophammer
NODE_MODE=community ./target/release/stophammer

Useful checks:
cargo check
cargo test
cargo clippy -- -D warnings
cargo fmt --check

Build the main images:
docker buildx build --load --target stophammer-indexer -t stophammer-indexer .
docker buildx build --load --target stophammer-node -t stophammer-node .

If buildx is unavailable:
docker build --target stophammer-indexer -t stophammer-indexer .
docker build --target stophammer-node -t stophammer-node .

The root docker-compose.yml defines the current packaged stack:
- `primary`
- `podping-listener`
- `gossip`
- `import`
- `import-wavlake`
- `stophammer-crawler` (tools profile, one-shot feed crawl)
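To confirm what the checked-out compose file actually defines before starting anything:

```sh
# List the services declared in the root docker-compose.yml
# (add --profile tools before "config" to include the profile-gated crawler).
docker compose config --services
```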
Copy the sample env files you actually use:
cp packaging/env/primary.compose.env.example packaging/env/primary.compose.env
cp packaging/env/podping-listener.compose.env.example packaging/env/podping-listener.compose.env
cp packaging/env/crawler-feed.compose.env.example packaging/env/crawler-feed.compose.env
cp packaging/env/crawler-gossip.compose.env.example packaging/env/crawler-gossip.compose.env
cp packaging/env/crawler-import.compose.env.example packaging/env/crawler-import.compose.env
cp packaging/env/crawler-import-wavlake.compose.env.example packaging/env/crawler-import-wavlake.compose.env

Primary configuration usually needs:
CRAWL_TOKEN=change-me
SYNC_TOKEN=change-me-too
ADMIN_TOKEN=optional-admin-token

Start the primary:
docker compose up -d --build primary

Optional bundled crawler services:
docker compose up -d podping-listener gossip
docker compose run --rm import
docker compose run --rm import-wavlake
docker compose --profile tools run --rm stophammer-crawler feed https://example.com/feed.xml

If you are updating an older resolver-era deployment, use:
docker compose up -d --build --remove-orphans

That removes the retired resolverd container after the VPS has the updated repo state.
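A quick way to confirm the cleanup took effect:

```sh
# resolverd should no longer appear in the running service list.
docker compose ps
```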
Keep these paths on persistent storage:
- `DB_PATH` for `stophammer.db`
- `KEY_PATH` for the node signing key
Do not discard the signing key unless you intend to rotate node identity. Community nodes verify pushed events against that key.
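A minimal pre-upgrade backup sketch, assuming the database and key live at the hypothetical paths below; adjust both to your deployment:

```sh
DB_PATH=/data/stophammer.db   # assumed location
KEY_PATH=/data/node.key       # assumed location and filename
mkdir -p /backups
# sqlite3 .backup uses SQLite's online backup API, so the snapshot stays
# consistent even if the node is writing.
sqlite3 "$DB_PATH" ".backup '/backups/stophammer-$(date +%F).db'"
cp -p "$KEY_PATH" "/backups/node-signing-key-$(date +%F)"
```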
Community nodes:
- do not ingest feeds
- do not run verifiers
- do not sign feed events
- do not run a resolver worker
They replicate signed events from the primary and serve the read API from local state.
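A quick sanity check against a community node (the base URL is a placeholder):

```sh
NODE=https://community.example.com   # placeholder base URL
# Capabilities should reflect community mode; peers lists the node's known peers.
curl -s "$NODE/v1/node/capabilities" | jq .
curl -s "$NODE/v1/peers" | jq .
```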
Public source-first reads:
- `GET /v1/feeds/{guid}`
- `GET /v1/tracks/{guid}`
- `GET /v1/feeds/recent`
- `GET /v1/search`
- `GET /v1/publishers`
- `GET /v1/publishers/{publisher}`
- `GET /v1/node/capabilities`
- `GET /v1/peers`
Useful provenance/debug includes on feed reads:
`remote_items`, `publisher`, `source_links`, `source_ids`, `source_contributors`, `source_platforms`, `source_release_claims`
See docs/API.md for exact payloads.
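As a sketch of a provenance-heavy feed read; the `include` query parameter and its spelling are assumptions here, and the GUID is a placeholder, so check docs/API.md for the exact form:

```sh
NODE=https://community.example.com            # placeholder base URL
GUID=00000000-0000-0000-0000-000000000000     # placeholder feed GUID
# Assumed include mechanism: comma-separated names on an "include" parameter.
curl -s "$NODE/v1/feeds/$GUID?include=publisher,remote_items,source_links" | jq .
```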