A FastAPI service for storing and querying directed, predicate-labeled relationships between arbitrary entities — like a lightweight triplestore, without SPARQL.
Relationships are stored in a SQL database (SQLite by default, PostgreSQL for production) and exposed through a GraphQL API built with Strawberry.
This service stores any link with a URI. It is expected to work well with Tiled.
The data model is simple:
- **Entity** — a named node with a `type` and optional JSON `properties`
- **Link** — a directed edge from a subject entity to an object entity, labelled with a `predicate` string and optional JSON `properties`
Example: `(Experiment "SAXS run 42") --[produced]--> (Dataset "raw_001.h5")`
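The two record types can be pictured with a small Python sketch. This is illustrative only; the field names below are assumptions for this example, not the service's actual models:

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Entity:
    """A named node, e.g. an Experiment or a Dataset."""
    id: str
    entity_type: str
    name: str
    properties: dict[str, Any] = field(default_factory=dict)

@dataclass
class Link:
    """A directed, predicate-labelled edge between two entities."""
    id: str
    subject_id: str   # the entity the edge points from
    predicate: str    # e.g. "produced"
    object_id: str    # the entity the edge points to
    properties: dict[str, Any] = field(default_factory=dict)

# (Experiment "SAXS run 42") --[produced]--> (Dataset "raw_001.h5")
exp = Entity(id="e1", entity_type="Experiment", name="SAXS run 42")
ds = Entity(id="e2", entity_type="Dataset", name="raw_001.h5")
edge = Link(id="l1", subject_id=exp.id, predicate="produced", object_id=ds.id)
```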
Install pixi, then:
```
pixi install    # resolve and install the environment (first time only)
pixi run serve  # start the dev server at http://localhost:8080
```

The GraphiQL IDE is available at http://localhost:8080/graphql.
Set `SPLASH_LINKS_DB` to a file path to persist data across restarts (defaults to `links.sqlite` when launched via `pixi run serve`):

```
SPLASH_LINKS_DB=/data/links.sqlite pixi run serve
```

```
docker build -t splash-links .
docker run -p 8080:8080 -v $(pwd)/data:/data \
  -e SPLASH_LINKS_DB=/data/links.sqlite \
  splash-links
```

Open http://localhost:8080/graphql in a browser to use the GraphiQL IDE.
```
mutation {
  createEntity(input: { entityType: "Experiment", name: "SAXS run 42", properties: { beamline: "12.3.1" } }) {
    id
    name
    entityType
  }
}
```

```
mutation {
  createLink(input: { subjectId: "<experiment-id>", predicate: "produced", objectId: "<dataset-id>" }) {
    id
    predicate
    subject { name }
    object { name }
  }
}
```

```
{
  entity(id: "<experiment-id>") {
    name
    outgoingLinks {
      predicate
      object { name entityType }
    }
  }
}
```

```
{
  links(predicate: "produced", subjectId: "<experiment-id>") {
    id
    object { name }
  }
}
```

`GET /health` → `{"status": "ok"}`
The splash-links CLI lets you view stored data and open a raw SQLite shell without needing to run the server.
All commands read the database from $SPLASH_LINKS_DB (defaulting to links.sqlite in the current directory).
```
pixi run entities                        # all entities
pixi run entities -- --type Experiment   # filter by type
pixi run entities -- --limit 10          # cap results
```

```
pixi run links                           # all links
pixi run links -- --predicate produced   # filter by predicate
pixi run links -- --subject <entity-id>  # outgoing from a node
pixi run links -- --object <entity-id>   # incoming to a node
```

```
pixi run db
# or directly:
splash-links shell
```

This opens an interactive SQLite prompt. Useful queries:

```
SELECT * FROM entities;
SELECT * FROM links WHERE predicate = 'produced';
SELECT e.name, l.predicate, o.name AS object
FROM links l
JOIN entities e ON e.id = l.subject_id
JOIN entities o ON o.id = l.object_id;
```

The client API is also available as a Typer CLI for talking to a running splash-links service.
You can run it either through the existing root command:
```
splash-links client --help
```

or directly via the standalone script:

```
splash-links-client --help
```

Set `SPLASH_LINKS_URI` (default: `splash://localhost:8080`) to point at your service.
```
splash-links client create-entity \
  --entity-type Experiment \
  --name "SAXS run 42" \
  --properties '{"beamline":"12.3.1"}'
```

```
splash-links client create-link \
  <subject-id> produced <object-id> \
  --properties '{"confidence":0.99}'
```

```
splash-links client find-links <entity-id> --predicate produced --limit 20
```

```
git clone https://github.com/als-computing/splash_links.git
cd splash_links
pixi install
pixi run test
```

Tests require ≥ 90% coverage and will fail the build if that threshold is not met.
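The threshold lives in the coverage configuration carried by `pyproject.toml`. A representative fragment, assuming coverage.py-style keys (the repository's exact configuration may differ):

```toml
[tool.coverage.report]
fail_under = 90   # the test task fails if total coverage drops below this
```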
| Task | Command | Description |
|---|---|---|
| `serve` | `pixi run serve` | Start dev server with auto-reload on port 8080 |
| `test` | `pixi run test` | Run pytest with coverage (≥90% required) |
| `lint` | `pixi run lint` | Ruff lint check |
| `fmt` | `pixi run fmt` | Ruff format |
| `docs` | `pixi run docs` | Serve MkDocs site locally |
| `entities` | `pixi run entities` | List entities in the database |
| `links` | `pixi run links` | List links in the database |
| `db` | `pixi run db` | Open raw SQLite interactive shell |

Pass extra flags after `--`, e.g. `pixi run entities -- --type Experiment --limit 5`.
```
src/splash_links/
  __init__.py       — package exports
  store.py          — abstract Store interface + SQLiteStore implementation
  schema.py         — Strawberry GraphQL types, queries, mutations
  app.py            — FastAPI app factory (create_app)
  main.py           — uvicorn entry point
  cli.py            — Typer CLI (entities, links, shell commands)
  _tests/
    test_service.py — integration tests (store unit tests + GraphQL HTTP tests)
pixi.toml           — environment definition and task shortcuts
pyproject.toml      — package metadata, ruff config, coverage config
Dockerfile          — two-stage build (pixi build → debian:bookworm-slim runtime)
```
The service is built on SQLAlchemy 2.x and supports multiple backends. The active backend is selected by the `SPLASH_LINKS_DB` environment variable, which accepts any SQLAlchemy connection URL (or a plain file path / `:memory:` shorthand for SQLite).
SQLite is the default and the recommended choice for most deployments. It requires no external server, is trivially portable (a single file), and handles the read-heavy workloads typical of this service well.
```
SPLASH_LINKS_DB=links.sqlite pixi run serve
# or a bare path — the service auto-converts it to sqlite:///…
SPLASH_LINKS_DB=/data/links.sqlite pixi run serve
```

Use PostgreSQL when you need concurrent writes, role-based access control, or want to run the service behind a load balancer. A `docker-compose.yml` is provided that starts a Postgres instance alongside the application:

```
docker compose up --build
```

Supply an explicit URL for an external Postgres cluster:

```
SPLASH_LINKS_DB="postgresql+psycopg2://user:pass@host/dbname" pixi run serve
```

Alembic migrations are applied automatically on startup regardless of backend.
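The bare-path and `:memory:` shorthands described above can be normalized with logic along these lines. This is an illustrative sketch, not the service's actual code:

```python
def normalize_db_url(value: str) -> str:
    """Turn a SPLASH_LINKS_DB value into a SQLAlchemy connection URL.

    Full URLs (anything containing '://') pass through untouched; a bare
    file path or ':memory:' is treated as SQLite shorthand.
    """
    if "://" in value:
        return value                   # already a SQLAlchemy URL
    if value == ":memory:":
        return "sqlite:///:memory:"    # in-memory SQLite
    return f"sqlite:///{value}"        # plain file path

print(normalize_db_url("/data/links.sqlite"))
# sqlite:////data/links.sqlite  (four slashes: absolute path)
```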
DuckDB support is experimental and intended for comparing analytical query performance against SQLite, not for production use. It is not covered by the test suite and may have compatibility gaps.
```
SPLASH_LINKS_DB="duckdb:///links.duckdb" pixi run serve
```

| Backend | Use case | Status |
|---|---|---|
| SQLite | Local / single-user installations | ✅ Recommended |
| PostgreSQL | Production / multi-user deployments | ✅ Supported |
| DuckDB | Analytical performance benchmarking | ⚠️ Experimental |
GitHub Actions (.github/workflows/build-app.yml) runs lint and tests on every push and pull request to main. The test step enforces the 90% coverage requirement — the build fails if coverage drops below that threshold.