100 changes: 100 additions & 0 deletions .github/workflows/ci.yml
@@ -163,6 +163,22 @@ jobs:
needs: build
runs-on: ubuntu-latest
timeout-minutes: 10
# ST-1: the BlazegraphStore parity gate in
# adapter-parity-extra.test.ts intentionally fails red when
# `BLAZEGRAPH_URL` is missing so a green pass cannot lie about
# parity coverage. We boot a real `lyrasis/blazegraph` service
# container (NanoSparqlServer on :9999) so the parity test
# exercises both adapters against actual engines instead of the
# canned node:http stub that ST-1 documents as misleading.
services:
blazegraph:
image: lyrasis/blazegraph:2.1.5
ports:
# The image declares EXPOSE 9999 but actually starts the
# NanoSparqlServer on Jetty's :8080 (verified via
# `netstat` inside the container). Map host:9999 -> container:8080
# so the rest of this job can keep referring to localhost:9999.
- 9999:8080
steps:
- uses: actions/checkout@v4
- uses: pnpm/action-setup@v4
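The fail-red sentinel the ST-1 comment describes can be sketched as a suite-setup guard. This is a hedged illustration, not the repo's actual test code; `requireEngineUrl` is a hypothetical helper name:

```typescript
// Sketch of the ST-1 sentinel pattern: refuse to run (and therefore refuse
// to pass) when the engine URL is absent, so a green result always implies
// a real engine was exercised. Name and shape are assumptions.
function requireEngineUrl(env: Record<string, string | undefined>): string {
  const url = env["BLAZEGRAPH_URL"];
  if (!url) {
    // Failing loudly is the point: a missing engine must never be
    // reported as passing parity coverage.
    throw new Error(
      "BLAZEGRAPH_URL is not set; refusing to pass without a real engine"
    );
  }
  return url;
}
```

Called once at suite setup, this makes vitest report the missing-engine error as a test failure instead of silently skipping the parity assertions.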
@@ -179,9 +195,93 @@ jobs:
- name: Restore build outputs
run: tar -xzf /tmp/build-outputs.tgz

- name: Wait for Blazegraph SPARQL endpoint
# NanoSparqlServer needs ~5-15s on a cold start. Poll the
# default `kb` namespace's SPARQL endpoint with `ASK {}` for
# up to 60s. We use `continue-on-error: true` so a failed
# boot does not turn the whole shard red — instead the
# storage suite below will run, hit the original ST-1 sentinel,
# and surface the missing-engine error in the failure log
# exactly like before. That preserves the "no false positives"
# contract: silently passing without a real engine is impossible.
run: |
set -e
for i in $(seq 1 30); do
if curl -sf "http://localhost:9999/bigdata/namespace/kb/sparql?query=ASK%20%7B%7D" >/dev/null 2>&1; then
echo "blazegraph ready after attempt ${i} (~$((2 * (i - 1)))s)"
exit 0
fi
sleep 2
done
echo "::warning::blazegraph never became ready within 60s; the ST-1 parity sentinel will surface the missing engine"
exit 1
continue-on-error: true
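The shell loop above is a fixed-budget readiness poll. A generic sketch of the same pattern (names and shape are illustrative, not from the repo):

```typescript
// Try a readiness probe a fixed number of times with a fixed delay,
// succeeding as soon as the probe does, giving up after the budget.
async function waitFor(
  ready: () => Promise<boolean>,
  attempts: number,
  delayMs: number
): Promise<boolean> {
  for (let i = 1; i <= attempts; i++) {
    if (await ready()) return true; // e.g. the `ASK {}` probe answered
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return false; // the caller decides whether giving up is fatal
}
```

With `attempts = 30` and `delayMs = 2000` this matches the step's 60 s budget.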

- name: Create Blazegraph quads-mode namespaces
# The `kb` namespace shipped by lyrasis/blazegraph defaults to
# triples mode (quads=false), so any GRAPH <…> { … } clause is
# silently dropped on insert and the conformance suite gets
# surprising results (deleteByPattern over-deletes because all
# triples land in the default graph regardless of intent).
#
# We provision TWO quads-mode namespaces, not one. Vitest runs
# files in parallel by default, and `storage.test.ts` issues
# `DROP ALL` before every Blazegraph conformance test in its
# suite — pointing `adapter-parity-extra.test.ts` at the SAME
# namespace meant the conformance suite's `DROP ALL` could
# fire mid-parity and wipe its inserted fixture, making the
# parity lane flaky.
# Isolating per file (not per worker) is sufficient: vitest
# serialises tests WITHIN a file, so the conformance suite's
# `DROP ALL` between its own tests is fine, and the parity
# suite holds exclusive ownership of its own namespace.
#
# `dkgq` → conformance / general storage suite
# (storage.test.ts and other DROP-ALL tests)
# `dkgq-parity` → adapter-parity-extra.test.ts (real Oxigraph
# ↔ Blazegraph parity), keyed off
# `BLAZEGRAPH_PARITY_URL` so the parity test
# runs against an isolated namespace and
# cannot be wiped mid-run.
#
# continue-on-error so a failure here surfaces as the storage
# suite's missing-engine error rather than a separate red lane.
run: |
set -e
create_namespace() {
local ns="$1"
local props="/tmp/${ns}.props"
# Quads mode requires disabling inference (NoAxioms) and truth
# maintenance — Blazegraph rejects the namespace creation
# otherwise with: "com.bigdata.rdf.store.AbstractTripleStore.quads
# does not support inference".
{
echo 'com.bigdata.rdf.store.AbstractTripleStore.quads=true'
echo 'com.bigdata.rdf.store.AbstractTripleStore.statementIdentifiers=false'
echo 'com.bigdata.rdf.store.AbstractTripleStore.axiomsClass=com.bigdata.rdf.axioms.NoAxioms'
echo 'com.bigdata.rdf.sail.truthMaintenance=false'
echo "com.bigdata.rdf.sail.namespace=${ns}"
} > "$props"
echo "--- ${props} ---" && cat "$props" && echo '---'
curl -fsS -X POST -H 'Content-Type: text/plain' \
--data-binary @"$props" \
http://localhost:9999/bigdata/namespace
# Verify the namespace responds to SPARQL.
curl -fsS "http://localhost:9999/bigdata/namespace/${ns}/sparql?query=ASK%20%7B%7D" >/dev/null
echo "blazegraph ${ns} (quads) namespace ready"
}
create_namespace "dkgq"
create_namespace "dkgq-parity"
continue-on-error: true
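The triples-vs-quads failure mode the comment describes can be modeled in a few lines. This toy store is an assumption for illustration only, not Blazegraph internals: in triples mode the GRAPH term is dropped on insert, so a graph-scoped delete removes everything.

```typescript
type Quad = { s: string; p: string; o: string; g: string };

class ToyStore {
  private data: Quad[] = [];
  constructor(private readonly quadsMode: boolean) {}

  insert(q: Quad): void {
    // Triples mode collapses every statement into the default graph,
    // discarding the caller's GRAPH term.
    this.data.push(this.quadsMode ? q : { ...q, g: "default" });
  }

  // Deletes everything in one named graph (a stand-in for deleteByPattern).
  deleteByGraph(g: string): void {
    const target = this.quadsMode ? g : "default";
    this.data = this.data.filter((q) => q.g !== target);
  }

  size(): number {
    return this.data.length;
  }
}
```

In triples mode, deleting "graph g1" hits the default graph where every statement landed, wiping unrelated data; in quads mode the delete stays scoped, which is why the namespaces are provisioned with `quads=true`.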

- name: "Core (442 tests)"
run: pnpm --filter @origintrail-official/dkg-core test
- name: "Storage (78 tests)"
env:
BLAZEGRAPH_URL: http://localhost:9999/bigdata/namespace/dkgq/sparql
# r31-10 (ci.yml:256): isolated namespace for the parity
# suite so parallel vitest workers can't have one file's
# `DROP ALL` wipe another's fixture mid-test.
BLAZEGRAPH_PARITY_URL: http://localhost:9999/bigdata/namespace/dkgq-parity/sparql
run: pnpm --filter @origintrail-official/dkg-storage test
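The two env vars above imply a small resolution rule on the test side. A hypothetical wiring sketch (the real adapter-parity-extra.test.ts may resolve this differently): prefer the isolated namespace, fall back to the shared one.

```typescript
// Prefer the parity suite's isolated namespace; fall back to the shared
// storage namespace only when no isolated one is configured.
function resolveParityUrl(
  env: Record<string, string | undefined>
): string | undefined {
  return env["BLAZEGRAPH_PARITY_URL"] ?? env["BLAZEGRAPH_URL"];
}
```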
- name: "Chain unit (46 tests)"
run: pnpm --filter @origintrail-official/dkg-chain test
7 changes: 6 additions & 1 deletion .github/workflows/codex-review.yml
@@ -16,7 +16,12 @@ jobs:
review:
name: Codex Review
runs-on: ubuntu-latest
-timeout-minutes: 15
+# 30 min ceiling: gpt-5.4 / effort: high can take ~25 min on a
+# large (>30k-line) diff, so the 15 min limit was cutting reviews
+# off mid-stream. The concurrency group above already sets
+# cancel-in-progress, so a new push kills any still-running review;
+# a generous ceiling has no downside.
+timeout-minutes: 30
# Skip fork PRs - they cannot access repository secrets
if: github.event.pull_request.head.repo.full_name == github.repository

12 changes: 6 additions & 6 deletions .github/workflows/npm-continuous-publish.yml
@@ -57,12 +57,12 @@ jobs:
# workflow runs the full TORNADO / BURA / KOSAVA matrix in parallel on
# the same commit and is the authoritative test gate (enforced via
# branch protection on merges to v10-rc). Re-running `pnpm turbo test`
-# in this publish workflow was redundant with `ci.yml` AND incompatible
-# with the `accept-red-ci` policy on v10-rc: a deliberately-red
-# PROD-BUG sentinel test (e.g. network-sim K-4/K-5 — no seeded RNG, no
-# libp2p-parity harness) would fail this step and block every dev
-# pre-release, even though CI was already reporting the same red state
-# as documented bug evidence. See .test-audit/BUGS_FOUND.md.
+# in this publish workflow was redundant with `ci.yml` AND
+# incompatible with the `accept-red-ci` policy on v10-rc: a
+# deliberately-red PROD-BUG sentinel test (e.g. network-sim
+# K-4/K-5 — no seeded RNG, no libp2p-parity harness) would fail
+# this step and block every dev pre-release, even though CI was
+# already reporting the same red state as documented bug evidence.

- name: Compute dev version suffix
id: version
12 changes: 6 additions & 6 deletions .github/workflows/release.yml
@@ -100,12 +100,12 @@ jobs:

# NOTE: tests are intentionally NOT re-run here. The main `ci.yml`
# workflow is the authoritative test gate on the source commit.
-# Re-running `pnpm turbo test` in release-preflight was redundant with
-# `ci.yml` AND incompatible with the `accept-red-ci` policy on v10-rc:
-# deliberately-red PROD-BUG sentinel tests (e.g. network-sim K-4/K-5)
-# would block every tagged release even though CI was already reporting
-# the same red state as documented bug evidence. See
-# .test-audit/BUGS_FOUND.md for the sentinel inventory.
+# Re-running `pnpm turbo test` in release-preflight was redundant
+# with `ci.yml` AND incompatible with the `accept-red-ci` policy on
+# v10-rc: deliberately-red PROD-BUG sentinel tests (e.g.
+# network-sim K-4/K-5) would block every tagged release even though
+# CI was already reporting the same red state as documented bug
+# evidence.

markitdown-assets:
needs: