Fix/4 prepare for seo #18

Merged

artdaw merged 5 commits into main from fix/4_prepare_for_SEO on Apr 24, 2026
Conversation


@artdaw artdaw commented Apr 24, 2026

No description provided.

artdaw added 5 commits April 25, 2026 00:10
HashRouter fragments aren't crawlable, so every route showed identical
meta tags and sitemap entries. Switch to BrowserRouter for clean URLs;
public/404.html stashes the intended pathname in sessionStorage and
redirects to '/', which main.tsx replays before the router boots.
.nojekyll disables Jekyll processing so dotfiles (e.g. .well-known)
ship in the Pages artifact.
Add src/components/SEO.tsx, a small runtime component each page renders
to update <title>, description/keywords/robots meta, canonical, Open
Graph, Twitter cards, and a page-specific JSON-LD block. Wire it into
all six pages with route-specific copy (TechArticle for docs pages,
SoftwareSourceCode for the landing page). Enrich index.html with a
site-wide @graph JSON-LD (Organization, WebSite, SoftwareSourceCode,
TechArticle), canonical, theme-color, font preconnect, and a <noscript>
pointer to the machine-readable surfaces for non-JS crawlers.
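A hedged sketch of the per-route tag map such a component computes; the prop shape, selector strings, and JSON-LD fields here are assumptions, and the real SEO.tsx may differ:

```typescript
// Illustrative only: models the metadata the <SEO> component described
// above would write into <head>. Field names are assumptions.
interface SeoProps {
  title: string;
  description: string;
  canonical: string;
  jsonLdType: "TechArticle" | "SoftwareSourceCode";
}

// Returns selector -> value pairs a component could apply to head tags.
function buildHeadTags(p: SeoProps): Record<string, string> {
  return {
    title: p.title,
    'meta[name="description"]': p.description,
    'link[rel="canonical"]': p.canonical,
    'meta[property="og:title"]': p.title,
    'meta[property="og:description"]': p.description,
    'meta[name="twitter:title"]': p.title,
    'script[type="application/ld+json"]': JSON.stringify({
      "@context": "https://schema.org",
      "@type": p.jsonLdType,
      headline: p.title,
    }),
  };
}
```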
Add a Vite plugin that mirrors spec/SPECIFICATION.md and
spec/grammar/kndl.ebnf from the repo root into /spec/* on both dev and
build, and regenerates /llms-full.txt (spec + EBNF + example index) at
build time so agents can slurp everything in one request.
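Reduced to its core, the /llms-full.txt regeneration could be a pure concatenation helper. The three inputs are named in the commit; the separator and fencing are assumptions about the output format:

```typescript
// Assembles an llms-full.txt-style bundle from the three sources the
// commit names: spec/SPECIFICATION.md, spec/grammar/kndl.ebnf, and the
// example index. Joining format is an assumption.
function buildLlmsFullTxt(spec: string, ebnf: string, exampleIndex: string): string {
  const fence = "```";
  return [
    spec.trim(),
    `${fence}ebnf\n${ebnf.trim()}\n${fence}`, // grammar fenced so it stays machine-parseable
    exampleIndex.trim(),
  ].join("\n\n---\n\n");
}
```

In the real plugin a function like this would run inside a Vite build hook; doing it at build time keeps the served file in lockstep with the spec in the repo root.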

Ship the discovery stack under public/:
- /llms.txt — concise index in the llmstxt.org format
- /robots.txt — explicit allows for GPTBot, ClaudeBot, PerplexityBot,
  Google-Extended, CCBot, Applebot-Extended, Meta-ExternalAgent, et al.
- /sitemap.xml — all six SPA routes plus the raw machine-readable URLs
- /.well-known/security.txt — security contact per securitytxt.org
- /examples/*.kndl — eight curated snippets (basic-building,
  intent-overheat, process-shipment, query-aggregation,
  healthcare-observation, fintech-transaction, robotics-pose,
  logistics-trace) plus index.md and a standalone index.html so
  /examples/ returns 200
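The explicit crawler allows would look roughly like this in /robots.txt. Only the bot names come from the commit; the surrounding directives and the sitemap URL are illustrative:

```text
# Hypothetical excerpt of public/robots.txt — bot names are from the
# commit message, everything else is an assumption.
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

Sitemap: https://example.invalid/sitemap.xml
```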

The Vite plugin needs node types, so add @types/node and set
tsconfig.node.json "types": ["node"].
GitHub Pages has no server-side SPA fallback, so direct hits on /spec,
/spec/full, /workflow, /mcp, /explorer would return the 404.html with a
404 status — poor for SEO even though the content renders after the JS
redirect. Stamp one HTML shell per route at build time so each URL is
served with status 200 and the right <title>, <meta>, canonical, Open
Graph, Twitter cards, and route-specific JSON-LD already in the markup.
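Reduced to a sketch, the per-route stamping could be a string transform over the built index.html, run once per route at the end of the build. The function and the shape of the template tags are assumptions:

```typescript
// Illustrative prerender step: stamp one HTML shell per SPA route so
// GitHub Pages serves each URL with status 200 and route-specific
// <head> markup already in place. Assumes well-formed single-line tags.
function stampRoute(template: string, title: string, canonical: string): string {
  return template
    .replace(/<title>[^<]*<\/title>/, `<title>${title}</title>`)
    .replace(/(<link rel="canonical" href=")[^"]*(" ?\/?>)/, `$1${canonical}$2`);
}
```

The real plugin would also swap description, Open Graph, Twitter, and JSON-LD blocks the same way before writing dist/&lt;route&gt;/index.html.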

At runtime the <SEO> component finds the same tags (via data-seo
selectors) and overwrites them in place with matching values, so there
is no duplication or flash of old meta.
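The "overwrite in place, never duplicate" behaviour can be modeled as a keyed merge. The data-seo attribute is from the text above; the function and map model are illustrative:

```typescript
// Models the runtime <SEO> update: tags the prerendered shell already
// carries (located via data-seo selectors) are overwritten by key, so
// the head never accumulates duplicate meta tags.
function mergeSeoTags(
  prerendered: Map<string, string>,
  runtime: Map<string, string>,
): Map<string, string> {
  const head = new Map(prerendered);
  for (const [selector, value] of runtime) head.set(selector, value); // overwrite, don't append
  return head;
}
```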

Wire it in as the final `build` step and as a standalone `prerender` script.
@artdaw artdaw self-assigned this Apr 24, 2026
@artdaw artdaw merged commit 2b3febe into main Apr 24, 2026
5 checks passed
