# hallucination-prevention

Here are 23 public repositories matching this topic...

Reliable research infrastructure for AI agents. Evidence-backed web search with citations, confidence scores, and Clarity anti-hallucination. MCP server, REST API, Python SDK.

  • Updated Apr 12, 2026
  • TypeScript
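The description above mentions citations and confidence scores attached to search results. A minimal sketch of what such an evidence-backed record might look like — the class and field names here are illustrative assumptions, not the project's actual SDK schema:

```python
from dataclasses import dataclass, field

# Hypothetical shape of an evidence-backed research result; names are
# illustrative, not taken from the project's real API.
@dataclass
class Citation:
    url: str
    snippet: str
    confidence: float  # 0.0-1.0 score attached to this piece of evidence

@dataclass
class ResearchAnswer:
    claim: str
    citations: list[Citation] = field(default_factory=list)

    def overall_confidence(self) -> float:
        """Average evidence confidence; an unsupported claim scores 0."""
        if not self.citations:
            return 0.0
        return sum(c.confidence for c in self.citations) / len(self.citations)
```

Tying a claim's score to its citations makes "no evidence" and "low confidence" the same signal, which is the basic anti-hallucination idea the description advertises.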
re-think_protocol

re!think it. A system prompt teaching LLMs to execute two core tasks: answering complex questions without hallucinations, and generating creative ideas without clichés. Written in math-like logic, which LLMs parse better than plain language. Built for mid-to-high-complexity tasks, with a Bypass branch that executes simple prompts directly without added cognitive overhead.

  • Updated Mar 23, 2026
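The Bypass branch described above routes simple prompts past the full protocol. A minimal sketch of that routing idea, assuming a cheap heuristic gate — the threshold and keyword list are invented for illustration:

```python
def route_prompt(prompt: str, complexity_threshold: int = 12) -> str:
    """Hypothetical sketch of a Bypass gate (not the protocol's actual logic):
    a cheap heuristic decides whether a prompt needs the full anti-hallucination
    protocol or can be executed directly with no added overhead."""
    word_count = len(prompt.split())
    needs_reasoning = any(
        k in prompt.lower() for k in ("why", "prove", "design", "compare")
    )
    if word_count < complexity_threshold and not needs_reasoning:
        return "bypass"        # short, factual prompt: answer directly
    return "full_protocol"     # apply the structured reasoning branch
```

For example, `route_prompt("What is the capital of France?")` takes the bypass, while a long "compare X and Y" prompt triggers the full protocol.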

TrustScoreEval: Trust scores for AI/LLM responses — detect hallucinations, flag misinformation, and validate outputs. Build trustworthy AI.

  • Updated Oct 13, 2025
  • Python
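One common way to compute a trust score of the kind described above is self-consistency: resample the model and measure agreement. The heuristic below is an illustrative assumption, not TrustScoreEval's actual algorithm:

```python
from collections import Counter

def trust_score(sampled_answers: list[str]) -> float:
    """Illustrative self-consistency heuristic (not TrustScoreEval's real
    method): the fraction of resampled answers agreeing with the majority.
    Low agreement across samples is a common hallucination signal."""
    if not sampled_answers:
        return 0.0
    normalized = [a.strip().lower() for a in sampled_answers]
    _, top_count = Counter(normalized).most_common(1)[0]
    return top_count / len(normalized)
```

Three identical samples score 1.0; a model that gives a different answer every time scores near `1/n`, which a caller can threshold to flag unreliable outputs.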

Theorem of the Unnameable [⧉/⧉ₛ] — Epistemological framework for binary information classification (Fixed Point/Fluctuating Point). Application to LLMs via 3-6-9 anti-loop matrix. Empirical validation: 5 models, 73% savings, zero hallucination on marked zones.

  • Updated Feb 1, 2026
  • Python
hlft-legality-engine

Semantic Processing Unit (SPU): A neurosymbolic AI architecture replacing token prediction with differentiable matrix operators. It guarantees 100% logical accuracy, structural safety, and zero-error invariants on OOD data by decoupling semantic parsing from hardware-accelerated matrix algebra.

  • Updated Feb 19, 2026
