Samvidha is a state-of-the-art GraphRAG (Graph Retrieval-Augmented Generation) platform designed for high-precision constitutional legal research. It transforms the Indian Constitution into a dynamic, queryable intelligence network, allowing users to trace the evolution of laws across decades of amendments.
Samvidha uses a multi-agent orchestration layer powered by LangGraph, combining the structural precision of a Graph Database with the semantic depth of Vector Search.
- Intent Classification: Decomposes user queries into constitutional entities (Articles, Amendments, Parts).
- Parallel Retrieval:
  - Graph Routing (Neo4j): Executes multi-hop Cypher queries to find cross-references (e.g., "Article 358 suspends Article 19").
  - Semantic Search (Qdrant): Fetches raw text chunks using dense vector embeddings.
- Hybrid Reranking: Uses a BGE-Reranker model to score and fuse results from both graph and vector paths.
- Temporal Organization: Chronologically sorts modifications to ensure the "Hierarchy of Truth" favors the latest amendments.
- Legal Reasoning: A specialized LLM synthesizes the context into a formal legal opinion with strict citation requirements.
- Quality Assurance: A critique agent validates the answer against the retrieved facts, triggering a recursive "Refine" loop if gaps are detected.
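The agent loop above can be sketched in plain Python. This is a simplified, stdlib-only sketch with stubbed retrieval and critique logic, not the actual LangGraph implementation; all function names and thresholds here are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    query: str
    entities: list = field(default_factory=list)
    context: list = field(default_factory=list)
    answer: str = ""
    refinements: int = 0

def classify_intent(state):
    # Stub: extract constitutional entities (Articles, Amendments, Parts).
    state.entities = [w for w in state.query.split() if w.istitle()]
    return state

def retrieve(state):
    # Stub: in the real system these two paths run in parallel
    # against Neo4j (graph) and Qdrant (vectors), then get reranked.
    graph_hits = [f"graph:{e}" for e in state.entities]
    vector_hits = [f"vector:{e}" for e in state.entities]
    state.context = graph_hits + vector_hits
    return state

def reason(state):
    # Stub for the legal-reasoning LLM call.
    state.answer = f"Opinion based on {len(state.context)} sources."
    return state

def critique(state):
    # Accept the answer if context looks sufficient; cap refine loops.
    return len(state.context) >= 2 or state.refinements >= 2

def run(query):
    state = AgentState(query=query)
    classify_intent(state)
    while True:
        retrieve(state)
        reason(state)
        if critique(state):
            return state
        state.refinements += 1  # recursive "Refine" loop
```

The real workflow adds hybrid reranking and temporal sorting between retrieval and reasoning; this sketch only shows the control flow of the classify → retrieve → reason → critique loop.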
⚡ Real-Time Intelligence Streaming: The entire workflow is served via Server-Sent Events (SSE). The frontend terminal streams execution traces live as agents complete their tasks, providing zero-latency feedback before the final opinion is even finished.
- Graph Database: Neo4j (Local Docker or Neo4j Aura Cloud).
- Vector Database: Qdrant (Local Docker or Qdrant Cloud).
- Orchestration: LangGraph (for complex node-based agent workflows).
- Models:
  - Reasoning: Llama 3.3 70B via Groq (ultra-fast inference).
  - Reranking: BAAI/bge-reranker-base (cross-encoder).
  - Embeddings: HuggingFace/gte-large.
- 3D Visualization: react-force-graph-3d (Three.js) for interactive constitutional mapping.
- State Management: Zustand (real-time store for search results and execution traces).
- Styling: Vanilla CSS with a "Classified Dossier" aesthetic.
├── agent/
│ ├── core/ # LangGraph workflow definitions & Ingestion logic
│ ├── services/ # Core intelligence modules (Neo4j, Qdrant, Reasoner)
│ ├── utils/ # Configuration & system settings
│ ├── .env.template # Environment configuration template
│ └── main.py # FastAPI entry point & Docker lifecycle management
├── frontend/
│ ├── src/
│ │ ├── components/ # 3D Graph, Legal Dossier, System Terminal
│ │ └── store/ # Zustand global state (LexStore)
├── Dockerfile.neo4j # Custom Neo4j build with data pre-loading
├── Dockerfile.qdrant # Custom Qdrant build with data pre-loading
├── neo4j_data/ # Persistent graph storage
└── qdrant_storage/ # Persistent vector storage
- Python 3.10+
- Node.js 20+
- Docker (required for automated database lifecycle)
- Groq API Key
- Navigate to the `agent` directory:

  ```shell
  cd agent
  ```

- Create and activate a virtual environment:

  ```shell
  python -m venv .venv
  source .venv/bin/activate
  ```

- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Configure environment variables. Samvidha supports Hybrid Cloud Mode; copy the template to get started:

  ```shell
  cp .env.template .env
  ```

  Then fill in your `GROQ_API_KEY`. To use cloud databases, provide `NEO4J_URI` and `QDRANT_ENDPOINT`; otherwise, the system defaults to local Docker.

- Launch the API:

  ```shell
  uvicorn main:app --reload
  ```

  Note: The API will automatically start Neo4j and Qdrant Docker containers on startup.
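Putting the configuration together, a filled-in `.env` might look like the following. The values are placeholders; only the variable names come from the setup steps above:

```
GROQ_API_KEY=gsk_your_key_here
# Optional — uncomment to switch to Hybrid Cloud Mode:
# NEO4J_URI=neo4j+s://<your-instance>.databases.neo4j.io
# NEO4J_PASSWORD=<your-password>
# QDRANT_ENDPOINT=https://<your-cluster>.qdrant.io
```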
The agent is pre-configured for deployment as a Docker Space on Hugging Face:
- Create a new Space on Hugging Face and select Docker as the SDK.
- Point the repository structure at the `agent/` directory, or push only the `agent/` folder to the Space.
- Add your secrets (from `.env`) to the Hugging Face Space Secrets settings: `GROQ_API_KEY`, `NEO4J_URI`, `NEO4J_PASSWORD`, `QDRANT_ENDPOINT`, etc.
- The Space will automatically use the provided `agent/Dockerfile` and run on port 7860.
- Navigate to the `frontend` directory:

  ```shell
  cd frontend
  ```

- Install dependencies:

  ```shell
  npm install
  ```

- Start the development server:

  ```shell
  npm run dev
  ```
Samvidha features a Zero-Config Lifecycle Manager. It automatically detects your deployment environment based on your .env configuration:
- Local Developer Mode: If no cloud URIs are detected, the API automatically pulls and launches Neo4j and Qdrant Docker containers on startup. It handles the entire lifecycle (start/stop) and persists data to local volumes.
- Hybrid Cloud Mode: If a `NEO4J_URI` (Aura) or `QDRANT_ENDPOINT` (Cloud) is provided, the system suppresses the local Docker lifecycle and establishes high-performance remote connections. This is recommended for production-grade reliability and larger datasets.
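The mode switch described above amounts to a simple environment check. A sketch of that detection logic (the variable names follow the `.env` template; the function itself is illustrative):

```python
import os

def detect_mode(env=os.environ) -> str:
    """Return 'hybrid-cloud' if any remote database URI is configured,
    otherwise fall back to the local Docker lifecycle."""
    if env.get("NEO4J_URI") or env.get("QDRANT_ENDPOINT"):
        return "hybrid-cloud"
    return "local-docker"
```

In Local Developer Mode the startup code would then pull and run the two containers; in Hybrid Cloud Mode it skips Docker entirely and connects to the remote endpoints.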
Unlike traditional RAG systems that treat law as flat text, Samvidha models the Constitution as a Directed Acyclic Graph (DAG) of legal effects:
| Relationship | Description | Data Impact |
|---|---|---|
| AMENDS | A standard modification to text. | Updates property values. |
| OVERRIDES | A "Notwithstanding" clause. | Prioritizes target node in logic. |
| SUSPENDS | Operational pause (e.g., Art 358). | Marks target as "Inactive" in context. |
| SUBSTITUTES | Full textual replacement. | Deprecates previous versions. |
| REPEALS/OMITS | Removal from the Constitution. | Logic-gate for removal from current law. |
| REFERS_TO | Cross-article citation. | Triggers automatic multi-hop retrieval. |
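The table above implies a simple evaluation rule: walk the edges targeting a provision in chronological order and let the latest structural effect win. A stub of that resolution logic (illustrative, not the project's actual resolver):

```python
def provision_status(edges):
    """Given (year, relationship) edges targeting one provision,
    applied in chronological order, return its current status under
    the 'Hierarchy of Truth' (the latest effect wins)."""
    status = "active"
    for year, rel in sorted(edges):
        if rel in ("REPEALS", "OMITS"):
            status = "removed"       # logic-gate: drop from current law
        elif rel == "SUSPENDS":
            status = "inactive"      # operational pause (e.g., Art 358)
        elif rel in ("AMENDS", "SUBSTITUTES"):
            status = "active"        # text changed, provision stands
    return status
```

Replaying only the edges dated up to a given year yields the "as of year Y" view that the temporal timeline exposes.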
The platform features a proprietary Nexus Discovery Interface:
- 3D Intelligence Map: A live Three.js visualization of the entire constitutional network. Nodes are color-coded (Red: Amendment, Yellow: Article, Blue: Clause).
- Temporal Timeline: A scrubbable timeline that allows users to travel back to any year (e.g., 1976) and see exactly which provisions existed at that point in history.
- System Execution Trace: A real-time terminal readout of the LangGraph agent's thinking process, providing transparency into how the AI is fetching and validating facts.
- Adaptive Architecture: A fully responsive mobile experience that transforms the Legal Dossier into a Bottom Sheet and optimizes the 3D Graph into a low-power, non-interactive cinematic background on smaller viewports.
Generates 1-4 distinct Cypher queries based on the query type (Discovery Mode vs. Deep Dive). It manages type-safety (integers for amendments, strings for articles) to ensure zero-miss retrieval.
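The type-safety concern mentioned above (integer amendment numbers vs. string article identifiers such as "21A") can be sketched as a small normalizer. This is a hypothetical helper, not the actual code:

```python
def normalize_entity(kind: str, value: str):
    """Coerce query parameters to the types stored in the graph:
    amendments are integer-numbered, while articles may carry letter
    suffixes (e.g. '21A') and must stay strings."""
    if kind == "amendment":
        return int(value)          # "42" -> 42
    return str(value).upper()      # "21a" -> "21A"
```

Matching the stored property types in the Cypher parameters is what prevents silent zero-result queries.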
The "Supreme Court Researcher" agent. Enforces a strict "Hierarchy of Truth" where Graph relationships (DELETED, AMENDS) override semantic chunks if contradictions occur.
Fuses graph structural data with keyword search. It applies "temporal boosting" to ensure that the latest amendments are prioritized in the final answer synthesis.
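Score fusion with "temporal boosting" can be illustrated as a weighted sum in which newer amendments receive a recency bonus. The weights and the 1950 baseline here are made up for the sketch:

```python
def fuse_score(rerank_score, graph_score, year, latest_year,
               recency_weight=0.2):
    """Combine reranker and graph scores, boosting recent amendments.
    The recency term scales linearly from 0 (1950) to 1 (latest year)."""
    recency = (year - 1950) / max(latest_year - 1950, 1)
    return 0.5 * rerank_score + 0.3 * graph_score + recency_weight * recency
```

With equal base scores, a 2020 amendment outranks a 1951 one purely on the recency term, which is the behavior described above.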
The system includes dedicated scripts for high-fidelity data population:
- `graph_ingestion.py`: Uses multi-pass LLM extraction to identify 9 relationship types (AMENDS, OVERRIDES, SUSPENDS, etc.) from legal summaries.
- `validate_ingestion.py`: Verifies graph completeness by comparing node counts against expected legal provisions.
Samvidha is an AI-powered intelligence tool designed for research assistance. While it maintains a high degree of precision through GraphRAG, it should be used to augment, not replace, professional legal consultation. Please note that AI-generated answers may be incorrect, partially correct, or missing critical nuances; always verify findings against official Gazette notifications.