
GsRag — TypeScript RAG with Knowledge Graphs

GsRag is a TypeScript library for Retrieval-Augmented Generation with knowledge graph support. It implements the LightRAG paper's approach to RAG: documents are chunked, entities and relationships are extracted via LLM, organized into a knowledge graph, and queried using graph traversal + vector search.
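The first step of that pipeline, chunking, can be sketched in a few lines. This is a self-contained illustration of fixed-size windows with overlap, not the library's implementation (GsRag chunks by tokens via @gsrag/shared's tokenizer; the `chunkText` name and character-based sizing here are assumptions for the sketch):

```typescript
// Illustrative chunking: fixed-size character windows with a small overlap so
// that an entity mentioned near a boundary appears in two adjacent chunks.
export interface Chunk {
  content: string;
  index: number;
}

export function chunkText(text: string, size = 1200, overlap = 100): Chunk[] {
  const chunks: Chunk[] = [];
  const step = size - overlap; // each window starts `overlap` chars before the previous one ended
  for (let start = 0, index = 0; start < text.length; start += step, index++) {
    chunks.push({ content: text.slice(start, start + size), index });
  }
  return chunks;
}
```

Each chunk is then sent to the LLM for entity/relationship extraction and embedded for vector search.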

Library-first design. Unlike the Python reference (which is server-oriented), GsRag is designed to be embedded in applications — no HTTP server, no web UI, just a clean TypeScript API.

Installation

npm install @gsrag/gsrag
# or individual packages:
npm install @gsrag/core @gsrag/storage @gsrag/providers

Quick Start (Postgres)

import { GsRag } from "@gsrag/core";
import { OpenAICompletionAdapter, OpenAIEmbeddingAdapter } from "@gsrag/providers";
import { createPostgresStorageRegistry } from "@gsrag/storage";

const gsrag = new GsRag({
  providers: {
    completion: new OpenAICompletionAdapter({ model: "gpt-4o-mini", apiKey }),
    embedding: new OpenAIEmbeddingAdapter({ embeddingModel: "text-embedding-3-small", embeddingDimension: 1536, apiKey }),
  },
  storages: createPostgresStorageRegistry({
    connectionString: "postgresql://user:pass@localhost:5432/rag",
  }),
});

await gsrag.initialize();
await gsrag.insertDocuments({ documents: [{ content: "Ada Lovelace wrote the first algorithm.", filePath: "ada.txt" }] });
const result = await gsrag.query("Who contributed to computing?", { mode: "hybrid" });
console.log(result.content);
await gsrag.finalize();

Note: Postgres is the only production-ready storage backend. Local JSON, Qdrant, Redis, and MongoDB are still in development.
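The `mode` option used in the Quick Start accepts six values (see Features below). As a rough orientation, following the LightRAG design the modes differ in whether they consult the knowledge graph at all; the type and helper here are illustrative, not exports of @gsrag/core:

```typescript
// The six mode names come from GsRag's feature list; the classification below
// (naive = vector search only, bypass = straight to the LLM, the rest use the
// knowledge graph) reflects the LightRAG approach and is an assumption here.
export type QueryMode = "local" | "global" | "hybrid" | "naive" | "mix" | "bypass";

export function usesKnowledgeGraph(mode: QueryMode): boolean {
  return mode !== "naive" && mode !== "bypass";
}
```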

Packages

  • @gsrag/core: Main SDK facade, pipeline orchestration, query runtime
  • @gsrag/contracts: TypeScript interfaces and type definitions
  • @gsrag/providers: LLM/embedding adapters (OpenAI, Ollama, Anthropic, Gemini)
  • @gsrag/storage: Postgres (production); Local, Qdrant, Redis, MongoDB (in development)
  • @gsrag/readers: Document readers (text, PDF, DOCX, PPTX, XLSX, EPUB)
  • @gsrag/shared: Tokenizer, prompts, utilities

Features

  • 6 query modes: local, global, hybrid, naive, mix, bypass
  • LLM providers: OpenAI, Ollama, Anthropic, Gemini
  • Storage backends: Postgres (AGE + pgvector, production) — Local JSON, Qdrant, Redis, MongoDB (in development)
  • Multi-workspace: Full isolation per workspace
  • Job queue: Background document processing
  • Streaming: AsyncIterable-based streaming responses
  • Caching: LLM response + keyword extraction caching
  • Graph admin: Full CRUD on knowledge graph entities and relations
  • Custom KG import: Pre-built knowledge graph insertion
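Streaming responses are AsyncIterable-based, so they can be consumed with a plain `for await...of` loop. A minimal self-contained sketch (the `fakeStream` generator is a stand-in for the stream a streaming query would return, not the GsRag API):

```typescript
// Stand-in for a streaming query result: an async generator yielding tokens.
async function* fakeStream(): AsyncIterable<string> {
  for (const token of ["Ada ", "Lovelace ", "wrote ", "the ", "first ", "algorithm."]) {
    yield token;
  }
}

// Consume the stream; in an application you would forward each token to the UI
// as it arrives instead of buffering the whole answer.
export async function collect(stream: AsyncIterable<string>): Promise<string> {
  let out = "";
  for await (const token of stream) {
    out += token;
  }
  return out;
}
```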

Documentation

  • Getting Started: Installation, basic usage, quick start
  • Configuration: All config options and defaults
  • Providers: LLM provider setup and options
  • Storage: Backend setup (local, Postgres, etc.)
  • Query: Query modes, parameters, streaming
  • Document Pipeline: Ingestion, chunking, deletion
  • Graph Admin: KG CRUD, merge, custom import
  • Multi-Workspace: Workspace isolation setup
  • Architecture: Package structure, data flow
  • Advanced: Custom providers, pipeline internals

Requirements

  • Bun >= 1.0 or Node.js >= 18
  • An LLM provider (OpenAI, Ollama, etc.)

Development

See examples/ for runnable examples.

License

MIT
