llama-cpp-wasm-framework

Rust framework for running llama.cpp in the browser via WebAssembly for private local inference.


Overview

llama-cpp-wasm-framework is an open-source Rust framework for running llama.cpp models directly in the browser via WebAssembly. Inference happens entirely on the user's machine, so prompts and outputs never leave the device. It targets local LLM deployment workflows, including projects built around the Ollama ecosystem.

Topics: wasm llama-cpp browser-ai

Features

  • 🚀 Production-ready framework targeting local LLM use cases
  • 🔧 Easy to integrate into existing projects
  • 📦 Zero-configuration defaults with full customization support
  • 🧪 Comprehensive test coverage
  • 📖 Well-documented API with examples

Installation

cargo add llama-cpp-wasm-framework
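Equivalently, declare the dependency in Cargo.toml by hand. The version below is a placeholder, not a published release number; pin whatever version `cargo add` resolves for you:

```toml
[dependencies]
# Placeholder version — check crates.io for the latest release.
llama-cpp-wasm-framework = "0.1"
```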

Quick Start

// llama-cpp-wasm-framework — quick start example
// See docs/ for full API reference
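Since this README does not document the public API, the sketch below is illustrative only. The `Engine`, `GenerationParams`, `from_url`, and `generate` names are assumptions, not the crate's real API, and the model URL is a placeholder; consult docs/ for the actual types and entry points.

```rust
// Hypothetical quick-start sketch — item names are assumptions,
// not the crate's documented API. See docs/ for the real interface.
use llama_cpp_wasm_framework::{Engine, GenerationParams};
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub async fn run() -> Result<(), JsValue> {
    // Fetch a GGUF model into the browser's memory (placeholder URL).
    let engine = Engine::from_url("models/tiny.gguf").await?;

    // Generate a completion with default parameters; everything
    // executes locally inside the WebAssembly module.
    let reply = engine
        .generate(
            "Explain WebAssembly in one sentence.",
            GenerationParams::default(),
        )
        .await?;

    web_sys::console::log_1(&reply.into());
    Ok(())
}
```

The `wasm_bindgen` attribute exposes `run` to JavaScript, which is the usual pattern for browser-facing Rust/WASM entry points; the actual framework may wire this up differently.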

Usage

See docs/ for full documentation and advanced usage examples.

Contributing

Contributions are welcome! Please read CONTRIBUTING.md first.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/my-feature)
  3. Commit your changes (git commit -am 'feat: add my feature')
  4. Push to the branch (git push origin feature/my-feature)
  5. Open a Pull Request

License

MIT © dahlinomine
