Rust framework for running llama.cpp in the browser via WebAssembly for private local inference.
llama-cpp-wasm-framework is an open-source framework for local LLM deployment in the Ollama ecosystem.
- 🚀 Production-ready framework targeting local LLM use cases
- 🔧 Easy to integrate into existing projects
- 📦 Zero-configuration defaults with full customization support
- 🧪 Comprehensive test coverage
- 📖 Well-documented API with examples
```sh
cargo add llama-cpp-wasm-framework
```

See docs/ for the full API reference and advanced usage examples.
Contributions are welcome! Please read CONTRIBUTING.md first.
- Fork the repository
- Create your feature branch (`git checkout -b feature/my-feature`)
- Commit your changes (`git commit -am 'feat: add my feature'`)
- Push to the branch (`git push origin feature/my-feature`)
- Open a Pull Request