
Self-Hosted AI Agent

A production-ready AI agent framework built from scratch. Learn how agents work by building one yourself.

Status: ✅ Ready for deployment (v0.1)


Features

Local LLM via Ollama (qwen2.5-coder, llama3.3, mistral, etc.)
Tool system with safe file operations (read, write, list)
Context management with sliding window and token budgets
Safety controls with directory sandboxing and approval gates
Extensible - easy to add custom tools
Well-tested - comprehensive test suite included
Cross-platform - Linux, macOS, Windows support
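The directory sandboxing mentioned above boils down to resolving every user-supplied path and rejecting anything that escapes the safe directory. A minimal sketch (the helper name `resolve_in_sandbox` is illustrative, not the project's actual API):

```python
from pathlib import Path

def resolve_in_sandbox(safe_dir: str, user_path: str) -> Path:
    """Resolve user_path and reject anything that escapes safe_dir."""
    root = Path(safe_dir).resolve()
    target = (root / user_path).resolve()
    # resolve() collapses ".." and symlinks, so this check catches traversal attempts
    if not target.is_relative_to(root):
        raise PermissionError(f"{user_path!r} escapes the sandbox {root}")
    return target
```

Resolving both paths before comparing is the important part: a naive string prefix check can be fooled by `..` segments or symlinks.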


Quick Start (3 minutes)

1. Install Ollama

Download from https://ollama.com/download

2. Pull a model

ollama pull qwen2.5-coder:7b

3. Setup and run

Linux/macOS:

./setup.sh
./run.sh

Windows:

setup.bat
run.bat

That's it! You now have a working AI agent.

Full guide: QUICKSTART.md
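To confirm Ollama is serving the model before starting the agent, you can hit its local REST API directly. A sketch using only the standard library, assuming Ollama's default endpoint on port 11434 (the `ask_ollama` helper is illustrative, not part of this project):

```python
import json
from urllib.request import Request, urlopen

def ask_ollama(prompt: str, model: str = "qwen2.5-coder:7b",
               host: str = "http://localhost:11434") -> str:
    """Send one non-streaming generate request to a local Ollama server."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return a single JSON object instead of a stream
    }).encode()
    req = Request(f"{host}/api/generate", data=payload,
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]
```

If this raises a connection error, Ollama isn't running; if it returns a 404-style model error, re-run `ollama pull`.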


What Can It Do?

The agent has built-in tools for file operations:

> List the files in the current directory

> Read the file called README.md

> Create a Python script that prints "Hello, world!"

The agent will:

  1. Understand your request
  2. Choose the right tool
  3. Ask for approval (safety first!)
  4. Execute and report results

Add your own tools - see src/tools/ for examples.
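The four steps above can be sketched as a single turn of the agent loop. Names here are illustrative, not the project's exact interfaces (see src/agent.py for the real ones):

```python
def agent_turn(choose_tool, tools, request, approve):
    """One turn of the agent loop: plan -> gate -> execute -> report."""
    name, args = choose_tool(request)    # 1-2: the model picks a tool and arguments
    if not approve(name, args):          # 3: approval gate before any side effect
        return f"Denied: {name}({args})"
    result = tools[name](**args)         # 4: execute the chosen tool
    return f"{name} -> {result}"
```

The approval callback sits between planning and execution, so nothing touches the filesystem until the user (or an `auto_approve` policy) says yes.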


Project Structure

selfhosted-agent/
├── src/
│   ├── main.py           # Entry point
│   ├── agent.py          # Agent loop & orchestration
│   ├── llm.py            # Ollama LLM interface
│   ├── config.py         # Configuration management
│   ├── context.py        # Conversation context & memory
│   └── tools/
│       ├── base.py       # Tool base classes
│       └── file_ops.py   # File operation tools
├── tests/                # Comprehensive test suite
│   ├── test_tools.py
│   └── test_context.py
├── docs/                 # Full documentation
├── config.json           # Configuration file
├── setup.sh / setup.bat  # Setup scripts
├── run.sh / run.bat      # Run scripts
├── QUICKSTART.md         # Quick start guide
├── DEPLOYMENT.md         # Deployment guide
└── TROUBLESHOOTING.md    # Common issues & solutions

Documentation

Getting Started: QUICKSTART.md, DEPLOYMENT.md, TROUBLESHOOTING.md

Deep Dives: see the docs/ folder (e.g. docs/tool-system.md, docs/roadmap.md)

Configuration

Edit config.json to customize behavior:

{
  "model": "qwen2.5-coder:7b",
  "temperature": 0.7,
  "max_turns": 10,
  "safe_dir": "/home/user/workspace",
  "auto_approve": false
}

Key settings:

  • model - Which Ollama model to use
  • safe_dir - Restrict file access to this directory
  • auto_approve - Auto-approve safe operations (dangerous!)

Full config reference: DEPLOYMENT.md
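Loading a config like the one above with defaults for missing keys can be done with a small dataclass. A sketch under the assumption that config.json contains only the keys shown (the `AgentConfig` name is illustrative; see src/config.py for the real implementation):

```python
import json
from dataclasses import dataclass

@dataclass
class AgentConfig:
    model: str = "qwen2.5-coder:7b"
    temperature: float = 0.7
    max_turns: int = 10
    safe_dir: str = "."
    auto_approve: bool = False  # keep False unless you fully trust the model

def load_config(path: str = "config.json") -> AgentConfig:
    """Load config.json; any key absent from the file keeps its default."""
    with open(path) as f:
        return AgentConfig(**json.load(f))
```

Defaulting every field means a partial config file still yields a fully populated object, and an unknown key fails loudly with a TypeError instead of being silently ignored.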


Requirements

  • Python 3.11+
  • Ollama (https://ollama.com)
  • 8+ GB RAM (16 GB recommended)
  • Linux, macOS, or Windows

Development

Run tests

source venv/bin/activate   # Windows: venv\Scripts\activate
python -m pytest tests/ -v

Add a custom tool

  1. Create a new class in src/tools/ inheriting from Tool
  2. Implement name, description, parameters, execute()
  3. Register it in Agent._register_tools()

Example in docs/tool-system.md.
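The three steps above might look like the following. This sketch assumes a `Tool` base class exposing `name`, `description`, `parameters`, and `execute()` as listed; the exact signatures in src/tools/base.py may differ, and `WordCountTool` is a hypothetical example:

```python
from abc import ABC, abstractmethod

class Tool(ABC):
    """Minimal stand-in for the base class in src/tools/base.py."""
    name: str
    description: str
    parameters: dict

    @abstractmethod
    def execute(self, **kwargs) -> str: ...

class WordCountTool(Tool):
    """Hypothetical custom tool: count the words in a text file."""
    name = "word_count"
    description = "Count the words in a file inside the sandbox."
    parameters = {"path": {"type": "string", "description": "File to count"}}

    def execute(self, path: str) -> str:
        with open(path) as f:
            return f"{len(f.read().split())} words"
```

After defining the class, register an instance in Agent._register_tools() so the model can see it in its tool list.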


Why Build This?

Learning Goals:

  • ✅ Understand how AI agents work internally
  • ✅ Learn LLM API integration patterns
  • ✅ Master tool calling and function execution
  • ✅ Apply context and memory management strategies
  • ✅ Build safety and sandboxing controls
  • ✅ Create reusable infrastructure for future projects

From the docs:

Building your own agent teaches you what production frameworks (OpenClaw, LangChain, etc.) do under the hood. You'll use those tools more effectively by understanding the internals.


Roadmap

Phase 1: ✅ Complete (Basic inference + file tools)
Phase 2: ✅ Complete (Full tool calling loop)
Phase 3: ✅ Complete (Context management)
Phase 4: 🔜 Next (Shell commands, Python exec)
Phase 5: Future (Multi-agent, web UI, integrations)

See docs/roadmap.md for details.


License

MIT License - see LICENSE file.


Questions?

  • Documentation: See docs/ folder
  • Issues: Open a GitHub issue
  • Examples: Check examples/ folder

Happy building! 🚀

Built as a learning project to demystify AI agent architecture. Now production-ready for your own experiments.
