
StewbotLLMAgent


A project to revive the LLM responses in Stewbot by running self-hosted LLM agents on multiple servers with GPUs.





Client Node Setup

For Linux, Ollama can be installed with:

curl -fsSL https://ollama.com/install.sh | sh

This also installs an ollama systemd service that, while not incompatible with this script, is unnecessary. To disable it:

sudo systemctl disable ollama
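If the service is already running, you can also stop it and launch the server manually when needed. A minimal sketch, assuming the agent should be reachable from other machines (OLLAMA_HOST and port 11434 are standard Ollama settings; binding to 0.0.0.0 is an assumption about this setup):

sudo systemctl stop ollama
OLLAMA_HOST=0.0.0.0:11434 ollama serve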

For Windows, Ollama can be installed with:

winget install Ollama.Ollama
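On either platform, the install can be sanity-checked by pulling a model and running a one-off prompt (the model tag below is just an example, not necessarily what Stewbot uses):

ollama pull llama3.2
ollama run llama3.2 "Say hello"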




TODOs

Have all agents tunnel data through a Stewbot VPN so that the server can reach all non-local GPU nodes (see the WireGuard sketch at the end of this list).

Add a Windows alternative to serve.sh, possibly by rewriting more of it in Node.js; a rough sketch follows below.
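As a starting point, here is a minimal, hypothetical Node.js stand-in that just starts ollama serve and restarts it if it exits; serve.sh presumably does more than this, so treat it as a sketch rather than a drop-in replacement:

// serve.js - hypothetical cross-platform stand-in for serve.sh
const { spawn } = require("child_process");

function startOllama() {
  // spawn resolves ollama.exe on Windows and ollama on Linux via PATH
  const child = spawn("ollama", ["serve"], {
    stdio: "inherit",
    // 0.0.0.0:11434 is an assumed bind address for remote access
    env: { ...process.env, OLLAMA_HOST: "0.0.0.0:11434" },
  });

  // Restart the server if it exits unexpectedly
  child.on("exit", (code) => {
    console.error(`ollama serve exited with code ${code}; restarting in 5s`);
    setTimeout(startOllama, 5000);
  });
}

startOllama();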

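For the VPN TODO, one possible shape is a WireGuard network with the Stewbot server as the hub. WireGuard itself is only an assumption about how the tunnel would be built, and every value below (keys, addresses, endpoint) is a placeholder:

# /etc/wireguard/wg0.conf on a GPU node (placeholder values)
[Interface]
Address = 10.8.0.2/24
PrivateKey = <node-private-key>

[Peer]
# The Stewbot server acting as the VPN hub
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.8.0.0/24
PersistentKeepalive = 25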

All issues can be blamed on @Reginald-Gillespie
