
Local AI Assistant - Changli UI



🚧 Status: This project will no longer be developed, because every time Ollama releases an update, new bugs appear.
In the future, I will rebuild it as a standalone application, without relying on any other software. 🚧


📖 Description

Local AI Assistant is an offline, local AI chat application featuring:

  • Backend built with Flask → API communication and chat storage
  • Frontend (UI) built with PySide6 (Qt) → interactive chat interface
  • Ollama integration → run local AI models (default: gemma3:4b, now with model selection)
  • Multi-language system with 9 languages and layered fallback

This project is designed to run AI fully locally, with customizable identity, persona, profile, memory, and chat history.
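The layered language fallback mentioned above can be sketched as a simple catalog merge: later catalogs in the chain override earlier ones, so a UI string missing from the target language falls back to Indonesian, then US English. The function and chain below are an illustrative assumption, not the project's actual implementation (the real logic lives in `backend/i18n.py`):

```python
def merged_catalog(catalogs: dict, target: str) -> dict:
    """Merge locale catalogs so missing keys fall back en_us -> id -> target.

    `catalogs` maps language codes (e.g. "en_us", "id", "ja") to dicts of
    UI strings, as loaded from backend/i18n/*.json.
    """
    merged = {}
    for lang in ("en_us", "id", target):  # later entries override earlier ones
        merged.update(catalogs.get(lang, {}))
    return merged
```

With this shape, looking up a key in the merged dict always yields the most specific available translation.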


📂 Project Structure

```
local-ai-assistant/
├─ app.py
│
├─ backend/
│  ├─ i18n/
│  │   ├─ ar.json
│  │   ├─ en_gb.json
│  │   ├─ en_us.json
│  │   ├─ id.json
│  │   ├─ es.json
│  │   ├─ ja.json
│  │   ├─ ko.json
│  │   ├─ pt.json
│  │   └─ zh.json
│  ├─ locales/
│  │   ├─ ar.json
│  │   ├─ en_gb.json
│  │   ├─ en_us.json
│  │   ├─ id.json
│  │   ├─ es.json
│  │   ├─ ja.json
│  │   ├─ ko.json
│  │   ├─ pt.json
│  │   └─ zh.json
│  ├─ config.py
│  ├─ core.py
│  ├─ i18n.py
│  ├─ persona.py
│  ├─ storage.py
│  ├─ ollama_client.py
│  └─ routes.py
│
├─ ui/
│  ├─ main.py
│  ├─ chat_window.py
│  ├─ client.py
│  ├─ worker.py
│  └─ widgets/
│     ├─ settings.py
│     ├─ history.py
│     ├─ bubbles.py
│     └─ identity.py
│
├─ data/
│  ├─ chat_history.json
│  ├─ chat_sessions.json
│  ├─ ui_chat_config.json
│  └─ config.json
├─ config.json
├─ requirement.txt
└─ README.md
```

๐Ÿ— Architecture

Architecture Diagram


⚡ Features

  • 🖥 Modern UI using PySide6 (Qt)
  • 📝 Chat History → rename, delete, or continue past sessions
  • 🎭 Custom Persona → change AI name, user name, and prompt
  • 👤 Profile System → add personal info (e.g. “What do you do”, “Anything else the AI should know”)
  • 🧠 Chat Memory → AI remembers up to 32 previous messages
  • 🌐 Multi-language Support → 9 languages with native names in the UI and layered fallback (en_us → id → target): English (US, UK), Bahasa Indonesia, 日本語, 한국어, 中文（简体）, Português, Español, العربية
  • 🎨 Custom Background → solid color or custom image
  • ⚙️ Flask Backend with /chat, /chats, /config endpoints
  • 🤖 Ollama Integration → run local AI models; default gemma3:4b, now with model selection
  • 📂 All data & configs stored locally under data/
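The chat-memory feature amounts to bounding the context sent to the model. A minimal sketch of that trimming step (the names here are assumptions; the actual logic lives in the backend):

```python
MEMORY_LIMIT = 32  # matches the "up to 32 previous messages" feature above

def trim_memory(messages: list, limit: int = MEMORY_LIMIT) -> list:
    """Return only the most recent `limit` messages to send as model context."""
    return messages[-limit:] if limit > 0 else []
```

Trimming like this keeps prompt size (and latency) roughly constant no matter how long a session runs.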

๐Ÿ› ๏ธ Installation & Setup

1. Clone Repository

```bash
git clone https://github.com/rillToMe/local-ai-assistant.git
cd local-ai-assistant
```

2. Create Virtual Environment (recommended)

```bash
python -m venv venv
source venv/bin/activate     # Linux/macOS
venv\Scripts\activate        # Windows
```

3. Install Dependencies

```bash
pip install -r requirements.txt
```

Main dependencies:

  • flask
  • flask-cors
  • requests
  • PySide6

โš ๏ธ Note: Make sure you have installed Ollama and the required models (gemma3:4b by default). Other models can be selected if installed.

4. Run the App

```bash
python app.py
```

  • Flask backend will start on http://127.0.0.1:5000
  • PySide6 UI will automatically open
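Once both processes are up, the UI talks to the backend over HTTP. A minimal client sketch against the /chat endpoint, using only the standard library (the payload field names are illustrative assumptions; check backend/routes.py and ui/client.py for the real schema):

```python
import json
import urllib.request

BASE = "http://127.0.0.1:5000"  # default Flask address

def build_chat_payload(session_id: str, text: str) -> dict:
    # Field names are illustrative assumptions, not the project's schema.
    return {"session": session_id, "message": text}

def send_chat(session_id: str, text: str) -> dict:
    """POST a message to the backend's /chat endpoint and return its JSON reply."""
    req = urllib.request.Request(
        f"{BASE}/chat",
        data=json.dumps(build_chat_payload(session_id, text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```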

🎮 Usage

  • Settings → change background, update Identity & Prompt
  • History → view, rename, or continue previous sessions
  • Profile → add info about yourself (used by the AI in responses)
  • Language → choose UI language from 9 available
  • Models → select available Ollama models installed on your system

🧩 Model Recommendations

Sources: Gemma docs, Qwen (Hugging Face), DeepSeek (Hugging Face), GPT-OSS, Ollama model library.

Based on official benchmarks and community tests:

| Model | Min RAM (CPU-only) | Approx. GPU VRAM (BF16 / 4-bit) | Notes |
|---|---|---|---|
| gemma3:1b | ≥ 2 GB | ~1.5 GB / ~0.9 GB | Lightweight; runs on old notebooks, but slow (~7–10 tokens/sec) (getdeploying.com, windowscentral.com) |
| qwen3:1.8b | ≥ 2 GB | ~2 GB / ~1 GB (est.) | Slightly better reasoning; light enough for laptops |
| gemma3:4b | ≥ 4 GB | ~6.4 GB / ~3.4 GB | Recommended default; good speed & quality (getdeploying.com, ai.google.dev) |
| qwen3:4b | ≥ 4 GB | ~6 GB / ~3 GB (est.) | Balanced; strong chat & reasoning |
| gemma3:12b | ≥ 9 GB | ~20 GB / ~8.7 GB | Requires a strong GPU or high RAM (getdeploying.com, ai.google.dev) |
| qwen3:8b | ≥ 9 GB | ~18 GB / ~8 GB (est.) | Good quality & context |
| deepseek-r1:8b | ≥ 9 GB | ~18 GB / ~8 GB (est.) | Specialized reasoning |
| gemma3:27b | ≥ 18 GB | ~46 GB / ~21 GB | Heavy; best on high-end GPUs or servers (getdeploying.com, ai.google.dev) |
| gpt-oss:20b | ≥ 32 GB | ~40 GB / ~20 GB (est.) | Large; better long context |
| gpt-oss:120b | ≥ 128 GB RAM / multi-GPU | ~120 GB+ / 60 GB+ (est.) | Experimental; extremely heavy compute requirement |
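The VRAM columns follow roughly from weight precision: memory for the weights alone is about parameters × bits-per-weight ÷ 8. A quick estimator, as a rule of thumb only (the table's published figures differ somewhat because they also reflect quantization overhead and per-model details):

```python
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight-only memory in GB: params * bits / 8.

    Ignores KV cache, activations, and runtime overhead, so treat the
    result as a lower bound on real VRAM usage.
    """
    return n_params * bits_per_weight / 8 / 1e9
```

For example, a 4B-parameter model at BF16 (16 bits) needs about 8 GB for weights alone, and about 2 GB at 4-bit quantization.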

📌 Roadmap / Todo

  • Fix UI crash bugs
  • Add multi-tab chat support
  • Add chat export/import
  • Optimize performance for long requests
  • Expand model integration beyond Ollama

โš ๏ธ Notes

  • This project is still in early development; expect frequent bugs and issues
  • UI/UX is minimal for now, focused on core functionality
  • Default persona is simple, but can be extended with custom prompts
