
AI Chat Desktop Application

License: MIT | Python 3.8+ | Status: Alpha

⚠️ ALPHA VERSION: This application is currently in alpha. While functional, it may contain bugs and needs further testing. Use at your own discretion, and please report any issues!

A cross-platform desktop AI chat application powered by Hugging Face Transformers for completely private, local AI inference. Chat with state-of-the-art LLMs, upload documents, and manage conversations, all without sending data to external servers.

✨ Features

Current Features (v1.6-alpha)

🤖 AI Integration

  • Local Inference: Run AI models completely offline using Hugging Face Transformers
  • No API Keys Required: Everything runs locally on your machine
  • GPU & CPU Support: Automatic device detection with manual toggle
  • Streaming Responses: Real-time token-by-token response generation
  • 1000+ Models Available: Browse and download from Hugging Face's model hub
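The streaming behavior above follows the usual producer/consumer pattern: generation runs on a background thread and pushes tokens onto a queue that the UI thread drains as they arrive. This is a minimal stdlib sketch of that pattern; in the real app, transformers' `TextIteratorStreamer` plays this role, and `fake_generate` below is a stand-in for `model.generate`.

```python
import threading
import queue

class TokenStreamer:
    """Minimal stand-in for transformers' TextIteratorStreamer:
    the generation thread puts tokens on a queue, and the UI thread
    iterates over them as they arrive."""
    _END = object()

    def __init__(self):
        self._queue = queue.Queue()

    def put(self, token):   # called from the generation thread
        self._queue.put(token)

    def end(self):          # called when generation finishes
        self._queue.put(self._END)

    def __iter__(self):
        while True:
            token = self._queue.get()
            if token is self._END:
                return
            yield token

def fake_generate(prompt, streamer):
    """Stand-in for model.generate(..., streamer=streamer)."""
    for token in ["Hello", ",", " world", "!"]:
        streamer.put(token)
    streamer.end()

streamer = TokenStreamer()
thread = threading.Thread(target=fake_generate, args=("Hi", streamer))
thread.start()
reply = "".join(streamer)   # consume tokens as they stream in
thread.join()
print(reply)  # -> Hello, world!
```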

📥 Model Management

  • Dynamic Model Browser: Browse 1000+ models from Hugging Face
  • Task-Based Filtering: Filter models by task type (text-generation, translation, summarization, etc.)
  • One-Click Downloads: Download models directly from the app
  • Model Details: View size, parameters, downloads, likes, and tags before downloading
  • Delete Models: Free up disk space by removing unwanted models
  • Status Indicators: Green checkmarks show which models are already downloaded
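For context on how task-based filtering can work: the Hugging Face Hub exposes a REST endpoint at `https://huggingface.co/api/models` that accepts `pipeline_tag` and `search` parameters. The sketch below just builds such a query URL; the app may instead use the official `huggingface_hub` client (e.g. `list_models`), and the parameter defaults here are illustrative assumptions.

```python
from urllib.parse import urlencode

HUB_API = "https://huggingface.co/api/models"

def build_model_query(task=None, search=None, limit=25):
    """Build a Hugging Face Hub API query URL for browsing models.
    Sorting by downloads and the limit of 25 are illustrative choices."""
    params = {"limit": limit, "sort": "downloads"}
    if task:
        params["pipeline_tag"] = task   # e.g. "text-generation"
    if search:
        params["search"] = search
    return f"{HUB_API}?{urlencode(params)}"

url = build_model_query(task="text-generation", search="qwen")
print(url)
```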

💬 Chat Interface

  • Multiple Conversations: Create and manage unlimited conversations
  • Conversation History: All chats saved to local SQLite database
  • Rename Conversations: Give your chats meaningful names
  • Message Persistence: Never lose your conversation history
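A minimal sketch of what local SQLite persistence for conversations and messages can look like. The actual schema lives in `src/database/db_manager.py` and may differ; the table and column names here are assumptions for illustration.

```python
import sqlite3

def init_db(path=":memory:"):
    """Create a minimal conversation store (illustrative schema)."""
    con = sqlite3.connect(path)
    con.executescript("""
        CREATE TABLE IF NOT EXISTS conversations (
            id    INTEGER PRIMARY KEY,
            title TEXT NOT NULL DEFAULT 'New Conversation'
        );
        CREATE TABLE IF NOT EXISTS messages (
            id              INTEGER PRIMARY KEY,
            conversation_id INTEGER NOT NULL REFERENCES conversations(id),
            role            TEXT NOT NULL,   -- 'user' or 'assistant'
            content         TEXT NOT NULL,
            created_at      TEXT DEFAULT CURRENT_TIMESTAMP
        );
    """)
    return con

con = init_db()
cur = con.execute("INSERT INTO conversations (title) VALUES (?)", ("Demo",))
conv_id = cur.lastrowid
con.execute(
    "INSERT INTO messages (conversation_id, role, content) VALUES (?, ?, ?)",
    (conv_id, "user", "Hello!"),
)
rows = con.execute(
    "SELECT role, content FROM messages WHERE conversation_id = ?",
    (conv_id,),
).fetchall()
print(rows)  # -> [('user', 'Hello!')]
```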

📄 Document Support

  • File Upload: Upload PDF, Word (docx), and TXT files
  • Document Parsing: Automatic extraction of text from documents

βš™οΈ Device Management

  • Device Status Display: See if you're using GPU or CPU
  • Visual Indicators: 🟢 Green (GPU active) | 🟡 Yellow (CPU active) | 🔴 Red (GPU unavailable)
  • Easy Toggle: Switch between CPU and GPU with one click
  • Automatic Detection: Detects CUDA-capable GPUs automatically
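Automatic detection typically reduces to a CUDA availability check with a CPU fallback, as in this sketch. It also falls back to CPU when PyTorch itself is missing; the app's real logic lives in `src/models/huggingface_service.py` and may differ.

```python
def detect_device():
    """Pick the inference device: CUDA if available, otherwise CPU."""
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass  # no PyTorch installed: CPU is the only option
    return "cpu"

device = detect_device()
print(device)
```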

📋 What's Coming Next

  • CI/CD Pipeline: Automated builds and packaging
  • Standalone Executables: PyInstaller packaging for Windows/Linux/macOS
  • GitHub Releases: Automated release generation
  • Code Signing: Signed binaries for macOS and Windows
  • Model Quantization Support: Run larger models with less RAM (GGUF, 8-bit, 4-bit)
  • Themes: Dark mode and custom themes
  • Voice Input/Output: Speech-to-text and text-to-speech
  • Multi-language UI: Internationalization support
  • Image Generation: Support for text-to-image models (Stable Diffusion)
  • More Tests: Comprehensive test coverage
  • Performance Monitoring: Built-in profiling and optimization tools

πŸ—οΈ Technology Stack

  • GUI Framework: PyQt6
  • AI Framework: Hugging Face Transformers + PyTorch
  • Model Source: Hugging Face Hub
  • Document Parsing: PyMuPDF (PDF), python-docx (Word)
  • Database: SQLite (local conversation storage)
  • Device Support: CUDA (GPU) or CPU

📦 Installation

Prerequisites

  1. Python 3.8+ (Python 3.13+ recommended)
  2. Git (for cloning the repository)
  3. (Optional) CUDA: For GPU acceleration

Quick Start

  1. Clone the repository:

    git clone https://github.com/yourusername/ai-chat-app.git
    cd ai-chat-app
  2. Create virtual environment:

    python -m venv venv
    source venv/bin/activate  # On Windows: venv\Scripts\activate
  3. Install dependencies:

    pip install -r requirements.txt
  4. Run the application:

    ./run.sh  # On Windows: python src/main.py
  5. Download your first model:

    • Click "📥 Browse Models" in the app
    • Search for "Qwen/Qwen2.5-3B-Instruct" (small, fast, good quality)
    • Click "Download Model"
    • Wait for download to complete
    • Start chatting!

🚀 Usage

Starting a Conversation

  1. Click "+ New Conversation" in the left sidebar
  2. Select a downloaded model from the dropdown
  3. Type your message and press Send
  4. Watch the AI respond in real-time!
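Under the hood, each turn requires flattening the stored conversation history into a prompt for the model. With transformers, `tokenizer.apply_chat_template` is the usual per-model way to do this; the generic fallback below just labels turns, and is an illustrative sketch rather than the app's actual formatting.

```python
def render_prompt(history):
    """Flatten conversation history into a plain prompt string (sketch)."""
    lines = [f"{m['role'].capitalize()}: {m['content']}" for m in history]
    lines.append("Assistant:")   # cue the model to answer next
    return "\n".join(lines)

history = [
    {"role": "user", "content": "What is PyQt6?"},
    {"role": "assistant", "content": "A Python binding for Qt 6."},
    {"role": "user", "content": "Is it cross-platform?"},
]
prompt = render_prompt(history)
print(prompt)
```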

Uploading Documents

  1. Click "Upload File" button
  2. Select a PDF, Word, or TXT file
  3. The file appears as a badge above the input
  4. Type your question about the document
  5. The AI will use the document content in its response
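For step 5, one common way to make the model "use" the document is to prepend its extracted text to the question. This sketch shows that idea; the template and the truncation limit are illustrative assumptions, not the app's actual values.

```python
def build_prompt_with_document(question, document_text, max_chars=4000):
    """Prepend an uploaded document's text to the user's question (sketch).
    max_chars is an assumed limit to keep the prompt within context size."""
    excerpt = document_text[:max_chars]
    return (
        "Use the following document to answer the question.\n\n"
        f"--- DOCUMENT ---\n{excerpt}\n--- END DOCUMENT ---\n\n"
        f"Question: {question}"
    )

prompt = build_prompt_with_document("What is the total?", "Invoice total: $42.")
print(prompt)
```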

Managing Models

  1. Click "📥 Browse Models"
  2. Use the Task dropdown to filter models by type
  3. Search for specific models
  4. Click on a model to see details (size, parameters, downloads)
  5. Click "Download Model" to add it to your collection
  6. Downloaded models show a green checkmark (✓)

Switching Devices

  1. Look at the top-right corner for device status
  2. 🟢 = GPU active, 🟡 = CPU active
  3. Click "Switch to GPU/CPU" to change devices
  4. Confirm the switch in the dialog
  5. The model will be reloaded on the new device
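The toggle-then-reload flow in steps 3–5 can be sketched as a tiny manager that flips the device and invokes a reload callback (with PyTorch, the reload ultimately involves something like `model.to(new_device)`). This is illustrative only; the app's real device manager is not shown here.

```python
class DeviceManager:
    """Minimal sketch of the GPU/CPU toggle: switching devices triggers
    a model-reload callback, matching the behavior described above."""
    def __init__(self, reload_model, has_gpu=False):
        self.device = "cuda" if has_gpu else "cpu"
        self._reload_model = reload_model

    def toggle(self):
        self.device = "cpu" if self.device == "cuda" else "cuda"
        self._reload_model(self.device)   # model must be reloaded on switch
        return self.device

events = []
mgr = DeviceManager(reload_model=events.append, has_gpu=True)
mgr.toggle()
print(mgr.device, events)  # -> cpu ['cpu']
```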

📂 Project Structure

ai-chat-app/
├── src/                          # Application source code
│   ├── main.py                   # Application entry point
│   ├── gui/
│   │   ├── main_window.py        # Main application window
│   │   └── model_browser.py      # Model browser dialog
│   ├── database/
│   │   └── db_manager.py         # SQLite conversation storage
│   ├── models/
│   │   └── huggingface_service.py # HF Transformers integration
│   └── parsers/
│       └── document_parser.py    # PDF/Word/TXT parsers
├── tests/                        # Test files
│   ├── test_setup.py             # Dependency verification
│   └── test_components.py        # Component tests
├── scripts/                      # Build & packaging
│   ├── build.sh                  # Linux/macOS build script
│   ├── build.bat                 # Windows build script
│   └── ai-chat.spec              # PyInstaller config
├── docs/                         # Documentation
│   └── CHANGELOG.md              # Version history
├── requirements.txt              # Python dependencies
├── requirements-dev.txt          # Development dependencies
├── run.sh                        # Linux/macOS launcher
└── README.md                     # This file

🔒 Privacy & Security

  • No API Keys: All inference happens locally
  • No Data Collection: Your conversations never leave your machine
  • No Internet Required: After downloading models, works completely offline
  • Local Database: All data stored in local SQLite database
  • Open Source: Full transparency: review the code yourself

πŸ› Known Issues (Alpha)

  • Model downloads don't show detailed progress (only status messages)
  • Very large models (70B+) may cause memory issues on systems with limited RAM
  • First inference is slow due to model loading
  • No model unloading between chats (keeps last model in memory)
  • Device switching requires model reload

🤝 Contributing

This is an alpha project and contributions are welcome! Please:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

πŸ“ Development Roadmap

Phase 1: Stability (Current - Alpha)

  • Core functionality complete
  • Bug fixes and stability improvements
  • More comprehensive testing
  • User feedback collection

Phase 2: Packaging

  • PyInstaller integration
  • Automated build pipeline
  • GitHub Actions CI/CD
  • Signed executables

Phase 3: Features

  • Advanced model management (quantization, pruning)
  • Export/import conversations
  • Themes and customization
  • Performance optimizations

Phase 4: Expansion

  • Multi-modal support (images, audio)
  • Plugin architecture
  • Cloud sync (optional)

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • Hugging Face for the amazing model hub and transformers library
  • PyQt for the GUI framework
  • PyTorch for the deep learning framework
  • All the open-source model creators who make their work freely available

📞 Support

Found a bug or unexpected behavior? Please open an issue on the repository.

⚠️ Disclaimer

This is alpha software. While it works well for some use cases, it may contain bugs or unexpected behavior. The AI models downloaded are provided by third parties on Hugging Face. Please review model licenses and ensure you comply with them. AI-generated responses may be inaccurate, biased, or inappropriate. Use at your own discretion.


Made with ❤️ by Moez Rjiba

Star ⭐ this repo if you find it useful!
