High-Performance • Real-Time Sync • 25+ AI Models • Sub-100ms Response

Live Demo • Documentation • API Reference • Contributing
Plan Mode & Advanced Features - CappyChat v4.1.0 introduces Plan Mode for creating interactive diagrams and visualizations, enhanced observability with Better Stack logging, and persistent guest rate limiting with Upstash Redis.
Highlights:
- Plan Mode with AI Artifacts - Create interactive diagrams, flowcharts, and visualizations using Mermaid
- URL Retrieval Tool - Comprehensive web content analysis with live crawling and AI summaries
- Upstash Redis Rate Limiting - Persistent guest rate limiting across serverless functions
- Better Stack Logging - Enhanced observability with structured logging across all API endpoints
- PDF Thumbnail Preview - Better file visualization and management
- Enhanced Image Generation - New models for improved visual content creation

View Full Changelog • Upgrade Guide
CappyChat is a cutting-edge AI chat platform engineered for performance, scalability, and seamless user experience. Built with a local-first architecture, it delivers instant responses while maintaining data consistency across devices through intelligent cloud synchronization.
AI Excellence • Performance • Advanced Features • Developer Experience
| Ayush Sharma | Vranda Garg |
|---|---|
| Lead Developer & Architect | UI/UX Designer & Frontend Developer |
| Full-stack development and system architecture | User interface design and user experience |
| AI integration and performance optimization | Frontend development and responsive design |
| Database design and real-time synchronization | Design system and component architecture |
Multi-Model AI Support
| Feature | Description | Models |
|---|---|---|
| Premium Models | Latest AI models with advanced capabilities | GPT-5, Claude 4, Grok 4 |
| Fast Models | Optimized for speed and efficiency | Gemini 2.5 Flash Lite, OpenAI 5 Mini, Grok 4 Fast |
| Specialized Models | Task-specific AI models | DeepSeek R1, Qwen3 Max, Claude Sonnet 3.7 |
| Image Models | Advanced image generation | Gemini 2.5 Flash Image Preview (nano banana) |
Key Benefits:
- Intelligent Routing - Automatic model selection based on query complexity (see the sketch after this list)
- BYOK Support - Enhanced privacy with your own API keys
- Flexible Pricing - Tiered access with free, premium, and super-premium models
- Model Switching - Seamless switching between models mid-conversation
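To make the intelligent routing idea concrete, here is a minimal sketch of complexity-based model selection; the `selectModel` helper, its thresholds, and the keyword heuristic are illustrative assumptions, not CappyChat's actual routing logic:

```typescript
// Hypothetical illustration of complexity-based model routing.
type ModelTier = "fast" | "premium" | "superPremium";

interface ModelChoice {
  tier: ModelTier;
  model: string;
}

// Very rough heuristic: longer prompts or explicit reasoning keywords
// get routed to more capable (and more expensive) models.
function selectModel(prompt: string): ModelChoice {
  const wordCount = prompt.trim().split(/\s+/).length;
  const needsReasoning = /\b(prove|derive|analyze|step by step)\b/i.test(prompt);

  if (needsReasoning) return { tier: "superPremium", model: "DeepSeek R1" };
  if (wordCount > 200) return { tier: "premium", model: "GPT-5" };
  return { tier: "fast", model: "Gemini 2.5 Flash Lite" };
}

// Example: selectModel("Summarize this article") picks the fast tier.
```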
Performance & Architecture
```mermaid
graph LR
    A[User Input] --> B[Local Storage]
    B --> C[Instant UI Update]
    B --> D[Background Sync]
    D --> E[Cloud Database]
    E --> F[Real-time Broadcast]
    F --> G[Other Devices]
```
Performance Metrics:
- Sub-100ms local operations
- 106kB optimized bundle size
- 99.9% sync reliability
- Mobile-first responsive design

Architecture Highlights:
- Local-First Design - IndexedDB for instant responses (see the sketch after this list)
- Hybrid Database - Appwrite cloud synchronization
- Optimistic Updates - Immediate UI feedback
- Smart Caching - Intelligent cache management
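The local-first design can be sketched with Dexie.js, which the data layer lists as its IndexedDB wrapper; the database name, table schema, and `queueBackgroundSync` helper below are hypothetical, not CappyChat's actual code:

```typescript
import Dexie, { Table } from "dexie";

interface LocalMessage {
  id: string;
  threadId: string;
  content: string;
  createdAt: Date;
  synced: boolean;
}

class LocalDB extends Dexie {
  messages!: Table<LocalMessage, string>;
  constructor() {
    super("cappychat-local"); // hypothetical database name
    this.version(1).stores({ messages: "id, threadId, createdAt" });
  }
}

const db = new LocalDB();

// Write locally first (instant UI), then queue a background cloud sync.
async function addMessageLocalFirst(msg: LocalMessage) {
  await db.messages.put({ ...msg, synced: false }); // IndexedDB write, typically sub-100ms
  queueBackgroundSync(msg);
}

function queueBackgroundSync(msg: LocalMessage) {
  // Placeholder for the real sync engine that pushes the record to Appwrite later.
}
```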
Advanced Capabilities
| Capability | Technology | Features |
|---|---|---|
| Plan Mode | Mermaid + AI Artifacts | Interactive diagrams, Flowcharts, Visualizations |
| Image Generation | Gemini 2.5 Flash Image Preview | Text-to-image, Context-aware generation |
| Voice Input | OpenAI Whisper | Multi-language, Real-time transcription |
| Web Search | Parallel AI + Tavily + Exa | Intelligent tool calling, Rich citations, Website retrieval |
| File Upload | Cloudinary | Images, PDFs with thumbnails, Documents with AI analysis |
| Collaboration | Real-time sync | Team workspaces, Member management |
| Observability | Better Stack + Upstash Redis | Structured logging, Persistent rate limiting |
Advanced Features:
- Plan Mode - Create diagrams, flowcharts, and MVPs with AI assistance
- Conversation Styles - Balanced, creative, precise modes
- Global Memory - Persistent context across conversations
- Suggested Questions - AI-powered conversation suggestions
- Analytics - Usage tracking and insights with Better Stack
- Share & Export - Public sharing and data export
- Rate Limiting - Persistent guest rate limiting with Upstash Redis (see the sketch after this list)
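A minimal sketch of persistent guest rate limiting with the @upstash/ratelimit package; the 10-requests-per-day quota, the key prefix, and the identifier scheme are assumptions for illustration:

```typescript
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

// Redis.fromEnv() reads UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN.
const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, "1 d"), // assumed guest quota
  prefix: "guest-ratelimit",
});

// Call from an API route before doing any expensive AI work.
export async function checkGuestLimit(guestId: string) {
  const { success, remaining } = await ratelimit.limit(guestId);
  if (!success) {
    throw new Error("Guest rate limit exceeded");
  }
  return remaining;
}
```

Because the counter lives in Upstash Redis rather than process memory, the limit survives across serverless function invocations.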
Developer Experience
Modern Technology Stack:

```typescript
const techStack = {
  frontend: ["Next.js 15", "React 19", "TypeScript 5.8", "TailwindCSS 4.1"],
  backend: ["Node.js", "Appwrite", "API Routes"],
  database: ["IndexedDB", "Appwrite Cloud", "Real-time Sync", "Upstash Redis"],
  deployment: ["Vercel", "Docker", "CDN"],
  monitoring: ["Better Stack", "Analytics", "Error Tracking", "Performance Metrics"],
};
```

Developer Tools:
- TypeScript - Full type safety with strict mode
- Component Library - Shadcn/ui + Radix UI primitives
- State Management - Zustand for efficient state handling
- Authentication - Appwrite Auth with OAuth providers
- Admin Panel - Comprehensive system management
Frontend:

```json
{
  "framework": "Next.js 15.3",
  "runtime": "React 19",
  "language": "TypeScript 5.8",
  "styling": "TailwindCSS 4.1",
  "components": "Shadcn/ui + Radix",
  "state": "Zustand",
  "routing": "React Router 7.6",
  "forms": "React Hook Form + Zod"
}
```

Tooling:

```json
{
  "packageManager": "pnpm",
  "buildTool": "Turbopack",
  "linting": "ESLint + Next.js",
  "formatting": "Prettier",
  "versioning": "Semantic + Changelog"
}
```

Backend:

```json
{
  "runtime": "Node.js",
  "framework": "Next.js API Routes",
  "database": "Appwrite Cloud",
  "auth": "Appwrite + OAuth",
  "storage": "Cloudinary",
  "ai": ["OpenRouter", "OpenAI", "Google Gemini"],
  "search": ["Parallel AI", "Tavily", "Exa"],
  "payments": "DODO Payments",
  "logging": "Better Stack",
  "rateLimit": "Upstash Redis"
}
```

Data Layer:

```json
{
  "local": "IndexedDB + Dexie.js",
  "cloud": "Appwrite Database",
  "sync": "Hybrid Local-First",
  "realtime": "WebSocket + Streaming",
  "caching": "Multi-layer TTL"
}
```
```mermaid
graph TB
    subgraph "Frontend Layer"
        A[Next.js 15 + React 19]
        B[TypeScript + TailwindCSS]
        C[Zustand State Management]
    end

    subgraph "Data Layer"
        D[IndexedDB Local Storage]
        E[Appwrite Cloud Database]
        F[Real-time Sync Engine]
    end

    subgraph "Service Layer"
        G[OpenRouter AI Models]
        H[Google Gemini Image Generation]
        I[Tavily Web Search]
        J[Cloudinary File Storage]
    end

    A --> D
    D <--> F
    F <--> E
    A --> G
    A --> H
    A --> I
    A --> J
```
| Prerequisite | Notes |
|---|---|
| Node.js 18+ | Download from nodejs.org |
| pnpm | npm install -g pnpm |
| Appwrite Account | Sign up at appwrite.io |
| API Keys | Optional for basic setup, required for full features |
Step 1: Clone & Install

```bash
# Clone the repository
git clone https://github.com/CyberBoyAyush/CappyChat.git
cd CappyChat

# Install dependencies
pnpm install
```

Step 2: Environment Setup

```bash
# Copy environment template
cp env.example .env.local

# Edit with your configuration
nano .env.local # or your preferred editor
```

Step 3: Database Configuration

```bash
# Automated setup (recommended)
pnpm run setup-appwrite

# Or configure manually (see Database Setup section)
```

Step 4: Launch Application

```bash
# Start development server
pnpm dev

# Open in browser
open http://localhost:3000
```

Success! Your CappyChat instance should now be running at http://localhost:3000
Create a .env.local file in your project root with the following variables:
```bash
# Appwrite Configuration
NEXT_PUBLIC_APPWRITE_ENDPOINT=https://cloud.appwrite.io/v1
NEXT_PUBLIC_APPWRITE_PROJECT_ID=your-project-id
NEXT_PUBLIC_APPWRITE_DATABASE_ID=your-database-id

# Collection IDs
NEXT_PUBLIC_APPWRITE_THREADS_COLLECTION_ID=threads
NEXT_PUBLIC_APPWRITE_MESSAGES_COLLECTION_ID=messages
NEXT_PUBLIC_APPWRITE_MESSAGE_SUMMARIES_COLLECTION_ID=message_summaries
NEXT_PUBLIC_APPWRITE_PROJECTS_COLLECTION_ID=projects
NEXT_PUBLIC_APPWRITE_GLOBAL_MEMORY_COLLECTION_ID=global_memory

# Server-side API Key (for admin operations)
APPWRITE_API_KEY=your-server-api-key

# Authentication URLs
NEXT_PUBLIC_AUTH_SUCCESS_URL=http://localhost:3000/auth/callback
NEXT_PUBLIC_AUTH_FAILURE_URL=http://localhost:3000/auth/error
NEXT_PUBLIC_VERIFICATION_URL=http://localhost:3000/auth/verify

# Admin Configuration
ADMIN_SECRET_KEY=your-secure-admin-key
```

```bash
# OpenRouter (for multiple AI models)
OPENROUTER_API_KEY=your-openrouter-key
# Get from: https://openrouter.ai/settings/keys

# OpenAI (for Whisper voice transcription)
OPENAI_API_KEY=your-openai-key
# Get from: https://platform.openai.com/api-keys

# Tavily (for web search)
TAVILY_API_KEY=your-tavily-key
# Get from: https://tavily.com/

# Cloudinary (for file uploads)
NEXT_PUBLIC_CLOUDINARY_CLOUD_NAME=your-cloud-name
CLOUDINARY_API_KEY=your-cloudinary-key
CLOUDINARY_API_SECRET=your-cloudinary-secret
# Get from: https://cloudinary.com/console
```

```bash
NODE_ENV=development
NEXT_PUBLIC_API_BASE_URL=http://localhost:3000
```

Note: For production deployment, update URLs to your domain and set NODE_ENV=production.
CappyChat uses Appwrite as the primary database with local IndexedDB for performance. Follow these steps to set up your database:
- Sign up at Appwrite Cloud or set up a self-hosted instance
- Create a new project and note the Project ID
- Create a new database and note the Database ID
Create the following collections with exact names and attributes:
Threads Collection (threads)

```typescript
{
  threadId: string,        // required, unique
  userId: string,          // required
  title: string,           // required
  updatedAt: datetime,     // required
  lastMessageAt: datetime, // required
  isPinned: boolean,       // optional, default: false
  tags: string[],          // optional
  isBranched: boolean,     // optional, default: false
  projectId: string,       // optional
  isShared: boolean,       // optional, default: false
  shareId: string,         // optional
  sharedAt: datetime       // optional
}
```

Messages Collection (messages)

```typescript
{
  messageId: string,          // required, unique
  threadId: string,           // required
  userId: string,             // required
  content: string,            // required
  role: string,               // required: "user" | "assistant" | "system" | "data"
  createdAt: datetime,        // required
  webSearchResults: string[], // optional
  attachments: string,        // optional, JSON string
  model: string,              // optional
  imgurl: string              // optional
}
```

Projects Collection (projects)

```typescript
{
  projectId: string,   // required, unique
  userId: string,      // required
  name: string,        // required
  description: string, // optional
  prompt: string,      // optional
  colorIndex: number,  // optional
  members: string[],   // optional, array of user IDs
  createdAt: datetime, // required
  updatedAt: datetime  // required
}
```

Message Summaries Collection (message_summaries)

```typescript
{
  summaryId: string,  // required, unique
  threadId: string,   // required
  messageId: string,  // required
  userId: string,     // required
  content: string,    // required
  createdAt: datetime // required
}
```

Global Memory Collection (global_memory)

```typescript
{
  userId: string,      // required
  memories: string[],  // required
  enabled: boolean,    // required, default: false
  createdAt: datetime, // required
  updatedAt: datetime  // required
}
```
- Enable Authentication Methods
  - Email/Password authentication
  - OAuth providers (Google, GitHub) - optional
- Set Collection Permissions
  - Read/Write: Authenticated users only
  - Document-level security: Users can only access their own data
  - Create permissions: Authenticated users
  - Update/Delete permissions: Document owners only
For optimal performance, create these indexes in your Appwrite console:
```sql
-- Threads Collection
CREATE INDEX idx_threads_user_lastmessage ON threads(userId, lastMessageAt DESC)
CREATE INDEX idx_threads_user_pinned ON threads(userId, isPinned)
CREATE INDEX idx_threads_project ON threads(projectId)

-- Messages Collection
CREATE INDEX idx_messages_thread_created ON messages(threadId, createdAt ASC)
CREATE INDEX idx_messages_user_thread ON messages(userId, threadId)

-- Projects Collection
CREATE INDEX idx_projects_user_updated ON projects(userId, updatedAt DESC)

-- Message Summaries Collection
CREATE INDEX idx_summaries_thread ON message_summaries(threadId, createdAt ASC)
```

- Navigate to your Appwrite project settings
- Enable Realtime for all collections
- Configure WebSocket connections for live updates
For convenience, you can use the automated setup script:
```bash
pnpm run setup-appwrite
```

This script will create the database structure automatically using your environment configuration.
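For reference, the following sketch shows the kind of calls such a setup script makes with the node-appwrite server SDK; the attribute sizes and the exact sequence are illustrative assumptions, and the real script may differ:

```typescript
import { Client, Databases } from "node-appwrite";

const client = new Client()
  .setEndpoint(process.env.NEXT_PUBLIC_APPWRITE_ENDPOINT!)
  .setProject(process.env.NEXT_PUBLIC_APPWRITE_PROJECT_ID!)
  .setKey(process.env.APPWRITE_API_KEY!); // server-side API key

const databases = new Databases(client);
const dbId = process.env.NEXT_PUBLIC_APPWRITE_DATABASE_ID!;

// Create the threads collection and a few of its attributes (illustrative only).
async function createThreadsCollection() {
  await databases.createCollection(dbId, "threads", "threads");
  await databases.createStringAttribute(dbId, "threads", "threadId", 255, true);
  await databases.createStringAttribute(dbId, "threads", "userId", 255, true);
  await databases.createStringAttribute(dbId, "threads", "title", 1024, true);
  await databases.createDatetimeAttribute(dbId, "threads", "lastMessageAt", true);
  await databases.createBooleanAttribute(dbId, "threads", "isPinned", false);
}
```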
Start the development server:
```bash
pnpm dev
```

The application will be available at http://localhost:3000.
```bash
# Development
pnpm dev              # Start development server with Turbopack
pnpm build            # Build for production
pnpm start            # Start production server
pnpm lint             # Run ESLint

# Database
pnpm setup-appwrite   # Automated database setup

# Version Management
pnpm tag-version      # Create version tag
pnpm update-changelog # Update changelog

# Analysis
pnpm build:analyze    # Analyze bundle size
```

System Overview
```mermaid
graph TB
    subgraph "Client Layer"
        A[User Action] --> B[Local Storage - IndexedDB]
        B --> C[Immediate UI Update]
        B --> D[Background Sync Queue]
    end

    subgraph "Sync Layer"
        D --> E[Appwrite Cloud Database]
        E --> F[Real-time Events]
        F --> G[WebSocket Broadcast]
    end

    subgraph "Multi-Device"
        G --> H[Other Devices/Tabs]
        H --> I[Local Update]
        I --> J[UI Refresh]
    end

    style A fill:#3b82f6,stroke:#1e40af,color:#fff
    style C fill:#10b981,stroke:#059669,color:#fff
    style E fill:#f59e0b,stroke:#d97706,color:#fff
    style J fill:#10b981,stroke:#059669,color:#fff
```
Local-First Operations
| Component | Technology | Performance | Features |
|---|---|---|---|
| Primary Storage | IndexedDB + Dexie.js | Sub-100ms | Structured data, transactions |
| Optimistic Updates | React State + Zustand | Instant | Immediate UI feedback |
| Offline Support | Service Worker | 100% | Full functionality offline |
| Smart Caching | Custom TTL System | Intelligent | Auto-invalidation, compression |
Benefits:
- Instant Response - No waiting for network requests
- Offline First - Works without internet connection
- Conflict Resolution - Automatic data conflict handling (see the sketch after this list)
- Efficient Storage - Optimized data structures
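One common way to implement conflict resolution is last-write-wins on `updatedAt`; the sketch below reuses the field names from the schemas above, but the merge policy itself is an assumption rather than CappyChat's documented behavior:

```typescript
// Hypothetical last-write-wins merge between a local and a remote record.
interface SyncableRecord {
  id: string;
  updatedAt: Date;
}

function resolveConflict<T extends SyncableRecord>(local: T, remote: T): T {
  // Keep whichever copy was modified most recently.
  return local.updatedAt.getTime() >= remote.updatedAt.getTime() ? local : remote;
}
```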
Cloud Synchronization
```typescript
// Sync Strategy Implementation
const syncStrategy = {
  immediate: ["user_actions", "critical_data"],
  batched: ["analytics", "preferences"],
  background: ["cache_updates", "cleanup"],
  realtime: ["messages", "collaboration"],
};
```

Key Features:
- Background Sync - Non-blocking synchronization with retry logic
- Cross-Device Sync - Real-time updates across all user devices
- WebSocket Events - Persistent connections for instant updates (see the subscription sketch after this list)
- Conflict Resolution - Automatic handling of data conflicts
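A minimal sketch of listening for those WebSocket events with the Appwrite Web SDK; the channel string reuses the database ID from the environment variables above, and the handler body is illustrative:

```typescript
import { Client } from "appwrite";

const client = new Client()
  .setEndpoint(process.env.NEXT_PUBLIC_APPWRITE_ENDPOINT!)
  .setProject(process.env.NEXT_PUBLIC_APPWRITE_PROJECT_ID!);

// Listen for document changes in the messages collection and update local state.
const channel = `databases.${process.env.NEXT_PUBLIC_APPWRITE_DATABASE_ID}.collections.messages.documents`;

const unsubscribe = client.subscribe(channel, (event) => {
  // event.events describes what happened (create/update/delete);
  // event.payload is the affected document.
  console.log("Realtime event:", event.events, event.payload);
});

// Call unsubscribe() when the listening component unmounts.
```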
Bundle Optimization
| Optimization | Result | Technique |
|---|---|---|
| Bundle Size | 106kB | Dynamic imports, code splitting |
| Initial Load | <1.2s | Critical path optimization |
| Tree Shaking | 40% reduction | Unused code elimination |
| Compression | 70% smaller | Gzip + Brotli compression |
```typescript
// Dynamic Import Example
const LazyComponent = lazy(() =>
  import("./HeavyComponent").then((module) => ({
    default: module.HeavyComponent,
  }))
);
```

Runtime Performance
Performance Metrics:
- First Contentful Paint: <1.2s
- Largest Contentful Paint: <2.5s
- Cumulative Layout Shift: <0.1
- First Input Delay: <100ms
- Time to Interactive: <3.0s

Optimization Techniques:
- Optimistic Updates - Immediate UI feedback
- Virtual Scrolling - Efficient large list rendering
- Memoization - Strategic React.memo and useMemo (see the sketch after this list)
- Lazy Loading - Dynamic imports for heavy components
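A short sketch of the memoization technique listed above; the `MessageRow` component and its props are hypothetical:

```typescript
import React, { memo, useMemo } from "react";

interface MessageRowProps {
  content: string;
  createdAt: Date;
}

// memo() skips re-rendering when props are shallow-equal,
// which matters for long chat lists.
const MessageRow = memo(function MessageRow({ content, createdAt }: MessageRowProps) {
  // useMemo caches the derived timestamp string between renders.
  const formatted = useMemo(() => createdAt.toLocaleTimeString(), [createdAt]);
  return (
    <div>
      <span>{formatted}</span>
      <p>{content}</p>
    </div>
  );
});

export default MessageRow;
```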
Chat & AI:
- POST /api/chat-messaging - Main chat endpoint with streaming support
- POST /api/ai-text-generation - Title generation and text enhancement
- POST /api/speech-to-text - Voice input transcription using Whisper

Image Generation:
- POST /api/image-generation - Image generation using OpenRouter with Google Gemini
  - Context-aware image generation with conversation history
  - Model: Gemini 2.5 Flash Image Preview (nano banana)
  - Automatic upload to Cloudinary for storage

Search & Tools:
- POST /api/web-search - Intelligent web search with model-driven tool calling (see the dispatch sketch after this list)
  - Tool System: AI model automatically selects appropriate tools
  - Web Search: Parallel AI multi-query search with Tavily image support
  - Retrieval: Exa live website crawling with AI-powered summaries
  - Weather: OpenWeather API with comprehensive weather data
  - Greeting: Lightweight responses for casual greetings
- POST /api/reddit-search - Reddit-specific search functionality
- POST /api/study-mode - Enhanced study mode with PDF parsing and web search
- POST /api/plan-mode - Plan Mode for creating diagrams and MVPs with AI artifacts

Files:
- POST /api/upload - File upload to Cloudinary
- POST /api/files - List user files
- DELETE /api/files - Delete user files

Admin:
- POST /api/admin/stats - System statistics and metrics
- POST /api/admin/manage-user - User management operations
- POST /api/admin/reset-limits - Reset user credit limits
- POST /api/admin/delete-data - Data cleanup operations
- POST /api/admin/bulk-operations - Bulk administrative tasks
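A library-agnostic sketch of the tool-calling flow described for POST /api/web-search; the tool names mirror the list above, while the dispatcher and the stubbed tool bodies are simplified assumptions:

```typescript
// Simplified illustration of model-driven tool selection.
type ToolName = "web_search" | "retrieval" | "weather" | "greeting";

interface ToolCall {
  tool: ToolName;
  args: Record<string, unknown>;
}

const tools: Record<ToolName, (args: Record<string, unknown>) => Promise<string>> = {
  web_search: async (args) => `search results for ${args.query}`, // Parallel AI + Tavily
  retrieval: async (args) => `crawled summary of ${args.url}`,    // Exa live crawling
  weather: async (args) => `forecast for ${args.location}`,       // OpenWeather
  greeting: async () => "Hello! How can I help?",                 // lightweight path
};

// The model decides which tool to call; the route executes it and
// feeds the result back into the conversation.
async function dispatch(call: ToolCall): Promise<string> {
  return tools[call.tool](call.args);
}
```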
Example: Chat Messaging (streaming)

```
// Request
POST /api/chat-messaging
{
  "messages": [
    {
      "role": "user",
      "content": "Hello, how are you?"
    }
  ],
  "model": "Gemini 2.5 Flash",
  "conversationStyle": "balanced",
  "userId": "user123",
  "isGuest": false
}

// Response (Streaming)
data: {"type":"text","text":"Hello! I'm doing well, thank you for asking."}
data: {"type":"finish","usage":{"promptTokens":10,"completionTokens":12}}
```
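A minimal sketch of consuming that streaming response from the browser with fetch and a stream reader; the request body mirrors the documented example above, and error handling is omitted:

```typescript
// Hypothetical client for the documented streaming chat endpoint.
async function streamChat(messages: { role: string; content: string }[]) {
  const res = await fetch("/api/chat-messaging", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages,
      model: "Gemini 2.5 Flash",
      conversationStyle: "balanced",
      userId: "user123",
      isGuest: false,
    }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();

  // Read chunks as they arrive and log each "data:" event line.
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const line of decoder.decode(value, { stream: true }).split("\n")) {
      if (line.startsWith("data:")) console.log(JSON.parse(line.slice(5)));
    }
  }
}
```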
Example: Image Generation

```
// Request
POST /api/image-generation
{
  "prompt": "A beautiful sunset over mountains",
  "conversationHistory": [...], // Optional: for context-aware generation
  "userId": "user123"
}

// Response
{
  "success": true,
  "imageUrl": "https://res.cloudinary.com/...",
  "model": "Gemini 2.5 Flash Image Preview"
}
```

```typescript
interface UserPreferences {
  tier: "free" | "premium" | "admin";
  freeCredits: number; // Free tier: 80, Premium: 1200
  premiumCredits: number; // Free tier: 10, Premium: 600
  superPremiumCredits: number; // Free tier: 2, Premium: 50
  lastResetDate?: string;
  customProfile?: {
    customName?: string;
    aboutUser?: string;
  };
  subscription?: {
    tier: "FREE" | "PREMIUM";
    status: "active" | "cancelled" | "expired";
    customerId?: string;
    subscriptionId?: string;
    currentPeriodEnd?: string;
    cancelAtPeriodEnd?: boolean;
  };
}
```

```typescript
interface Thread {
  id: string;
  userId: string;
  title: string;
  createdAt: Date;
  updatedAt: Date;
  lastMessageAt: Date;
  isPinned: boolean;
  tags: string[];
  isBranched: boolean;
  projectId?: string;
  isShared?: boolean;
  shareId?: string;
  sharedAt?: Date;
}
```

```typescript
interface DBMessage {
  id: string;
  threadId: string;
  content: string;
  role: "user" | "assistant" | "system" | "data";
  createdAt: Date;
  webSearchResults?: string[];
  webSearchImgs?: string[]; // Web search image URLs
  attachments?: FileAttachment[];
  model?: string;
  imgurl?: string;
  isPlan?: boolean; // Indicates Plan Mode message
}
```

```typescript
interface Project {
  id: string;
  name: string;
  description?: string;
  prompt?: string;
  colorIndex?: number;
  members?: string[]; // Array of user IDs
  createdAt: Date;
  updatedAt: Date;
}
```

```typescript
interface PlanArtifact {
  id: string;
  artifactId: string;
  threadId: string;
  messageId: string;
  userId: string;
  type: "mvp" | "diagram";
  title: string;
  description?: string;
  version: number;

  // MVP fields
  htmlCode?: string;
  cssCode?: string;
  jsCode?: string;
  framework?: "vanilla" | "react" | "svelte" | "vue";
  theme?: "light" | "dark";

  // Diagram fields
  diagramType?: "erd" | "flowchart" | "sequence" | "architecture" | "state_machine" | "user_journey";
  diagramCode?: string; // Mermaid syntax
  outputFormat?: "mermaid";
  sqlSchema?: string;
  prismaSchema?: string;

  isPublic?: boolean;
  createdAt: Date;
  updatedAt: Date;
}
```

CappyChat is optimized for deployment on Vercel with automatic builds and deployments.
- Fork the repository to your GitHub account
- Connect to Vercel
  - Visit vercel.com and sign in
  - Import your forked repository
  - Vercel will automatically detect the Next.js configuration
- Configure Environment Variables
  - Add all environment variables from your .env.local
  - Ensure production URLs are used (not localhost)
  - Set NODE_ENV=production
- Deploy

  ```bash
  # Manual deployment
  vercel --prod

  # Or push to main branch for automatic deployment
  git push origin main
  ```
Create a Dockerfile in the project root:
```dockerfile
FROM node:18-alpine AS base

# Install dependencies only when needed
FROM base AS deps
RUN apk add --no-cache libc6-compat
WORKDIR /app

# Install dependencies based on the preferred package manager
COPY package.json pnpm-lock.yaml* ./
RUN npm install -g pnpm && pnpm install --frozen-lockfile

# Rebuild the source code only when needed
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm install -g pnpm && pnpm build

# Production image, copy all the files and run next
FROM base AS runner
WORKDIR /app
ENV NODE_ENV production

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs
EXPOSE 3000
ENV PORT 3000
ENV HOSTNAME "0.0.0.0"

CMD ["node", "server.js"]
```

Build and run:

```bash
docker build -t cappychat .
docker run -p 3000:3000 cappychat
```

Dokploy is a modern deployment platform for VPS servers.
- Install Dokploy on your VPS

  ```bash
  curl -sSL https://dokploy.com/install.sh | sh
  ```

- Create New Application
  - Access Dokploy dashboard at https://your-vps-ip:3000
  - Create new application from GitHub repository
  - Select Node.js/Next.js template
- Configure Environment
  - Add all environment variables from .env.local
  - Set build command: pnpm build
  - Set start command: pnpm start
- Deploy
  - Dokploy will automatically build and deploy
  - SSL certificates are handled automatically
  - Monitor logs and metrics from dashboard
Coolify is an open-source alternative to Heroku for VPS deployments.
- Install Coolify

  ```bash
  curl -fsSL https://cdn.coollabs.io/coolify/install.sh | bash
  ```

- Setup Application
  - Access Coolify at https://your-vps-ip:8000
  - Connect GitHub repository
  - Configure as Node.js application
- Environment Configuration

  ```bash
  # Add environment variables in Coolify dashboard
  NODE_ENV=production
  NEXT_PUBLIC_API_BASE_URL=https://yourdomain.com
  # ... other variables
  ```

- Deploy & Monitor
  - Automatic deployments on git push
  - Built-in monitoring and logging
  - SSL certificates via Let's Encrypt
For advanced users who prefer manual setup:
```bash
# Install Node.js and pnpm
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs
npm install -g pnpm pm2

# Clone and setup
git clone https://github.com/CyberBoyAyush/CappyChat.git
cd CappyChat
pnpm install
pnpm build

# Start with PM2
pm2 start ecosystem.config.js
pm2 startup
pm2 save
```

Netlify
- Set build command: pnpm build
- Set publish directory: .next
- Add environment variables in Netlify dashboard
Railway
- Connect GitHub repository
- Railway auto-detects Next.js configuration
- Add environment variables in Railway dashboard
DigitalOcean App Platform
- One-click deployment from GitHub
- Automatic scaling and SSL
- Built-in monitoring and alerts
Update these variables for production deployment:
```bash
# Production URLs
NEXT_PUBLIC_AUTH_SUCCESS_URL=https://yourdomain.com/auth/callback
NEXT_PUBLIC_AUTH_FAILURE_URL=https://yourdomain.com/auth/error
NEXT_PUBLIC_VERIFICATION_URL=https://yourdomain.com/auth/verify
NEXT_PUBLIC_API_BASE_URL=https://yourdomain.com

# Production settings
NODE_ENV=production

# Security
ADMIN_SECRET_KEY=your-production-admin-key
```

- CDN: Vercel/Cloudflare automatically provides global CDN
- Caching: Configure appropriate cache headers
- Monitoring: Set up error tracking and performance monitoring
- SSL: Automatic HTTPS with most platforms or configure SSL certificates
- Load Balancing: Use nginx or platform load balancers for high traffic
```nginx
# Nginx configuration for VPS deployment
server {
    listen 80;
    server_name yourdomain.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name yourdomain.com;

    ssl_certificate /path/to/certificate.crt;
    ssl_certificate_key /path/to/private.key;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}
```

Create ecosystem.config.js for PM2 process management:
```javascript
module.exports = {
  apps: [
    {
      name: "cappychat",
      script: "server.js",
      instances: "max",
      exec_mode: "cluster",
      env: {
        NODE_ENV: "production",
        PORT: 3000,
      },
      error_file: "./logs/err.log",
      out_file: "./logs/out.log",
      log_file: "./logs/combined.log",
      time: true,
    },
  ],
};
```

```
CappyChat/
├── app/                      # Next.js App Router
│   ├── api/                  # Backend API routes
│   │   ├── admin/            # Admin endpoints
│   │   ├── chat-messaging/   # Main chat API
│   │   ├── image-generation/ # Image generation
│   │   ├── speech-to-text/   # Voice input
│   │   ├── upload/           # File uploads
│   │   └── web-search/       # Search functionality
│   ├── layout.tsx            # Root layout
│   └── static-app-shell/     # SPA shell
├── frontend/                 # React application
│   ├── components/           # Reusable UI components
│   ├── contexts/             # React contexts
│   ├── hooks/                # Custom hooks
│   ├── routes/               # Page components
│   └── stores/               # Zustand stores
├── lib/                      # Core utilities
│   ├── appwrite*.ts          # Appwrite services
│   ├── hybridDB.ts           # Local-first database
│   ├── models.ts             # AI model configurations
│   ├── tierSystem.ts         # User tier management
│   └── streamingSync.ts      # Real-time sync
├── docs/                     # Documentation
├── public/                   # Static assets
└── scripts/                  # Build and utility scripts
```
- Strict Mode: Full TypeScript strict mode enabled
- Type Safety: Comprehensive type definitions for all APIs
- Interface Design: Clear interfaces for all data structures
- Generic Types: Reusable generic types for common patterns
```typescript
// Functional components with TypeScript
interface ChatMessageProps {
  message: DBMessage;
  onEdit?: (messageId: string) => void;
  onDelete?: (messageId: string) => void;
}

const ChatMessage: React.FC<ChatMessageProps> = ({
  message,
  onEdit,
  onDelete,
}) => {
  // Component implementation
};
```

```typescript
// Zustand store pattern
interface ChatStore {
  messages: DBMessage[];
  activeThread: string | null;
  addMessage: (message: DBMessage) => void;
  setActiveThread: (threadId: string) => void;
}

const useChatStore = create<ChatStore>((set) => ({
  messages: [],
  activeThread: null,
  addMessage: (message) =>
    set((state) => ({
      messages: [...state.messages, message],
    })),
  setActiveThread: (threadId) => set({ activeThread: threadId }),
}));
```

```typescript
// Always prioritize local operations
const optimisticUpdate = async (data: any) => {
  // 1. Update local state immediately
  updateLocalState(data);

  // 2. Sync to cloud in background
  try {
    await syncToCloud(data);
  } catch (error) {
    handleSyncError(error);
  }
};
```

- Dynamic Imports: Use for heavy components
- Code Splitting: Automatic route-based splitting
- Tree Shaking: Eliminate unused code
- Package Optimization: Selective imports from large libraries
- Multi-level Caching: Browser, service worker, application
- Smart Invalidation: Automatic cache invalidation (see the TTL cache sketch after this list)
- Progressive Loading: Critical data first
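A minimal sketch of the multi-level, TTL-based caching described above; the `TTLCache` class and its defaults are hypothetical rather than CappyChat's actual cache layer:

```typescript
// Hypothetical in-memory TTL cache used to illustrate the caching strategy.
class TTLCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private defaultTtlMs = 60_000) {}

  set(key: string, value: V, ttlMs = this.defaultTtlMs) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazy invalidation on read
      return undefined;
    }
    return entry.value;
  }
}

// Usage: cache thread lists briefly to avoid repeated database reads.
const threadCache = new TTLCache<string[]>(30_000);
```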
```typescript
import { render, screen } from "@testing-library/react";
import { ChatMessage } from "./ChatMessage";

describe("ChatMessage", () => {
  it("renders message content correctly", () => {
    const message = {
      id: "1",
      content: "Hello world",
      role: "user" as const,
      createdAt: new Date(),
    };

    render(<ChatMessage message={message} />);
    expect(screen.getByText("Hello world")).toBeInTheDocument();
  });
});
```

- API Testing: Test all API endpoints
- Database Testing: Test database operations
- Real-time Testing: Test WebSocket connections
- Performance Testing: Test load and response times
We welcome contributions from developers of all skill levels! Whether you're fixing bugs, adding features, or improving documentation, every contribution helps make CappyChat better.
- Fork the repository on GitHub
- Clone your fork locally

  ```bash
  git clone https://github.com/your-username/CappyChat.git
  cd CappyChat
  ```

- Create a feature branch

  ```bash
  git checkout -b feature/your-feature-name
  ```

- Install dependencies

  ```bash
  pnpm install
  ```

- Set up your development environment (see Getting Started)
- feature/description - New features
- bugfix/description - Bug fixes
- docs/description - Documentation updates
- refactor/description - Code refactoring
- test/description - Test improvements
Follow Conventional Commits:

```
feat: add voice input support for mobile devices
fix: resolve real-time sync issue with large messages
docs: update API documentation for image generation
refactor: optimize database query performance
test: add unit tests for chat message component
```

- TypeScript: All code must be properly typed
- ESLint: Follow the project's linting rules
- Prettier: Use consistent code formatting
- Testing: Add tests for new features and bug fixes
- Ensure your code follows the project standards

  ```bash
  pnpm lint
  pnpm build
  ```

- Write clear commit messages following conventional commits
- Update documentation if you're adding new features
- Add tests for new functionality
- Create a pull request with:
  - Clear title and description
  - Reference to related issues
  - Screenshots for UI changes
  - Testing instructions
- Testing: Expand test coverage for components and APIs
- Performance: Optimize bundle size and runtime performance
- Accessibility: Improve WCAG compliance and keyboard navigation
- Mobile: Enhance mobile user experience
- Documentation: Improve API documentation and guides
- Internationalization: Multi-language support
- Themes: Additional theme options and customization
- Integrations: New AI model integrations
- Collaboration: Enhanced team collaboration features
- Analytics: Usage analytics and insights
- Be respectful and inclusive in all interactions
- Provide constructive feedback in code reviews
- Help newcomers get started with the project
- Follow the project's coding standards and conventions
- Report issues clearly with reproduction steps
Environment Setup Issues

Problem: Application fails to start

```bash
# Check Node.js version
node --version # Should be 18+

# Clear and reinstall dependencies
rm -rf node_modules .next
pnpm install

# Verify environment variables
cat .env.local | grep -v "^#"
```

Solutions:
- Verify all required environment variables are set
- Check Appwrite project configuration
- Ensure Node.js version is 18 or higher
- Clear cache and reinstall dependencies
Database Connection Issues

Problem: Database operations fail

```typescript
// Debug database connection
const testConnection = async () => {
  try {
    const result = await databases.listDocuments("main", "threads");
    console.log("Database connected:", result);
  } catch (error) {
    console.error("Database error:", error);
  }
};
```

Solutions:
- Verify Appwrite endpoint and project ID
- Check API key permissions (databases.read, databases.write, users.read)
- Ensure collections exist with correct names and schemas
- Verify user authentication status
Real-time Sync Issues

Problem: Messages don't sync across devices

```typescript
// Debug WebSocket connection
const debugRealtime = () => {
  console.log("WebSocket state:", client.realtime.connection.state);

  client.realtime.subscribe(
    "databases.main.collections.messages.documents",
    (response) => console.log("Realtime event:", response)
  );
};
```

Solutions:
- Check Appwrite Realtime is enabled
- Verify WebSocket connections in browser dev tools
- Check user permissions for collections
- Clear local storage: localStorage.clear()
Performance Issues

Problem: Slow loading or high memory usage

```bash
# Analyze bundle size
pnpm build:analyze

# Check performance metrics
pnpm lighthouse

# Monitor memory usage
# Open Chrome DevTools > Performance tab
```

Solutions:
- Check bundle size and optimize imports
- Verify CDN and caching configuration
- Monitor database query performance
- Check for memory leaks in React components
| Support Channel | Where |
|---|---|
| Bug Reports | GitHub Issues |
| Discussions | Community Q&A |
| Documentation | Comprehensive Docs |
| Direct Support | connect@ayush-sharma.in |
Free: Perfect for trying out CappyChat
Premium: For power users and professionals

Credit Usage:
- Free Models (1 credit): Gemini 2.5 Flash Lite, OpenAI 5 Mini
- Premium Models (1 credit): GPT-5, Claude 4, Grok 4
- Super Premium Models (1 credit): DeepSeek R1, Qwen3 Max
- Image Generation Models (10 credits): Gemini Nano Banana

Upgrade to Premium • View All Models
GNU General Public License v3.0 - See the LICENSE file for details
This project is licensed under GPL v3 with additional restrictions for commercial use
AI Models • Backend Infrastructure • Deployment & Hosting • Framework • Language
Special Recognition:
- Open Source Community for the incredible tools and libraries
- Beta Testers who provided valuable feedback and bug reports
- Contributors who help make CappyChat better every day
- Users who trust CappyChat for their AI conversations
This project is licensed under GPL v3 with additional restrictions:
- Personal Use: Free to use, modify, and distribute
- Open Source Projects: Compatible with GPL-compatible licenses
- Commercial Use: Requires explicit permission from copyright holders
- Modifications: Must be disclosed and properly attributed
- Network Deployment: Modified versions require source code disclosure
For commercial licensing or permission requests, contact the copyright holders.