Self-hosted AI QA Agent using Ollama + n8n (or Node.js) + Slack.
```bash
# Install dependencies
npm install

# Configure (copy .env.example to .env and fill in values)
cp .env.example .env

# Start the server
npm start
```

Example `.env`:

```env
OLLAMA_URL=http://localhost:11434
OLLAMA_MODEL=qwen3:8b
PORT=3000

# Slack (optional)
SLACK_WEBHOOK_URL= # Get from Slack app configuration
```

| Endpoint | Method | Description |
|---|---|---|
| `/` | GET | Health check |
| `/health` | GET | Health status |
| `/webhook` | POST | Generic webhook (send `{"message": "your question"}`) |
| `/slack` | POST | Slack events endpoint |
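Under the hood, `/webhook` forwards the incoming message to the local Ollama instance. A minimal sketch of that call, assuming Ollama's standard `/api/generate` endpoint (the helper names `buildOllamaPayload` and `askOllama` are illustrative, not this repo's actual exports):

```javascript
// Build the request body for Ollama's /api/generate endpoint.
function buildOllamaPayload(message) {
  return {
    model: process.env.OLLAMA_MODEL || "qwen3:8b",
    prompt: message,
    stream: false, // return one JSON object instead of a token stream
  };
}

// Send the question to Ollama and return the generated answer text.
// Uses the global fetch available in Node.js 18+.
async function askOllama(message) {
  const base = process.env.OLLAMA_URL || "http://localhost:11434";
  const res = await fetch(`${base}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildOllamaPayload(message)),
  });
  if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
  const data = await res.json();
  return data.response; // the model's answer
}
```

With Ollama running locally, `askOllama("What open bugs do we have?")` resolves to the model's reply.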
```bash
# Test with curl
curl -X POST http://localhost:3000/webhook \
  -H "Content-Type: application/json" \
  -d '{"message":"What open bugs do we have?"}'
```

To set up the Slack webhook:

- Go to https://api.slack.com/apps
- Create new app → "From scratch"
- Select your workspace
- Go to Incoming Webhooks → Activate
- "Add New Webhook to Workspace" → Select channel
- Copy the webhook URL → add it to `.env`
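Once `SLACK_WEBHOOK_URL` is set, the agent can forward its answers to the chosen channel. A hedged sketch (the function names are illustrative; Slack incoming webhooks accept a simple `{"text": "..."}` payload):

```javascript
// Shape the payload for a Slack incoming webhook.
function slackMessage(text) {
  return { text };
}

// Post an answer to Slack; no-op if the webhook isn't configured,
// since Slack support is optional in this setup.
async function postToSlack(text) {
  const url = process.env.SLACK_WEBHOOK_URL;
  if (!url) return;
  await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(slackMessage(text)),
  });
}
```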
Mock data lives in the `data/` folder:

- `mock_jira.json`: simulated Jira issues
- `mock_confluence.json`: simulated Confluence pages
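The agent can filter these files to build context for a question like "What open bugs do we have?". A sketch, assuming each entry in `data/mock_jira.json` looks like `{ key, summary, status, type }` (the real schema in this repo may differ):

```javascript
// Keep only bugs that are still open.
function openBugs(issues) {
  return issues.filter(
    (i) => i.type === "Bug" && i.status !== "Done" && i.status !== "Closed"
  );
}

// Inline sample standing in for require("./data/mock_jira.json"):
const sample = [
  { key: "QA-1", summary: "Login fails on Safari", status: "Open", type: "Bug" },
  { key: "QA-2", summary: "Add dark mode", status: "Open", type: "Story" },
  { key: "QA-3", summary: "Crash on upload", status: "Done", type: "Bug" },
];

console.log(openBugs(sample).map((i) => i.key)); // → [ 'QA-1' ]
```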
```
Slack → Webhook → QA Agent (Node.js) → Ollama (qwen3:8b)
                        ↓
            Mock Data (Jira/Confluence)
```
- Replace mock data with real Jira/Confluence APIs
- Add n8n as orchestrator (see docker-compose.yml)
- Deploy to a server
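For the first step, the mock file could be swapped for Jira's REST search API. A rough sketch only: `JIRA_BASE_URL` and `JIRA_TOKEN` are assumed new env vars (not yet in `.env`), and the bearer-token auth shown here fits Jira Server/Data Center personal access tokens; Jira Cloud typically uses basic auth with an API token instead.

```javascript
// Build a JQL query for open bugs in a project (illustrative helper).
function bugJql(project) {
  return `project = ${project} AND type = Bug AND status != Done`;
}

// Query Jira's REST search endpoint and return the matching issues.
async function searchJira(jql) {
  const url =
    `${process.env.JIRA_BASE_URL}/rest/api/2/search` +
    `?jql=${encodeURIComponent(jql)}`;
  const res = await fetch(url, {
    headers: {
      Authorization: `Bearer ${process.env.JIRA_TOKEN}`,
      Accept: "application/json",
    },
  });
  if (!res.ok) throw new Error(`Jira returned ${res.status}`);
  return (await res.json()).issues;
}
```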