TechBrief is an autonomous research and information synthesis system designed for AI backend engineers. It aggregates tech news daily, synthesizes insights using local AI models, and provides a REST API for accessing research findings.
✨ Daily Automated Research with Skill Rotation
- Automatically runs at scheduled times (default: 09:00 AM)
- Rotates through 15+ technical skills (FastAPI, Kubernetes, Docker, etc.)
- Each day focuses on one skill for targeted research
- Aggregates tech news from multiple sources (Hacker News, Medium, Dev.to)
- Stores findings in PostgreSQL database
📨 Slack Notifications
- Daily skill-focused reports sent directly to Slack
- Beautiful formatted messages with top articles
- Configure with your Slack webhook URL
- Automatic report delivery each morning
🤖 Local AI Synthesis
- Uses Ollama with Mistral/Llama2 for local model inference
- Generates intelligent summaries from articles
- Extracts key technical keywords
- No external API calls - fully private
🐳 Docker-based Self-Serve
- Everything containerized and ready to deploy
- Single command startup with docker-compose
- Pre-configured PostgreSQL + Ollama + FastAPI backend
- Hot-reload development mode
🌐 REST API
- Browse aggregated articles
- View AI-generated summaries
- Get research statistics
- Trigger research manually
- Health check endpoints
See QUICKSTART.md for the fastest setup!
```bash
# Clone and setup
cp .env.example .env

# Start all services
docker-compose up -d

# Pull AI model (5-10 min)
docker exec techbrief_ollama ollama pull mistral

# Check health
curl http://localhost:8000/api/research/health

# Access API
open http://localhost:8000/docs
```

Or use the CLI tool:
```bash
./cli.py init
./cli.py health
./cli.py articles
```

```
┌─────────────────┐
│ Hacker News     │
│ Medium          │  ← Aggregated daily
│ Dev.to          │
└────────┬────────┘
         │
    ┌────▼─────────────┐
    │ FastAPI Backend  │
    │  (src/main.py)   │
    └────┬────────┬────┘
         │        │
   ┌─────▼──┐  ┌──▼──────────┐
   │ Ollama │  │ PostgreSQL  │
   │ (Local)│  │ (Database)  │
   └────────┘  └─────────────┘
         │
   ┌─────▼─────────┐
   │ REST API      │
   │ (localhost:   │
   │  8000)        │
   └───────────────┘
```
```
# Get latest articles
GET /api/research/articles?limit=20

# Get processed articles with AI summaries
GET /api/research/articles/processed

# Get today's articles
GET /api/research/articles/today

# Get specific article
GET /api/research/articles/{id}

# Get research statistics
GET /api/research/stats

# Trigger research manually
POST /api/research/run-research

# Send test Slack report
POST /api/research/send-test-slack?skill=FastAPI

# Get research sessions/logs
GET /api/research/sessions

# Health check
GET /api/research/health

# Application status
GET /status

# Root info
GET /
```

Full API documentation: http://localhost:8000/docs (Swagger UI)
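Calling these endpoints from Python takes only the standard library. A minimal client sketch, assuming the default base URL from the Quick Start (the `api_get` helper name is illustrative, not part of the project):

```python
import json
import urllib.request

BASE = "http://localhost:8000/api/research"

def build_url(path: str) -> str:
    """Join an endpoint path onto the API base URL."""
    return f"{BASE}{path}"

def api_get(path: str):
    """GET a JSON endpoint and decode the response body."""
    with urllib.request.urlopen(build_url(path)) as resp:
        return json.loads(resp.read())

# Usage (requires the stack to be running):
#   api_get("/stats")
#   api_get("/articles?limit=20")
```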
Edit .env to customize:
```
# Database
DB_NAME=techbrief_db
DB_USER=postgres
DB_PASSWORD=postgres123

# AI Model
OLLAMA_BASE_URL=http://ollama:11434
OLLAMA_MODEL=mistral   # Change to: llama2, neural-chat, tinyllama, etc.

# Daily Schedule (24-hour format)
RESEARCH_SCHEDULE_HOUR=09
RESEARCH_SCHEDULE_MINUTE=00

# Slack Configuration
SLACK_ENABLED=true
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK
SLACK_CHANNEL=#techbrief

# Logging
LOG_LEVEL=INFO
DEBUG=False
```

| Variable | Default | Purpose |
|---|---|---|
| `DB_NAME` | `techbrief_db` | Database name |
| `DB_USER` | `postgres` | DB username |
| `DB_PASSWORD` | `postgres123` | DB password |
| `OLLAMA_BASE_URL` | `http://ollama:11434` | Ollama server address |
| `OLLAMA_MODEL` | `mistral` | AI model to use |
| `RESEARCH_SCHEDULE_HOUR` | `09` | Daily run hour |
| `RESEARCH_SCHEDULE_MINUTE` | `00` | Daily run minute |
| `SLACK_ENABLED` | `false` | Enable/disable Slack notifications |
| `SLACK_WEBHOOK_URL` | - | Slack webhook URL for sending messages |
| `SLACK_CHANNEL` | `#techbrief` | Slack channel for reports |
| `LOG_LEVEL` | `INFO` | Logging verbosity |
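A minimal sketch of how `src/config.py` might read these variables, assuming plain `os.getenv` with the defaults above (the real module may use pydantic settings instead):

```python
import os

class Settings:
    """Read TechBrief settings from the environment, falling back to defaults."""
    db_name = os.getenv("DB_NAME", "techbrief_db")
    db_user = os.getenv("DB_USER", "postgres")
    ollama_base_url = os.getenv("OLLAMA_BASE_URL", "http://ollama:11434")
    ollama_model = os.getenv("OLLAMA_MODEL", "mistral")
    # Hours/minutes arrive as zero-padded strings ("09"); int() parses them.
    schedule_hour = int(os.getenv("RESEARCH_SCHEDULE_HOUR", "09"))
    schedule_minute = int(os.getenv("RESEARCH_SCHEDULE_MINUTE", "00"))
    # Any casing of "true" enables Slack; everything else disables it.
    slack_enabled = os.getenv("SLACK_ENABLED", "false").lower() == "true"
```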
To enable daily Slack reports for skill-focused research:
1. Create a Slack App:
- Go to api.slack.com/apps
- Click "Create New App" → "From scratch"
- Give your app a name (e.g., "TechBrief")
- Select your workspace
2. Enable Incoming Webhooks:
- In your app settings, go to "Incoming Webhooks"
- Toggle "Activate Incoming Webhooks" to ON
- Click "Add New Webhook to Workspace"
- Select the channel where you want reports (e.g., #techbrief)
- Click "Allow"
3. Configure in .env:
```
SLACK_ENABLED=true
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/YOUR/WEBHOOK/URL
SLACK_CHANNEL=#techbrief
```

4. Test the integration:

```bash
# Via API
curl -X POST "http://localhost:8000/api/research/send-test-slack?skill=FastAPI"

# Via CLI
./cli.py test-slack Kubernetes
```

Every day, the system selects one technical skill to focus on:
- 15+ Skills: FastAPI, Kubernetes, Docker, PostgreSQL, Redis, GraphQL, Microservices, AWS, GCP, Azure, Terraform, CI/CD, Monitoring, Optimization, Security
- Rotation: The daily schedule cycles through the skill list by day of the year, so the same calendar date maps to the same skill every year
- Focus: Articles are filtered for relevance to today's skill
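The rotation can be sketched as a simple day-of-year lookup. This is illustrative: the actual mapping lives in `src/services/skills.py`, and the list contents here are assumed from the feature list above.

```python
from datetime import date

# 15 skills, in assumed rotation order
SKILLS = [
    "FastAPI", "Kubernetes", "Docker", "PostgreSQL", "Redis",
    "GraphQL", "Microservices", "AWS", "GCP", "Azure",
    "Terraform", "CI/CD", "Monitoring", "Optimization", "Security",
]

def skill_for_day(today: date) -> str:
    """Map the day of the year onto the skill list, wrapping around.

    Because the index depends only on the calendar date, the same date
    selects the same skill each year.
    """
    return SKILLS[(today.timetuple().tm_yday - 1) % len(SKILLS)]
```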
Every day at the configured time:
- Fetches RSS feeds from HackerNews, Medium, Dev.to
- Filters articles by today's skill
- Stores new articles in PostgreSQL
- Deduplicates to avoid re-processing
- Collects ~15-25 articles daily
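The filtering and deduplication steps above might look like this minimal sketch (function names and the dict-based article shape are illustrative, not the project's actual API):

```python
def filter_by_skill(articles: list[dict], skill: str) -> list[dict]:
    """Keep only articles whose title or summary mentions today's skill."""
    needle = skill.lower()
    return [
        a for a in articles
        if needle in a.get("title", "").lower()
        or needle in a.get("summary", "").lower()
    ]

def dedupe_by_url(articles: list[dict], known_urls: set[str]) -> list[dict]:
    """Drop articles whose URL has already been stored."""
    fresh = []
    for a in articles:
        if a["url"] not in known_urls:
            known_urls.add(a["url"])
            fresh.append(a)
    return fresh
```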
- Fetches article content via web scraping
- Generates 2-3 sentence summaries using Ollama
- Extracts 3-5 technical keywords
- Stores results in database
- Processing a single article takes 2-5 seconds, depending on the model
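Under the hood this runs against Ollama's documented `/api/generate` endpoint. A hedged sketch, assuming the default base URL from `.env` (the prompt wording and helper names are illustrative; the project's actual calls live in `src/services/ollama_service.py`):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # OLLAMA_BASE_URL from .env

def build_generate_request(text: str, model: str = "mistral") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": f"Summarize this article in 2-3 sentences:\n\n{text}\n\nSummary:",
        "stream": False,  # return one JSON object instead of a token stream
    }

def summarize(text: str, model: str = "mistral") -> str:
    """Send the request and return the generated summary (needs Ollama running)."""
    body = json.dumps(build_generate_request(text, model)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()
```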
If Slack is enabled:
- Formats top articles with summaries
- Includes today's skill focus
- Sends a formatted report to the configured Slack channel
- Happens immediately after article processing completes
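Slack incoming webhooks accept a simple `{"text": ...}` payload with mrkdwn formatting. A sketch of how the report might be assembled and posted (the report layout and helper names are illustrative; the real formatting lives in `src/services/slack_service.py`):

```python
import json
import urllib.request

def build_slack_report(skill: str, articles: list[dict], limit: int = 5) -> dict:
    """Build an incoming-webhook payload listing the day's top articles."""
    lines = [f"*TechBrief daily report: {skill}*"]
    for a in articles[:limit]:
        # <url|title> is Slack's mrkdwn link syntax
        lines.append(f"- <{a['url']}|{a['title']}>: {a['summary']}")
    return {"text": "\n".join(lines)}

def send_to_slack(webhook_url: str, payload: dict) -> None:
    """POST the payload to the Slack webhook URL."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```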
- Query articles by date, source, keywords
- View full summaries and metadata
- Get aggregate statistics
- Manual research trigger
```bash
curl http://localhost:8000/api/research/articles/today | jq '.[] | {title, summary: .ai_summary_short, keywords}'
```

```json
{
  "id": 1,
  "source": "hacker_news",
  "title": "Building Production-Ready Technical Systems",
  "url": "https://news.ycombinator.com/item?id=39123456",
  "ai_summary_short": "Article discusses best practices for deploying ML models in production, covering containerization, monitoring, and scaling strategies...",
  "keywords": "backend, ML, deployment, infrastructure",
  "relevance_score": 85,
  "created_at": "2024-01-15T09:05:00Z",
  "processed_at": "2024-01-15T09:12:30Z"
}
```

```bash
curl http://localhost:8000/api/research/stats | jq .
```

Response:

```json
{
  "total_articles": 156,
  "summarized_count": 145,
  "average_relevance_score": 72.5,
  "top_keywords": [
    "backend",
    "distributed-systems",
    "kubernetes",
    "microservices",
    "performance"
  ],
  "latest_session_date": "2024-01-15T09:15:30Z",
  "articles_today": 12
}
```

```bash
curl -X POST http://localhost:8000/api/research/run-research
```

Response:

```json
{
  "message": "Research execution started",
  "session_id": 42,
  "status": "running"
}
```

Use cli.py for easy management:
```bash
chmod +x cli.py

# First time setup
./cli.py init

# Start/Stop services
./cli.py start
./cli.py stop

# Run research
./cli.py research

# View data
./cli.py articles
./cli.py stats
./cli.py health

# Logs
./cli.py logs backend
./cli.py logs postgres
./cli.py logs ollama

# Model management
./cli.py list-models
./cli.py pull-model llama2

# Database access
./cli.py shell-db

# Slack integration
./cli.py test-slack              # Test with default skill (FastAPI)
./cli.py test-slack Kubernetes   # Test with specific skill

# Cleanup
./cli.py clean
```

View service logs:

```bash
# Backend
docker-compose logs -f backend

# Database
docker-compose logs -f postgres

# Model
docker-compose logs -f ollama
```

Open a database shell:

```bash
docker exec -it techbrief_db psql -U postgres -d techbrief_db
```
```sql
-- View articles
SELECT id, title, ai_summary_short FROM research_articles LIMIT 5;

-- View sessions
SELECT * FROM research_sessions ORDER BY session_date DESC;
```

Inspect models:

```bash
# List models
docker exec techbrief_ollama ollama list

# View model details
docker exec techbrief_ollama ollama show mistral
```

Stop all services:

```bash
docker-compose down
```

Stop and delete all data (volumes):

```bash
docker-compose down -v
```

Verify the pipeline end to end:

```bash
# 1. Check health
curl http://localhost:8000/api/research/health

# 2. Get initial state
curl http://localhost:8000/api/research/stats | jq '.total_articles'

# 3. Trigger research
curl -X POST http://localhost:8000/api/research/run-research

# 4. Wait 1-2 minutes for processing...

# 5. Check results
curl http://localhost:8000/api/research/articles/today | jq 'length'

# 6. View statistics
curl http://localhost:8000/api/research/stats | jq '.summarized_count'
```

Ollama not responding:

```bash
# Check container status
docker ps | grep ollama

# View logs
docker logs techbrief_ollama

# Restart
docker-compose restart ollama

# Test health
curl http://localhost:11434/api/tags
```

Database issues:

```bash
# Check PostgreSQL
docker logs techbrief_db

# Verify connection
docker exec techbrief_db pg_isready

# Restart
docker-compose restart postgres
```

No articles appearing:

```bash
# Check backend logs
docker-compose logs backend | tail -50

# Manually trigger research
curl -X POST http://localhost:8000/api/research/run-research

# Monitor processing
docker-compose logs -f backend

# Check session status
curl http://localhost:8000/api/research/sessions | jq '.[0]'
```

Slow AI processing? Try a smaller model:

```bash
# Remove current model
docker exec techbrief_ollama ollama rm mistral

# List available models
./cli.py list-models

# Pull smaller model
./cli.py pull-model tinyllama
```

Update .env:

```
OLLAMA_MODEL=tinyllama
```

Restart the backend:

```bash
docker-compose restart backend
```

Running out of disk space:

```bash
# Check Docker disk usage
docker system df

# Clean unused images
docker image prune -a

# Remove all containers and data
docker-compose down -v
```
```bash
docker system prune -a
```

Project structure:

```
TechBrief/
├── src/
│   ├── main.py                    # FastAPI app entry point
│   ├── config.py                  # Configuration management
│   ├── models/
│   │   ├── database_models.py     # SQLAlchemy ORM models
│   │   ├── database.py            # DB connection & session
│   │   └── schemas.py             # Pydantic API schemas
│   ├── services/
│   │   ├── ollama_service.py      # AI model integration
│   │   ├── news_aggregator.py     # News fetching & processing
│   │   ├── slack_service.py       # Slack webhook integration
│   │   └── skills.py              # Skill rotation system
│   ├── api/
│   │   └── routes.py              # FastAPI routes/endpoints
│   └── schedulers/
│       └── daily_research.py      # APScheduler jobs
│
├── docker-compose.yml             # Container orchestration
├── Dockerfile                     # Backend image definition
├── requirements.txt               # Python dependencies
├── .env.example                   # Configuration template
├── cli.py                         # Management CLI tool
├── QUICKSTART.md                  # 5-minute setup guide
└── README.md                      # This file
```
```sql
CREATE TABLE research_articles (
    id INT PRIMARY KEY,
    source VARCHAR(50),            -- hacker_news, medium, dev_to
    title VARCHAR(500),
    url VARCHAR(1000) UNIQUE,
    original_content TEXT,         -- Full article text
    ai_summary TEXT,               -- Full AI summary
    ai_summary_short VARCHAR(500), -- Short version for API
    keywords VARCHAR(500),         -- Comma-separated
    relevance_score INT,           -- 0-100
    is_relevant BOOLEAN,
    created_at TIMESTAMP,
    published_at TIMESTAMP,
    processed_at TIMESTAMP         -- When AI processed it
);
```

```sql
CREATE TABLE research_sessions (
    id INT PRIMARY KEY,
    session_date TIMESTAMP,
    articles_collected INT,        -- How many fetched
    articles_summarized INT,       -- How many processed
    execution_time_seconds INT,    -- Total time
    status VARCHAR(20),            -- pending, running, completed, failed
    error_message TEXT,
    skill_focus VARCHAR(100)       -- Today's skill (e.g., 'FastAPI', 'Kubernetes')
);
```

Edit src/services/news_aggregator.py:
```python
@staticmethod
async def fetch_from_my_source(db: Session, limit: int = 10) -> List[ResearchArticle]:
    """Fetch from my custom source"""
    articles = []
    try:
        # Your aggregation logic
        response = requests.get("https://my-source.com/news")
        data = response.json()
        for item in data:
            # Deduplicate by URL against what is already stored
            existing = db.query(ResearchArticle).filter(
                ResearchArticle.url == item['url']
            ).first()
            if not existing:
                article = ResearchArticle(
                    source="my_source",
                    title=item['title'],
                    url=item['url'],
                )
                articles.append(article)
            if len(articles) >= limit:
                break
    except Exception as e:
        logger.error(f"Error: {e}")
    return articles
```

Then register it in aggregate_daily():

```python
my_articles = await NewsAggregator.fetch_from_my_source(db)
articles_to_add.extend(my_articles)
```

Edit src/services/news_aggregator.py in process_articles_with_ai():
```python
# Change summarization prompt
prompt = f"""Please provide a 1 sentence summary focusing on backend implications:

Text: {text}

Summary:"""

summary = ollama_service.summarize_sync(content, prompt=prompt)
```

Or edit src/services/ollama_service.py to customize prompts globally.
Edit .env:

```
RESEARCH_SCHEDULE_HOUR=22      # 10 PM
RESEARCH_SCHEDULE_MINUTE=30
```

Restart:

```bash
docker-compose restart backend
```

MIT License - Free to use and modify.
- ✅ Setup: `./cli.py init` or follow QUICKSTART.md
- 📚 Learn: Check API docs at http://localhost:8000/docs
- 🔍 Browse: View research results via API or CLI
- ⚙️ Customize: Edit `.env` and sources in `src/services/`
- 📊 Monitor: Check dashboard stats and logs
TechBrief - Your autonomous AI research assistant, running locally. 🧠 ✨
For questions or issues, check the troubleshooting section above or review the relevant source files.