Search papers · Extract insights · Deliver daily digests · All for free
- What This Does
- What is NanoClaw?
- How It Works
- Prerequisites
- Installing NanoClaw Dependencies
- Getting API Keys
- Installation
- Configuration
- Running the Agent
- Using the Telegram Bot
- 📸 Screenshots
- Project Structure
- Customization
- Troubleshooting
- SysAdmin Best Practices
- FAQ
This project is a NanoClaw-powered, fully autonomous AI research assistant that runs 24/7 on your Linux machine — even a low-spec laptop. It:
- Searches arXiv for the latest AI/ML papers on any topic you request
- Summarizes each paper using a free cloud LLM (via OpenRouter), extracting key ideas and limitations
- Delivers results to Telegram via a clean, structured bot interface
- Sends a daily 7AM digest automatically — no interaction required
- Runs as a hardened systemd service with auto-restart on failure
- Runs isolated inside a Docker container (managed by NanoClaw) for security and reproducibility
💡 Zero cost to run. Uses the OpenRouter free tier (no credit card needed) and the arXiv public API.
NanoClaw is an open-source, lightweight personal AI agent framework designed to give you full control and ownership of your AI assistant. This repo uses NanoClaw as its runtime environment.
| Feature | What It Means For You |
|---|---|
| Minimal footprint | A single Node.js process — no bloated framework overhead |
| Container isolation | Agent sessions run inside Docker containers for filesystem-level security |
| Multi-platform messaging | Native support for Telegram, WhatsApp, Slack, Discord, and Signal |
| Skill system | Extend the agent's behavior by writing skills in plain text |
| Claude Agent SDK | Powered by Anthropic's Claude under the hood |
| AI-native setup | The /setup skill command guides you through the entire configuration interactively |
```
[NanoClaw Runtime] ───── Node.js 20+ process
        │
        ├── Docker Container ───── Isolated agent session
        │        │
        │        └── Python venv ───── agent.py + scheduler.py
        │
        └── Claude Agent SDK ───── Skill execution, LLM calls
```
NanoClaw handles the container lifecycle, messaging platform integration, and skill routing. The Python scripts (agent.py, scheduler.py) handle the arXiv search and OpenRouter summarization logic.
```
You (Telegram)
      │
      ▼
[NanoClaw Runtime] ───── Node.js 20+ / Claude Agent SDK
      │
      ▼
[Docker Container] ───── Isolated agent session (filesystem security)
      │
      ▼
[Telegram Bot] ───── python-telegram-bot library
      │
      ▼
[agent.py] ──────► arXiv API ───► Latest papers (title, abstract, PDF)
      │
      ▼
[LangChain + OpenRouter] ───► Free Cloud LLM (Qwen, etc.)
      │
      ▼
Structured JSON: { key_ideas, limitations }
      │
      ▼
Formatted Telegram Message ─────► You 📱
─────────────────────────────────────────
[scheduler.py] ──────► Checks time every 60s
      │
      ▼ (at 07:00 daily)
Calls search_papers() + extract_insights()
      │
      ▼
Sends digest directly to your TELEGRAM_CHAT_ID
```
Two services run in parallel:
- `agent.py` — Responds to your `/research` commands interactively
- `scheduler.py` — Sends an automated 7AM digest without any input from you
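The scheduler's time gate can be sketched in a few lines of Python. This is a simplified illustration of the polling loop described above; helper names other than the hour/minute check are not taken from the repo:

```python
# Simplified sketch of the scheduler loop: wake up every 60 seconds and
# fire the digest exactly once when the trigger minute arrives.
import time
from datetime import datetime

def should_send_digest(now: datetime, hour: int = 7) -> bool:
    # Fire only at the top of the target hour (07:00 by default).
    return now.hour == hour and now.minute == 0

def run_scheduler(send_digest, max_ticks: int) -> None:
    # Poll the clock; after sending, sleep past the trigger minute so the
    # digest is not sent twice in the same minute.
    for _ in range(max_ticks):
        if should_send_digest(datetime.now()):
            send_digest()
            time.sleep(61)
        else:
            time.sleep(60)
```

The real `scheduler.py` additionally calls `search_papers()` and `extract_insights()` and pushes the result to your `TELEGRAM_CHAT_ID`.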
Before you begin, make sure you have the following installed:
| Requirement | Version | Purpose | Check Command |
|---|---|---|---|
| Python | 3.12+ | Runs agent.py & scheduler.py | python3 --version |
| pip | Latest | Installs Python packages | pip --version |
| git | Any | Cloning the repo | git --version |
| systemd | Any | Production service management | systemctl --version |
| Node.js | 20+ | NanoClaw runtime | node --version |
| npm | 10+ | NanoClaw package manager | npm --version |
| Docker | Any | NanoClaw container isolation | docker --version |
| Claude Code | Latest | NanoClaw setup & skill runner | claude --version |
Install Python 3.12+ if needed:

```bash
# Debian / Ubuntu
sudo apt update && sudo apt install python3.12 python3.12-venv python3-pip -y

# Arch / EndeavourOS
sudo pacman -S python python-pip

# Fedora
sudo dnf install python3.12 python3-pip
```

NanoClaw requires Node.js 20+, Docker, and the Claude Code CLI. Install them in this order.
NanoClaw runs on a single Node.js process. Version 20 or later is required (22+ recommended).
```bash
# ── Debian / Ubuntu ──────────────────────────────────
# Using NodeSource (official recommended method)
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs

# Verify
node --version   # Should show v22.x.x or higher
npm --version    # Should show 10.x.x or higher
```

```bash
# ── Arch / EndeavourOS ───────────────────────────────
sudo pacman -S nodejs npm

# Verify
node --version
npm --version
```

```bash
# ── Fedora ───────────────────────────────────────────
sudo dnf module install nodejs:22

# Verify
node --version
npm --version
```

```bash
# ── Any Linux (using nvm — recommended for flexibility) ──
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
source ~/.bashrc   # or ~/.zshrc
nvm install 22
nvm use 22

# Verify
node --version
```

💡 Using nvm (Node Version Manager) is the most flexible approach — it lets you switch Node versions without affecting system packages.
NanoClaw uses Docker to run agent sessions in isolated containers, preventing the agent from accessing unintended parts of your filesystem.
```bash
# ── Debian / Ubuntu ──────────────────────────────────
# Install Docker Engine (official script)
curl -fsSL https://get.docker.com | sudo sh

# Add your user to the docker group (so you don't need sudo)
sudo usermod -aG docker $USER
newgrp docker   # Apply group change immediately

# Start and enable Docker service
sudo systemctl enable --now docker

# Verify
docker --version
docker run hello-world   # Should pull and run a test container
```

```bash
# ── Arch / EndeavourOS ───────────────────────────────
sudo pacman -S docker
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker

# Verify
docker --version
```

```bash
# ── Fedora ───────────────────────────────────────────
sudo dnf install docker
sudo systemctl enable --now docker
sudo usermod -aG docker $USER
newgrp docker

# Verify
docker --version
```
⚠️ Important: After adding yourself to the `docker` group, you may need to log out and log back in for the change to fully apply. The `newgrp docker` command is a temporary workaround for the current session.
Claude Code is the CLI used to manage NanoClaw agents, run skills (like /setup), and interact with the Claude Agent SDK.
Prerequisites for Claude Code:
- Node.js 20+ (installed above)
- An Anthropic account (free to sign up)
```bash
# Install Claude Code globally via npm
npm install -g @anthropic-ai/claude-code

# Verify installation
claude --version
```

Authenticate Claude Code:

```bash
# Start the authentication flow
claude

# Follow the prompts to log in with your Anthropic account
# or set your API key:
export ANTHROPIC_API_KEY=sk-ant-your-key-here
```

💡 Free Tier: Claude Code has a free usage tier. You do not need a paid Anthropic plan to use NanoClaw's basic features.

Verify NanoClaw can run:

```bash
# After cloning this repo, run the NanoClaw setup skill
# (This guides you through container setup interactively)
claude
# Then inside claude: /setup
```

You need 3 things before configuring the project. All are free.
OpenRouter gives you access to many LLMs (including Qwen, Mistral, Llama) without needing a paid subscription.
Steps:
- Go to https://openrouter.ai
- Click Sign Up (you can use Google or GitHub)
- Navigate to API Keys in your dashboard
- Click Create Key — give it a name like `ai-research-agent`
- Copy the key — it looks like: `sk-or-v1-xxxxxxxxxxxxxxxxxxxx`
⚠️ Free tier limits: The free models (e.g. `qwen/qwen3.6-plus:free`) have rate limits. This agent is designed to work within those limits by adding delays between requests.
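For orientation, the key is used against OpenRouter's OpenAI-compatible chat endpoint. The helper below is a hypothetical sketch of the request shape; `agent.py` itself goes through LangChain, which issues an equivalent request under the hood:

```python
# Sketch of the HTTP request shape for OpenRouter's OpenAI-compatible
# chat API. The helper name is hypothetical, not from agent.py.
import json

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_chat_request(api_key: str, model: str, prompt: str):
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return OPENROUTER_URL, headers, body
```

Sending it is a single `requests.post(url, headers=headers, data=body)`; free-tier models are selected by the `:free` suffix in the model name.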
Steps:
- Open Telegram and search for `@BotFather`
- Start a chat and send: `/newbot`
- Choose a name for your bot (e.g. `My Research Bot`)
- Choose a username — must end in `bot` (e.g. `myresearch_bot`)
- BotFather will reply with your token — looks like: `123456789:AAFxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
- Copy and save this token
The scheduler needs to know which chat to send the daily digest to.
Steps:
- Start a conversation with your new bot on Telegram (send `/start`)
- Now visit this URL in your browser (replace `YOUR_TOKEN` with your bot token): `https://api.telegram.org/botYOUR_TOKEN/getUpdates`
- Look for `"chat":{"id":` in the JSON response
- The number after `"id":` is your Chat ID (e.g. `987654321`)
💡 Tip: If the response is empty, send your bot any message first, then reload the URL.
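If you prefer to script the lookup, a small illustrative helper (not part of this repo) can parse the getUpdates response:

```python
# Illustrative helper (not part of this repo): pull the chat ID out of the
# JSON returned by https://api.telegram.org/bot<TOKEN>/getUpdates
def extract_chat_id(updates: dict):
    # Response shape: {"ok": true, "result": [{"message": {"chat": {"id": ...}}}]}
    for update in updates.get("result", []):
        chat = update.get("message", {}).get("chat", {})
        if "id" in chat:
            return chat["id"]
    return None
```

Fetch the URL with `urllib.request.urlopen` or `curl`, feed the parsed JSON to the helper, and put the returned number in `TELEGRAM_CHAT_ID`.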
```bash
git clone https://github.com/bmo1177/ai-research-agent.git
cd ai-research-agent
```

The `setup.sh` script handles everything automatically:

```bash
chmod +x setup.sh
./setup.sh
```

This script will:
- ✅ Create a Python virtual environment (`venv/`)
- ✅ Upgrade `pip`, `setuptools`, and `wheel`
- ✅ Install all dependencies from `requirements.txt`
- ✅ Copy `.env.example` → `.env` (if `.env` doesn't exist yet)
- ✅ Lock `.env` permissions to `chmod 600` (owner-only read/write)
Expected output:
```
🚀 Setting up AI Research Agent...
📦 Creating virtual environment...
📥 Installing Python dependencies...
⚙️ Creating .env from template...
🔐 Please edit .env with your API keys before continuing!
✅ Dependencies installed.

📋 Next steps:
  1. Edit .env with your OpenRouter & Telegram credentials
  2. Run: systemctl --user daemon-reexec
  3. Enable services: systemctl --user enable ai-research.service ai-research-scheduler.service
  4. Start: systemctl --user start ai-research.service ai-research-scheduler.service
```
Open `.env` in your editor:

```bash
nano .env
# or
vim .env
```

Fill in your credentials:
```bash
# ========================
# OpenRouter API (Free Tier)
# ========================
OPENROUTER_API_KEY=sk-or-v1-your-actual-key-here
OPENAI_BASE_URL=https://openrouter.ai/api/v1
DEFAULT_MODEL=qwen/qwen3.6-plus:free

# ========================
# Telegram Bot Configuration
# ========================
TELEGRAM_BOT_TOKEN=123456789:AAFxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
TELEGRAM_CHAT_ID=987654321
```

Save and secure the file:

```bash
chmod 600 .env
```

To use a different free model from OpenRouter, update `DEFAULT_MODEL`. Some options:
| Model | Value | Notes |
|---|---|---|
| Qwen 3.6 Plus (default) | `qwen/qwen3.6-plus:free` | Fast, good quality |
| Mistral 7B | `mistralai/mistral-7b-instruct:free` | Reliable |
| Llama 3 8B | `meta-llama/llama-3-8b-instruct:free` | Open-source |
| Gemma 3 | `google/gemma-3-27b-it:free` | Google's model |
Browse all free models at openrouter.ai/models.
There are three ways to run this project. Start with Option A to test everything works, then move to Option C for production.
Run the bot directly in your terminal — good for verifying credentials work.

```bash
# Activate the virtual environment
source venv/bin/activate

# Start the bot
python agent.py
```

You should see:

```
✅ AI Research Agent started! Model: qwen/qwen3.6-plus:free
```
Open Telegram, send `/start` to your bot, and test with `/research transformers`.

To stop: `Ctrl + C`
Run the bot detached from your terminal — it keeps running after you close the SSH session or terminal.
```bash
source venv/bin/activate

# Start agent in background
nohup python agent.py > logs/agent.log 2>&1 &
echo "Agent PID: $!"

# Start scheduler in background
nohup python scheduler.py > logs/scheduler.log 2>&1 &
echo "Scheduler PID: $!"
```

Check logs:

```bash
tail -f logs/agent.log
tail -f logs/scheduler.log
```

Stop the processes:

```bash
pkill -f "python agent.py"
pkill -f "python scheduler.py"
```
⚠️ These will stop if the machine reboots. Use Option C for true persistence.
This runs the agent as a proper Linux service — starts on boot, restarts on crash, logs to journald.
```bash
mkdir -p ~/.config/systemd/user
ln -sf "$(pwd)/systemd/ai-research.service" ~/.config/systemd/user/ai-research.service
ln -sf "$(pwd)/systemd/ai-research-scheduler.service" ~/.config/systemd/user/ai-research-scheduler.service
```

```bash
systemctl --user daemon-reexec
systemctl --user daemon-reload
systemctl --user enable ai-research.service
systemctl --user enable ai-research-scheduler.service
```

```bash
systemctl --user start ai-research.service
systemctl --user start ai-research-scheduler.service
```

```bash
systemctl --user status ai-research.service
systemctl --user status ai-research-scheduler.service
```

You should see `Active: active (running)` in green.
By default, user services only run after you log in. To make them start at boot (even on a headless server):
```bash
sudo loginctl enable-linger $USER
```

This is required if you're running on a VPS or server where you don't interactively log in.
Once the agent is running, open Telegram and chat with your bot.
| Command | Description | Example |
|---|---|---|
| `/start` | Initialize the bot and show help | `/start` |
| `/help` | Show available commands | `/help` |
| `/research [topic]` | Search arXiv and summarize recent papers | `/research mixture of experts` |
| `/daily` | Manually trigger today's research digest | `/daily` |
```
You: /research vision transformers

Bot: 🔍 Searching arXiv for: vision transformers
⏳ Please wait 30-60 seconds...

Bot: 📄 Analyzing 1/4...

Bot: 📌 Efficient Vision Transformer with Sparse Attention

👥 Authors: Li Wei, Zhang Hui, Chen Ming
🔖 arXiv: 2401.12345
📅 Published: 2024-01-15

💡 Key Ideas:
• Sparse attention mechanism reduces quadratic complexity
• Achieves 94.2% accuracy on ImageNet with 40% less compute
• Compatible with standard ViT architectures

⚠️ Limitations:
• Only tested on image classification, not detection
• Requires specific hardware for sparse ops

📎 PDF Link
---
```
The `/research` command accepts any topic string:

```
/research reinforcement learning from human feedback
/research protein structure prediction
/research energy-efficient neural networks
/research federated learning privacy
/research code generation LLM
```
The agent builds an arXiv query targeting the `cs.AI` and `cs.LG` categories, sorted by newest submissions.
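That query construction boils down to a call to the arXiv export API. The sketch below is illustrative; the function name and exact query string in `search_papers()` may differ:

```python
# Sketch of the arXiv export API query the agent builds. The exact query
# string in agent.py's search_papers() may differ.
from urllib.parse import urlencode

def build_arxiv_query_url(topic: str, max_results: int = 4) -> str:
    params = {
        "search_query": f"(cat:cs.AI OR cat:cs.LG) AND all:{topic}",
        "sortBy": "submittedDate",   # newest submissions first
        "sortOrder": "descending",
        "start": 0,
        "max_results": max_results,
    }
    return "http://export.arxiv.org/api/query?" + urlencode(params)
```

The endpoint returns an Atom feed with each paper's title, abstract, authors, and PDF link, which is what the bot formats into Telegram messages.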
Real screenshots of the bot running in Telegram:
💡 Tip: Images are loaded from Google Drive. If they don't render in your browser, make sure the files are set to "Anyone with the link can view" in Google Drive sharing settings.
```
ai-research-agent/
│
├── README.md                 # This file
├── LICENSE                   # MIT License
├── .gitignore                # Excludes secrets, venv, data
├── .env.example              # Template — copy to .env and fill in
├── requirements.txt          # Python dependencies
├── setup.sh                  # One-command installer
│
├── agent.py                  # Main Telegram bot
│   ├── search_papers()       # Queries arXiv API
│   ├── extract_insights()    # Calls LLM via LangChain
│   ├── handle_research()     # /research command handler
│   ├── handle_daily()        # /daily command handler
│   └── start_command()       # /start + /help handler
│
├── scheduler.py              # Daily 7AM digest service
│   └── send_daily_digest()   # Sends papers to TELEGRAM_CHAT_ID
│
└── systemd/
    ├── ai-research.service             # Systemd unit for agent.py
    └── ai-research-scheduler.service   # Systemd unit for scheduler.py
```
In `scheduler.py`, find and edit this line:

```python
topic = "efficient large language models"
```

Change it to any research topic you're interested in:

```python
topic = "multimodal language models"
# or
topic = "reinforcement learning robotics"
# or
topic = "quantum machine learning"
```

In `scheduler.py`, find:
```python
if now.hour == 7 and now.minute == 0:
```

Change `7` to any hour (0–23) in 24-hour format:

```python
if now.hour == 9 and now.minute == 0:    # 9:00 AM
if now.hour == 18 and now.minute == 0:   # 6:00 PM
```

In `agent.py`, the `search_papers()` function accepts `max_results`:

```python
def search_papers(topic: str, max_results: int = 4):
```

Change the default or pass a different value when calling it.
By default, results are from `cs.AI` and `cs.LG`. Modify the query in `search_papers()`:

```python
query=f"cs.AI OR cs.LG AND ({topic})",
```

Other useful categories:

- `cs.CV` — Computer Vision
- `cs.CL` — Computation and Language (NLP)
- `cs.RO` — Robotics
- `stat.ML` — Machine Learning (statistics)
- `cs.NE` — Neural and Evolutionary Computing
```bash
# 1. Check if agent.py is running
systemctl --user status ai-research.service

# 2. Check logs for errors
journalctl --user -u ai-research.service -n 50 --no-pager

# 3. Make sure .env has the correct TELEGRAM_BOT_TOKEN
cat .env
```

`telegram.error.Conflict: Conflict: terminated by other getUpdates request`
This means two instances of agent.py are running at the same time.
```bash
# Kill all running instances
pkill -f "python agent.py"

# Restart via systemd
systemctl --user restart ai-research.service
```

`telegram.error.Unauthorized: Unauthorized`
Your `TELEGRAM_BOT_TOKEN` is wrong or expired.

```bash
# Verify the token
cat .env | grep TELEGRAM_BOT_TOKEN
```

Go back to `@BotFather` on Telegram → `/mybots` → select your bot → API Token to get a fresh token.
```bash
# Check logs
journalctl --user -u ai-research.service -f --no-pager

# Verify the API key
cat .env | grep OPENROUTER_API_KEY
```

- Make sure the key starts with `sk-or-v1-`
- Check your usage at openrouter.ai/activity
- Try a different free model in `.env` (`DEFAULT_MODEL=mistralai/mistral-7b-instruct:free`)
```bash
# 1. Check scheduler is running
systemctl --user status ai-research-scheduler.service

# 2. Check TELEGRAM_CHAT_ID is set
cat .env | grep TELEGRAM_CHAT_ID

# 3. Manually trigger the digest to test
source venv/bin/activate
python -c "import asyncio; from scheduler import send_daily_digest; asyncio.run(send_daily_digest())"
```

Enable linger so user services survive without a login session:

```bash
sudo loginctl enable-linger $USER

# Verify
loginctl show-user $USER | grep Linger
# Should show: Linger=yes
```

```bash
# Follow both services in real time (split terminal or tmux)
journalctl --user -u ai-research.service -f --no-pager &
journalctl --user -u ai-research-scheduler.service -f --no-pager
```

This project implements production-grade Linux practices out of the box:
| Practice | Implementation |
|---|---|
| Secret Management | .env.example for safe commits; .env excluded by .gitignore; chmod 600 .env; EnvironmentFile= in systemd (never exposed to process list) |
| Process Isolation | `NoNewPrivileges=true` → service cannot gain elevated privileges |
| Filesystem Protection | `ProtectSystem=full` → `/usr`, `/boot`, `/etc` are read-only to the service |
| Temp Directory Isolation | `PrivateTmp=true` → service gets its own `/tmp`, isolated from system |
| Write Scope Limiting | `ReadWritePaths=` restricts writes to the project directory only |
| Resilience | `Restart=on-failure` + `RestartSec=15` → auto-recovers from crashes |
| Structured Logging | `StandardOutput=journal` → all logs are queryable via `journalctl` |
| User-Level Services | Runs in user slice, no sudo required, unaffected by system updates |
| Boot Persistence | loginctl enable-linger for headless/server boot startup |
| Token Conflict Prevention | Single systemd-managed instance + drop_pending_updates=True in bot |
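These directives live in the unit files under `systemd/`. A representative sketch of what such a unit can look like (paths and exact values are illustrative; check the shipped `ai-research.service` for the real ones):

```ini
[Unit]
Description=AI Research Agent (Telegram bot)

[Service]
# Illustrative paths -- the shipped unit may differ
WorkingDirectory=%h/ai-research-agent
EnvironmentFile=%h/ai-research-agent/.env
ExecStart=%h/ai-research-agent/venv/bin/python agent.py
Restart=on-failure
RestartSec=15
NoNewPrivileges=true
ProtectSystem=full
PrivateTmp=true
ReadWritePaths=%h/ai-research-agent
StandardOutput=journal

[Install]
WantedBy=default.target
```

`%h` expands to the user's home directory, which keeps the unit portable across accounts.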
Q: Does this cost anything? No. The OpenRouter free tier, arXiv API, and Telegram Bot API are all free with no credit card required.
Q: What hardware does this need? Very little. It runs fine on a 5th gen i3 with 8GB RAM. It uses ~50MB RAM at idle.
Q: Can I run this on a Raspberry Pi or VPS? Yes. Any Linux machine with Python 3.12+ works. A $5/month VPS (e.g. Hetzner, DigitalOcean) runs this perfectly.
Q: Does it work on Windows or macOS?
The Python agent (agent.py and scheduler.py) works cross-platform. The systemd setup is Linux-only. On macOS/Windows you can use nohup or a process manager like pm2.
Q: Can I search topics other than AI? Yes. The arXiv query can be changed to any category or keyword. See the Customization section.
Q: The bot is slow — it takes 30-60 seconds to respond. Is that normal?
Yes. The delay is intentional: the arXiv API asks clients to pause between requests (the agent uses `delay_seconds=3.0`), and cloud LLM inference adds its own latency. The agent notifies you it's working while it processes.
Q: Can multiple users use the same bot? Yes β any Telegram user who has the bot link can send commands. If you want it private, check Telegram's privacy settings for bots.
Q: How do I update the project?
```bash
git pull origin main
source venv/bin/activate
pip install -r requirements.txt
systemctl --user restart ai-research.service ai-research-scheduler.service
```

MIT — free to use, modify, and distribute. See LICENSE.
Built with ❤️ using NanoClaw · Python · LangChain · arXiv · OpenRouter · Telegram · Docker
NanoClaw Docs · OpenRouter Models · arXiv API
Star ⭐ this repo if it helped you!