A CodeCrafters challenge project implementing an AI agent with tool-calling capabilities, built for the "Build Your own Claude Code" Challenge. The agent can:
- Parse and execute user prompts
- Make decisions about which tools to use
- Interact with the file system (read, write)
- Execute shell commands
- Fetch web content
The implementation uses the OpenRouter API with OpenAI's client library and follows the CodeCrafters challenge requirements.
- Interactive CLI: Command-line interface for interacting with the agent
- Tool Calling: Multiple tools for file operations, web fetching, and shell commands
  - `read_file`: Read contents of files
  - `write_file`: Write content to files (with safety checks)
  - `bash`: Execute shell commands
  - `web_fetch`: Retrieve content from URLs
- Skills:
  - `puppeteer`: Browser Automation & Web Scraping
- System Context Loading: Load personality and instructions from the `system/` and `skills/` directories
- Error Handling: Graceful error recovery and user-friendly messages
- Structured Output: Rich terminal formatting for better UX
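As an illustration of the feature list above, the `web_fetch` tool could be sketched roughly as follows. The function name comes from the list; the implementation (stdlib `urllib`, errors returned as strings so the agent can read them) is an assumption, not the project's actual code.

```python
import urllib.request


def web_fetch(url: str, timeout: float = 10.0) -> str:
    """Sketch of a web-fetch tool: return the page body, or an error string."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except Exception as e:
        # Failures become tool output the model can react to, not crashes.
        return f"Error: {e}"
```

Returning errors as plain strings (rather than raising) keeps the agent loop simple: every tool call produces a message the model can reason about.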
- Python 3.12 or higher
- `uv` package manager (recommended) or pip
- OpenRouter API key
```sh
# Install dependencies
uv sync

# Activate the virtual environment
source .venv/bin/activate   # On Linux/Mac
# or
.venv\Scripts\activate      # On Windows
```

Alternatively, install with pip:

```sh
pip install -r requirements.txt
```

Create a `.env` file in the project root with your OpenRouter credentials:
```sh
LLM_API_KEY=your_api_key_here
LLM_BASE_URL=https://openrouter.ai/api/v1  # Optional, this is the default
LLM_MODEL=deepseek/deepseek-v3.2
```

Note: The `LLM_API_KEY` is required. You can obtain one from OpenRouter.
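Resolving these variables with the defaults described above might look like the sketch below. `llm_settings` is a hypothetical helper, not a function from this repo; it only mirrors the `.env` keys and defaults documented here.

```python
import os


def llm_settings() -> dict:
    """Hypothetical helper: resolve LLM settings from the environment."""
    api_key = os.environ.get("LLM_API_KEY")
    if not api_key:
        raise RuntimeError("LLM_API_KEY is required; see the .env example above")
    return {
        "api_key": api_key,
        # Both defaults below are the ones stated in the .env example.
        "base_url": os.environ.get("LLM_BASE_URL", "https://openrouter.ai/api/v1"),
        "model": os.environ.get("LLM_MODEL", "deepseek/deepseek-v3.2"),
    }
```

Failing fast when `LLM_API_KEY` is missing gives a clearer error than letting the first API call fail with an authentication message.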
The preferred way to run the agent locally:
```sh
./run.sh --help
```

```
usage: python3 -m app.main [-h] -p PROMPT [--auto-approve] [--no-repl] [--max-iterations N] [--silent]

options:
  -h, --help           show this help message and exit
  -p, --prompt PROMPT  The initial prompt for the agent
  --auto-approve       Allow the agent to call tools without asking for permission
  --no-repl            Run the agent with the initial prompt and then exit without starting the REPL
  --max-iterations N   The maximum number of iterations the agent will run before stopping (default: 100)
  --silent             Suppress all output except the final response (implies --auto-approve --no-repl)
```

```sh
./run.sh -p "Your prompt here"
./run.sh --no-repl -p "List all Python files in the current directory"
```

This script sets up the proper PYTHONPATH and environment, then runs `uv run -m app.main`.
You can also invoke the module directly:

```sh
uv run --project . --quiet -m app.main -p "Your prompt here"
```

```sh
$ ./your_program.sh -p "Create a Python script that calculates Fibonacci numbers"
```

The agent will:
- Analyze your request
- Decide which tools to use
- Create the file
- Report back with the results
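The analyze/decide/act cycle above can be sketched as a simple loop. Note the hedging: `ask_model` and `run_tool` are stand-ins injected as callables (the real code wraps the OpenAI client and the project's tool dispatcher), and the message shapes are simplified dicts rather than the client library's response objects.

```python
def agent_loop(ask_model, run_tool, prompt: str, max_iterations: int = 100) -> str:
    """Sketch: ask the model, run any requested tools, feed results back,
    and stop when the model answers without tool calls."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_iterations):
        reply = ask_model(messages)          # one chat-completion round-trip
        tool_calls = reply.get("tool_calls")
        if not tool_calls:
            return reply["content"]          # final answer: report back
        messages.append(reply)
        for call in tool_calls:
            result = run_tool(call["name"], call["arguments"])
            messages.append({"role": "tool", "name": call["name"], "content": result})
    return "Stopped: reached max iterations"
```

The `max_iterations` bound mirrors the `--max-iterations` CLI flag: it guarantees the loop terminates even if the model keeps requesting tools.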
```sh
# Run all tests
uv run pytest

# Run specific test file
uv run pytest tests/test_agent.py

# Run with verbose output
uv run pytest -v

# Run tests with coverage
uv run pytest --cov=app --cov-report=term-missing
```

- Unit Tests: Test individual components and tools
- Integration Tests: Test end-to-end functionality in `tests/integration/`
- Mocking: External API calls are mocked to avoid actual API usage
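A mocked test might look like the sketch below. The `answer` function is an illustrative stand-in for the project's real code under test, not an actual function from this repo; the point is the `MagicMock` wiring that keeps real API calls out of the test run.

```python
from unittest.mock import MagicMock


def answer(client, prompt: str) -> str:
    """Illustrative stand-in for the code under test: one chat round-trip."""
    resp = client.chat.completions.create(
        model="some-model",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def test_answer_uses_mocked_client():
    client = MagicMock()
    # Shape the mock response like the OpenAI client's: .choices[0].message.content
    client.chat.completions.create.return_value.choices = [
        MagicMock(message=MagicMock(content="mocked reply"))
    ]
    assert answer(client, "hi") == "mocked reply"
    client.chat.completions.create.assert_called_once()
```

Injecting the client as a parameter (rather than constructing it inside `answer`) is what makes this kind of mocking trivial.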
To add a new tool:
- Create a new file in `app/tools/` (e.g., `my_tool.py`)
- Define the tool specification and implementation
- Register it in `app/tools/tool_calls.py` (update the `run_tool()` function)
- Add tests if applicable
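The registration step in `app/tools/tool_calls.py` might follow a name-to-function dispatch pattern like the sketch below. The registry shape and the `my_tool` body are hypothetical; only the `run_tool()` name comes from the steps above.

```python
def my_tool(param1: str) -> str:
    """Hypothetical tool implementation used only for this sketch."""
    return f"handled {param1}"


# Hypothetical registry mapping tool names to implementations.
TOOLS = {
    "my_tool": my_tool,
}


def run_tool(name: str, arguments: dict) -> str:
    """Dispatch a tool call by name; unknown tools become error strings."""
    fn = TOOLS.get(name)
    if fn is None:
        return f"Error: unknown tool {name!r}"
    return fn(**arguments)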
Example tool structure:

```python
def my_tool_spec():
    return {
        "type": "function",
        "function": {
            "name": "my_tool",
            "description": "Tool description",
            "parameters": {
                "type": "object",
                "properties": {
                    "param1": {
                        "type": "string",
                        "description": "Parameter description"
                    }
                },
                "required": ["param1"]
            }
        }
    }


def my_tool(param1: str) -> str:
    """Tool implementation."""
    try:
        # Do something
        return "Success message"
    except Exception as e:
        return f"Error: {str(e)}"
```

- API Key Issues: Ensure your `.env` file contains `LLM_API_KEY=your_key_here`
- Python Version: Ensure you have Python 3.12 or higher (`python --version`)
- Permission Errors: Make sure `run.sh` is executable (`chmod +x run.sh`)
- Import Errors: Run from the project root or use the `./run.sh` script
For debugging, you can add verbose output:
```sh
DEBUG=1 ./run.sh -p "Your prompt"
```

Or run the Python module directly:

```sh
uv run --project . --quiet -m app.main -p "Your prompt" --max-iterations 5
```

- Implement retry logic for API calls
- Add more useful tools and skills (todo tasks, mcp, etc.)
- Session persistence
- Add streaming responses for better UX
- Add support for additional LLM providers
- Implement conversation history
- Add file watching capabilities
- Create GUI/web interface
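The retry-logic item on the roadmap could be sketched as exponential backoff with jitter. This is one possible approach, not a committed design; `with_retries` and its parameters are hypothetical names.

```python
import random
import time


def with_retries(call, max_attempts: int = 3, base_delay: float = 1.0):
    """Run `call()` with exponential backoff and jitter between failures."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            # Delay grows 1x, 2x, 4x, ... with random jitter to avoid
            # synchronized retries against a rate-limited API.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

In practice you would retry only transient failures (rate limits, timeouts) rather than every exception, but the control flow stays the same.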
```sh
# Clone the repository
git clone <repository-url>
cd codecrafters-claude-code

# Install with development dependencies
uv sync --dev

# Run tests
uv run pytest

# Format code
uv run ruff format app/ tests/

# Lint code
uv run ruff check app/ tests/
```

This project is part of the CodeCrafters "Build Your own Claude Code" Challenge. See the CodeCrafters website for more information about their challenges and licensing.
- CodeCrafters for the challenge platform
- OpenRouter for LLM API access
- OpenAI for the client library interface
- All contributors and testers