A skill-based agent framework with Anthropic-style progressive skill loading and built-in base tools for file and shell operations.
- Skills live in folders under `skills/`, each with a `SKILL.md` file that starts with YAML frontmatter.
- Only lightweight metadata is loaded at startup (Level 1).
- Full skill instructions are injected only after a `use_skill` call (Level 2).
- Supporting resources (Level 3) can be loaded on demand.
- The agent always has base tools for files/directories and bash execution.
```
+-----------------------------+
|          BaseAgent          |
|  - prompt assembly (L1/L2)  |
|  - use_skill handler        |
|  - tool routing             |
+-------------+---------------+
              |
              | tool calls / messages
              v
+----------------------+      +----------------------+
|      LLMClient       |      |     SkillLoader      |
|  - OpenAI-compatible |      |  - L1 metadata       |
|  - tool calling      |      |  - L2 SKILL.md       |
+----------+-----------+      |  - L3 resources      |
           |                  +----------------------+
           v
+----------------------+
|   BaseToolExecutor   |
|  - read/write/list   |
|  - create_directory  |
|  - execute_bash      |
+----------------------+
```
- Builds the system prompt with Level 1 skill metadata and activated Level 2 skill content
- Exposes a `use_skill` tool so the model can request skill activation
- Routes tool calls to the base tool executor
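The `use_skill` tool can be exposed to the model with an OpenAI-style function schema along these lines (a sketch only; the exact definition in `core/base_agent.py` may differ):

```python
def use_skill_tool_definition():
    """Illustrative OpenAI-style schema for the use_skill tool."""
    return {
        "type": "function",
        "function": {
            "name": "use_skill",
            "description": "Activate a skill by name to load its full instructions.",
            "parameters": {
                "type": "object",
                "properties": {
                    "skill_name": {
                        "type": "string",
                        "description": "Name of a skill from the list in the system prompt.",
                    }
                },
                "required": ["skill_name"],
            },
        },
    }
```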
- Discovers skill folders containing `SKILL.md`
- Loads YAML frontmatter for Level 1 metadata (name/description/license)
- Loads full `SKILL.md` content when activated (Level 2)
- Can load supporting files on demand (Level 3)
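Level 1 discovery boils down to reading only the frontmatter of each `SKILL.md`. A minimal sketch (the real loader may use a YAML library and handle nested values; this one parses only flat `key: value` pairs):

```python
import re
from pathlib import Path

def load_frontmatter(skill_md_text):
    """Split a SKILL.md string into (metadata_dict, body).

    Handles only a flat `key: value` frontmatter block between `---` markers.
    """
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", skill_md_text, re.DOTALL)
    if not match:
        return {}, skill_md_text
    meta = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, match.group(2)

def discover_skills(skills_dir="skills"):
    """Level 1: read only the frontmatter of each skills/<name>/SKILL.md."""
    skills = {}
    for skill_md in Path(skills_dir).glob("*/SKILL.md"):
        meta, _ = load_frontmatter(skill_md.read_text(encoding="utf-8"))
        skills[meta.get("name", skill_md.parent.name)] = meta
    return skills
```

Activation (Level 2) is then just returning the body instead of discarding it.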
- OpenAI-compatible client (works with OpenAI, DeepSeek, and similar APIs)
- Handles tool calling and returns tool call payloads to the agent
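In an OpenAI-style response, tool calls arrive as structured entries on the assistant message. A sketch of how an agent might unpack them (illustrative only; the real `LLMClient` works with SDK response objects rather than raw dicts):

```python
import json

def extract_tool_calls(message):
    """Return (text, tool_calls) from an OpenAI-style assistant message dict.

    tool_calls is a list of (call_id, function_name, parsed_arguments) tuples;
    arguments arrive as a JSON string and must be decoded before dispatch.
    """
    calls = []
    for call in message.get("tool_calls") or []:
        fn = call["function"]
        calls.append((call["id"], fn["name"], json.loads(fn["arguments"])))
    return message.get("content"), calls
```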
- Provides built-in tools for file I/O, directory management, and bash execution
- Restricts relative paths to the working directory
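The path restriction amounts to resolving the requested path against the working directory and rejecting anything that escapes it. A minimal sketch of that kind of check (the actual guard in `BaseToolExecutor` may differ):

```python
from pathlib import Path

def resolve_safe(path, working_dir="."):
    """Resolve `path` relative to `working_dir` and refuse escapes.

    Raises ValueError if the resolved target lies outside the working
    directory (e.g. via `..` components).
    """
    base = Path(working_dir).resolve()
    target = (base / path).resolve()
    if base != target and base not in target.parents:
        raise ValueError(f"path escapes working directory: {path}")
    return target
```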
- Clone or download this repository.
- Install dependencies:

```bash
pip install -r requirements.txt
```

- Create a `.env` file in the project root:

```
OPENAI_API_KEY=your_api_key_here
# or
DEEPSEEK_API_KEY=your_api_key_here
```

`main.py` selects the provider based on which key is present. OpenAI-compatible providers can be configured directly via `LLMClient` if you embed the system in your own application code.
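The selection could look roughly like this (a sketch of the behavior described above, not the literal code in `main.py`; the DeepSeek base URL shown is an assumption):

```python
import os

def pick_provider():
    """Return (api_key, base_url) based on which env var is set.

    base_url of None means the SDK's default OpenAI endpoint.
    The DeepSeek URL below is an assumption, not taken from main.py.
    """
    if os.environ.get("OPENAI_API_KEY"):
        return os.environ["OPENAI_API_KEY"], None
    if os.environ.get("DEEPSEEK_API_KEY"):
        return os.environ["DEEPSEEK_API_KEY"], "https://api.deepseek.com"
    raise RuntimeError("Set OPENAI_API_KEY or DEEPSEEK_API_KEY in .env")
```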
Run the main script:
```bash
python main.py
```

The system will:
- Discover available skills (Level 1 metadata only)
- Initialize the LLM client
- Start an interactive chat loop
- Activate skills when the model calls `use_skill`
Example interaction:
```
You: Make a poster for a jazz festival
Assistant: ... (activates a relevant skill if needed)
```
Skills are stored as folders under `skills/`. Each folder must contain a `SKILL.md` file with YAML frontmatter:
```
skills/
  my-skill/
    SKILL.md
    LICENSE.txt
    resources/
    scripts/
```
Example SKILL.md frontmatter:
```markdown
---
name: my-skill
description: Brief description of what this skill does
license: Optional license or reference
---
# Skill Instructions

Detailed instructions for the model...
```

The loader uses the frontmatter for Level 1 discovery. Everything after the frontmatter is treated as the full skill content (Level 2).
These tools are always available to the model:
`read_file`, `write_file`, `list_files`, `create_directory`, `execute_bash`
The working directory defaults to the current directory and can be customized when constructing `BaseAgent`.
Add a new handler method in `core/base_tools.py`, register it in `self.tools`, and include it in `get_tool_definitions()` so the model can call it.
- Discover: `SkillLoader.discover_skills()` reads frontmatter from each `SKILL.md` (Level 1).
- Prompt: `BaseAgent` injects the available skill list into the system prompt.
- Activate: The model calls `use_skill` when it needs a skill.
- Load: `SkillLoader.activate_skill()` loads full `SKILL.md` content (Level 2).
- Execute: Tool calls are routed to `BaseToolExecutor`.
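The whole lifecycle can be sketched as a single loop, here with tiny stand-in classes so it runs end to end. All names are illustrative; the real implementations live in `core/`:

```python
class FakeLLM:
    """Stand-in LLM: first asks to activate a skill, then answers."""
    def __init__(self):
        self.turn = 0
    def chat(self, messages):
        self.turn += 1
        if self.turn == 1:
            return {"content": None,
                    "tool_calls": [{"name": "use_skill",
                                    "args": {"skill_name": "pptx"}}]}
        return {"content": "done", "tool_calls": []}

class FakeSkillLoader:
    def activate_skill(self, name):
        return f"Loaded full SKILL.md for {name}"  # Level 2 content

def run_turn(llm, skill_loader, tool_handlers, messages):
    """One user turn: loop until the model replies with plain text."""
    while True:
        reply = llm.chat(messages)
        if not reply["tool_calls"]:
            return reply["content"]          # plain answer ends the turn
        for call in reply["tool_calls"]:
            if call["name"] == "use_skill":
                # Skill activation is handled by the loader, not the tools.
                result = skill_loader.activate_skill(call["args"]["skill_name"])
            else:
                # Everything else routes to the base tool handlers.
                result = tool_handlers[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": str(result)})
```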
```
skill_client_system/
├── core/
│   ├── base_agent.py
│   ├── base_tools.py
│   ├── llm_client.py
│   └── skill_loader.py
├── skills/
│   ├── canvas-design/
│   ├── frontend-design/
│   └── pptx/
├── main.py
├── requirements.txt
└── README.md
```
MIT License - feel free to use and modify for your projects.
Contributions welcome! Feel free to:
- Add new skills
- Improve existing components
- Extend base tools
- Enhance documentation