A project-aware AI development agent built on Pi. Scans your codebase, learns your conventions, and helps you build, test, review, and ship — with persistent memory across sessions.
## Features

- Smart project onboarding: Scans your repos, detects tech stack, testing patterns, linting, CI, branch conventions — and configures itself automatically
- Persistent memory system: Maintains project knowledge across sessions (directives, architecture, per-repo learnings)
- Multi-repo awareness: Works across multiple repositories with understanding of their relationships
- Jira integration: Fetch tickets and sprint data directly from Jira Cloud
- Parallel code review: Multi-model AI review that runs multiple models in parallel (Claude, GPT, Gemini, etc.) and consolidates findings by consensus — 30-60s vs minutes with traditional reviews
- Smart PR descriptions: Generates PR descriptions from your branch diff using your repo's template
- QA guide generation: Builds step-by-step QA testing guides from Jira tickets and PRs
- Work recaps: Summarizes recent sessions — tickets, PRs, decisions — for standup prep
- Flaky test diagnosis: Reproduces, analyzes, fixes, and verifies intermittent test failures across any framework
- Interactive rebase: PR-aware rebase with conflict resolution guidance and force push safety
- Browser automation: Automated browser-based QA verification using Playwright
## Prerequisites

- Pi coding agent — Install from github.com/pi-mono/coding-agent
- AI provider account — Anthropic (Claude), OpenAI (GPT), or Google (Gemini)
- GitHub CLI (optional) — Install from cli.github.com for PR/review features
- Playwright (optional) — For browser-based QA. Run `/browser-setup` inside the agent to install
## Installation

Clone the repo:

```
git clone <this-repo-url> my-project-agent
cd my-project-agent
```

Install the wyebot extension's dependencies (this installs the `yaml` package required by the extension):

```
cd .pi/extensions/wyebot
npm install
cd ../../..
```

Create `.pi/local.json` (gitignored):

```
{
  "reposPath": "/path/to/your/repos"
}
```

Make the launcher executable and start the agent:

```
chmod +x wyebot.sh
./wyebot.sh
```

Then, inside the agent:

```
/setup
```
The setup wizard walks you through:
- Choose AI provider — Anthropic, OpenAI, or Google
- Select model — Pick from available models
- Connect services — Optionally connect Jira and/or GitHub
- Onboard your project — Scans your repos and configures everything
```
/ticket PROJ-123              # Work on a Jira ticket
/ticket Fix the login bug     # Work from a description
/pr-desc my-app               # Generate a PR description
/parallel-review              # Multi-model code review
/parallel-review-lite         # Quick review (3 models max)
```
## Commands

| Command | Description |
|---|---|
| `/help` | Show all available commands grouped by category |
| `/setup` | Guided first-time setup wizard |
| `/onboard` | Scan repos, detect conventions, generate config and memory |
| `/ticket [ID or desc]` | Work on a ticket — plan, implement, test, QA |
| `/pr-desc [repo]` | Generate PR description from branch diff |
| `/learn [repo]` | Review recent changes and update memory |
| `/recap` | Summarize recent work sessions |
| `/flaky-test [test path]` | Diagnose and fix intermittent test failures |
| `/rebase` | PR-aware interactive rebase |
| `/parallel-review [repo\|PR]` | Multi-model parallel review (all configured models) |
| `/parallel-review-lite [repo\|PR]` | Quick parallel review (max 3 models, faster) |
| `/qa-guide` | Generate QA testing guide from ticket/PR |
| `/browser-setup` | Install Playwright for browser QA |
| `/browser-reset` | Reset browser session |
| `/memory` | Show memory files status |
| `/change-provider` | Switch AI provider and model |
| `/jira-login` | Configure Jira credentials |
| `/github-login` | Set up GitHub CLI authentication |
## Command details

Detailed flows for the more complex commands:
### `/ticket`

Purpose: Full development workflow from ticket analysis to implementation.
Flow:
- Fetch ticket — Retrieves ticket from Jira (or uses provided description)
- Analyze requirements — Breaks down acceptance criteria and technical requirements
- Load context — Loads relevant repo memory files based on what needs to change
- Plan implementation — Creates a multi-repo implementation plan with step-by-step tasks
- Confirm plan — Presents plan for review (behavior depends on the `agent.autonomy` setting)
- Implement changes — Makes code changes across affected repos
- Add/update tests — Creates or modifies tests to cover new functionality
- Run tests — Executes test suite if `agent.execution.run_tests: true`
- Run linter — Auto-fixes code style if `agent.execution.run_linter: true`
- Generate QA guide — Creates step-by-step manual testing guide with preconditions, test steps, expected results, and edge cases
- Browser verification — Executes automated QA checks in browser if Playwright is available
- Update memory — Records new patterns and learnings in memory files
- Create PR — Optionally creates a PR if `agent.git.create_pr: true`
Example:

```
/ticket PROJ-123
/ticket Add password reset functionality to user settings
```
### `/parallel-review`

Purpose: Multi-model parallel code review — runs multiple AI models simultaneously and consolidates findings by consensus.
How it works:
- Dynamic model selection — Automatically uses all configured AI providers (Claude, GPT, Gemini, xAI, etc.)
- Parallel execution — All models review the same diff independently (~300ms stagger to avoid conflicts)
- Single-shot analysis — Each model receives the full diff and responds immediately (no tool calls), making reviews fast (15-45s per model)
- Consensus ranking — Findings are grouped by similarity and ranked by `consensusScore = agents_count × severity_weight`
- Real-time progress — Shows live status as each agent completes
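The consensus scoring above can be sketched in TypeScript. This is an illustrative sketch only — the grouping key (file + line) and the severity weights are assumptions for the example, not wyebot's actual implementation:

```typescript
// Illustrative consensus ranking (assumed weights and grouping key).
type Severity = "critical" | "warning" | "suggestion";

interface Finding {
  file: string;
  line: number;
  description: string;
  severity: Severity;
  agent: string;
}

// Hypothetical weights — the real ones may differ.
const SEVERITY_WEIGHT: Record<Severity, number> = {
  critical: 3,
  warning: 2,
  suggestion: 1,
};

// Group similar findings (here: same file + line) and rank by
// consensusScore = agents_count × severity_weight.
function rankByConsensus(findings: Finding[]) {
  const groups = new Map<string, Finding[]>();
  for (const f of findings) {
    const key = `${f.file}:${f.line}`;
    let group = groups.get(key);
    if (!group) {
      group = [];
      groups.set(key, group);
    }
    group.push(f);
  }
  return [...groups.values()]
    .map((group) => {
      const agents = new Set(group.map((f) => f.agent)).size;
      const worst = Math.max(...group.map((f) => SEVERITY_WEIGHT[f.severity]));
      return { finding: group[0], agents, score: agents * worst };
    })
    .sort((a, b) => b.score - a.score);
}
```

A warning flagged by two agents (2 × 2 = 4) outranks a critical flagged by one (1 × 3 = 3), which is the point of consensus weighting.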
Flow:
- Interactive picker — Choose what to review:
  - Current branch vs base (master/main)
  - A specific PR (by number, URL, Jira ticket ID, or branch name)
  - Skip the picker by providing a target directly: `/parallel-review https://github.com/org/repo/pull/123`
- Fetch diff — Retrieves the complete changeset
- Launch parallel agents — Spawns one agent per configured AI model:
  - Up to 3 Claude models (Opus, Sonnet, Haiku)
  - 1 agent per other provider (GPT, Gemini, xAI, etc.)
  - Each reviews for: bugs, security, performance, style, best practices
- Consolidate findings — Groups similar issues by file + line + description overlap
- Rank by consensus — Issues found by multiple models rank higher than single-model findings
- Generate report — Markdown report with:
  - Findings grouped by severity (🔴 Critical, 🟡 Warnings, 🟢 Suggestions)
  - Consensus tags showing `[3/4 agents]` for each finding
  - Per-agent scores (1-10) and finding counts
  - Combined summary from all agents
Example output:

```
### 🔴 Critical Issues — 2

**[3/4 agents]** `app/controllers/orders_controller.rb:45` — **Missing authorization**
No authorization check before accessing sensitive order data.
> 💡 Add authorization check: `authorize! :manage, @order`

### Scores

| Agent           | Score | Findings |
|-----------------|-------|----------|
| claude-opus-4-6 | 7/10  | 8        |
| gemini-2.5-pro  | 8/10  | 5        |
| gpt-5.1-codex   | 6/10  | 11       |
```

Variants:

- `/parallel-review` — Full review with all configured models (can be 5+ models)
- `/parallel-review-lite` — Quick review with max 3 models (faster, cheaper)
Commands:

```
/parallel-review                          # Interactive picker
/parallel-review my-backend               # Jump to PR picker in that repo
/parallel-review 42                       # Review PR #42 (asks which repo)
/parallel-review https://github.com/…/42  # Direct URL, skip all pickers
/parallel-review PROJ-123                 # Find PR by Jira ticket ID
/parallel-review-stop                     # Cancel a running review
```

Performance:
- Time per agent: 15-45s (single API call with embedded diff)
- Total time: ~30-60s (all models run in parallel)
- Diff size limit: 40k chars (auto-truncates larger diffs)
- Timeout: 2 minutes per agent
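The staggered parallel launch with a per-agent timeout can be sketched as follows. This is a simplified illustration, not the actual extension code; `reviewWithModel` is a placeholder for the real per-model review call:

```typescript
// Sketch: launch agents ~300ms apart, cap each at 2 minutes, wait for all.
const STAGGER_MS = 300;        // offset between launches to avoid conflicts
const TIMEOUT_MS = 2 * 60_000; // 2-minute cap per agent

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Reject if the wrapped promise takes longer than `ms`.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<T>((_, reject) => {
    timer = setTimeout(() => reject(new Error("agent timed out")), ms);
  });
  return Promise.race([p, timeout]).finally(() => clearTimeout(timer));
}

async function launchAgents(
  models: string[],
  reviewWithModel: (model: string) => Promise<string>
): Promise<PromiseSettledResult<string>[]> {
  const runs = models.map(async (model, i) => {
    await sleep(i * STAGGER_MS); // stagger launches
    return withTimeout(reviewWithModel(model), TIMEOUT_MS);
  });
  // allSettled: one slow or failing agent doesn't sink the others.
  return Promise.allSettled(runs);
}
```

Using `Promise.allSettled` rather than `Promise.all` matches the behavior described above: each agent's result (or failure) is reported independently.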
Tip: Use `/parallel-review-lite` for quick checks during development. Use `/parallel-review` for final pre-merge review.
### `/onboard`

Purpose: One-time project analysis and configuration generation.
Flow:
- Scan project structure — Finds all repos in configured location
- Detect tech stack — Identifies languages, frameworks, databases from config files:
  - Package managers (`package.json`, `Gemfile`, `requirements.txt`, `go.mod`, `Cargo.toml`, etc.)
  - Framework files (`config/application.rb`, `mix.exs`, `tsconfig.json`, etc.)
  - Database configs (`schema.rb`, migrations, Prisma schema, etc.)
- Identify testing patterns — Finds test framework and conventions:
  - Test file locations and naming patterns
  - Factory/fixture patterns
  - Test commands from scripts or CI config
- Find linting setup — Detects linters and formatters from config files
- Analyze git conventions — Samples recent commits for message patterns and branch naming
- Detect CI/CD — Reads GitHub Actions, GitLab CI, CircleCI configs
- Scan domain model — Parses primary models and relationships
- Generate `project.yml` — Creates configuration with detected conventions
- Generate memory files — Creates `DIRECTIVES.md`, `ARCHITECTURE.md`, and per-repo files with initial knowledge
When to run:
- First time setting up wyebot
- After major architectural changes
- When adding new repos to the project
- To refresh stale memory files
### `/flaky-test`

Purpose: Systematic diagnosis and fix of intermittent test failures.
Flow:
- Reproduce flakiness — Runs the test 10-50 times to confirm intermittent behavior
- Collect failure patterns — Records which runs fail and captures error messages
- Analyze root cause — Examines common flaky test causes:
  - Race conditions and timing issues
  - Non-deterministic data (random values, timestamps)
  - Shared state between tests
  - External dependencies (network, filesystem)
  - Test order dependencies
- Propose fix — Suggests one or more solutions based on diagnosis
- Implement fix — Applies the chosen solution to the test
- Verify stability — Runs the test many times to confirm flakiness is eliminated
- Document pattern — Updates memory with the flaky pattern and fix for future reference
Example:

```
/flaky-test spec/models/user_spec.rb
/flaky-test tests/integration/checkout.test.ts
```
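The reproduction step boils down to running the same test repeatedly and counting failures. A minimal bash sketch of that idea — the flaky command below is a stand-in; in practice you'd substitute your real test command (e.g. an rspec or jest invocation):

```shell
#!/usr/bin/env bash
# Count how often a command fails over N runs — a quick flakiness signal.
flaky_count() {
  local cmd="$1" runs="$2" fails=0
  for _ in $(seq 1 "$runs"); do
    eval "$cmd" >/dev/null 2>&1 || fails=$((fails + 1))
  done
  echo "$fails"
}

# Stand-in command that fails roughly half the time; replace with e.g.
# 'bundle exec rspec spec/models/user_spec.rb'.
fails=$(flaky_count 'test $((RANDOM % 2)) -eq 0' 20)
echo "failed $fails of 20 runs"
```

A stable test reports 0 failures; anything in between is the intermittent signature the diagnosis step then investigates.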
### `/rebase`

Purpose: Safe interactive rebase with PR awareness and conflict guidance.
Flow:
- Detect situation — Determines:
  - Current branch and associated PR
  - Base branch (main/master)
  - How many commits ahead/behind
  - Whether conflicts are expected
- Show PR status — Displays PR checks, reviews, and merge blockers
- Confirm rebase — Asks for confirmation before starting (shows what will happen)
- Start rebase — Executes `git rebase main` (or the configured base branch)
- Guide conflict resolution — If conflicts occur:
  - Shows conflicting files
  - Explains the conflict context
  - Suggests a resolution strategy
  - Can apply fixes if approved
- Continue rebase — Resumes after conflicts are resolved
- Verify result — Runs tests if configured to ensure rebase didn't break anything
- Force push guidance — Reminds about force-push and checks for coauthor coordination
Safety features:
- Never force-pushes automatically
- Checks for PR co-authors before suggesting force-push
- Offers abort option at any conflict
- Validates working directory is clean before starting
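The underlying git mechanics can be demonstrated end-to-end in a throwaway repo. Everything here (branch names, file names) is illustrative; the force-push comments mirror the safety notes above:

```shell
#!/usr/bin/env bash
set -e
# Build a tiny repo where main has moved on under a feature branch.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git symbolic-ref HEAD refs/heads/main   # ensure the base branch is "main"
git config user.email you@example.com && git config user.name you
echo base > f.txt && git add f.txt && git commit -qm "base"

git checkout -q -b feature
echo feature >> f.txt && git commit -aqm "feature work"

git checkout -q main
echo other > g.txt && git add g.txt && git commit -qm "main moved on"

git checkout -q feature
git rebase main      # replay feature commits onto the updated base
git log --oneline    # "feature work" now sits on top of main's tip

# If conflicts occur: resolve the files, `git add` them, then
# `git rebase --continue` (or `git rebase --abort` to back out).
# After a rebase, prefer `git push --force-with-lease`: it refuses to
# overwrite remote commits you haven't fetched, unlike plain --force.
```

`--force-with-lease` is what makes the co-author check above meaningful: if a teammate pushed to the branch since your last fetch, the push is rejected instead of silently discarding their work.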
### `/qa-guide`

Purpose: Generate comprehensive manual testing guide from requirements and code changes.
Flow:
- Fetch ticket — Gets acceptance criteria and description from Jira
- Find associated PR — Locates PR linked to the ticket (via branch name or ticket key in PR title)
- Analyze code changes — Reviews the PR diff to understand:
  - What features were added
  - What flows were modified
  - What edge cases exist in the code
- Extract test scenarios — Identifies:
  - Happy path scenarios from acceptance criteria
  - Edge cases from code logic (validations, error handling)
  - Affected user flows
- Generate test plan — Creates structured guide with:
  - Preconditions: Setup steps and test data needed
  - Test steps: Numbered step-by-step instructions
  - Expected results: What should happen at each step
  - Edge cases: Boundary conditions and error scenarios
  - Regression checks: Related features that might be affected
- Format for QA — Outputs markdown or Notion-friendly format
Example:

```
/qa-guide PROJ-456
/qa-guide https://github.com/org/repo/pull/123
```
### `/learn`

Purpose: Review recent code changes and update memory with new patterns and conventions.
Flow:
- Determine context — Checks if on a feature branch or main/master:
  - Feature branch: Analyzes your uncommitted and committed changes (your diff)
  - Main/master: Reviews recent commits (after `git fetch` or `git pull`)
- Load current memory — Reads existing memory files for the repo
- Analyze changes — Reviews commits/diff for:
  - New patterns or conventions
  - Architectural decisions
  - Code organization changes
  - New dependencies or integrations
  - Testing patterns
  - Bug fixes and gotchas
- Extract learnings — Identifies what's worth remembering:
  - New conventions (naming, structure, patterns)
  - Technical decisions and rationale
  - Common pitfalls discovered
  - Integration details
- Update memory — Modifies the repo memory file:
  - Updates existing topics in-place if they exist
  - Adds new topics only when necessary
  - Keeps the Quick Reference section up to date
- Summarize changes — Shows what was learned and added to memory
When to use:
- After completing a major feature
- After team standup to learn from others' commits
- Before starting new work to refresh context
- To capture patterns from code review feedback
## Configuration

After running `/onboard`, your project configuration lives in `project.yml`:
```yaml
project:
  name: "My App"
  description: "E-commerce platform with microservices architecture"

repos:
  - name: my-backend
    path: ./my-backend
    type: primary
    stack: rails   # or: phoenix, express, fastapi, go, rust, etc.
  - name: my-frontend
    path: ./my-frontend
    type: service
    stack: react   # or: vue, svelte, angular, etc.

conventions:
  branch_format: "ticket-number/description"
  linter: "rubocop -A"               # or: "npx eslint --fix .", "mix format", "cargo fmt", etc.
  test_command: "bundle exec rspec"  # or: "npx jest", "pytest", "go test ./...", "mix test", etc.
  pr_template: ".github/pull_request_template.md"
  merge_strategy: squash

agent:
  autonomy: mixed                # planning: confirmatory | autonomous | mixed
  git:
    create_branches: true        # create and switch branches
    commit: false                # git commit
    push: false                  # git push
    create_pr: false             # create PRs via gh
  execution:
    run_tests: true              # run test suite
    run_linter: true             # run linter with auto-fix
    install_dependencies: false  # npm install, bundle install, pip install, etc.
    run_migrations: false        # db:migrate, alembic upgrade, prisma migrate, etc.
  services:
    comment_on_prs: false        # leave comments on GitHub PRs
    update_jira: false           # modify Jira tickets
  guardrails: []                 # additional free-text rules
  protected_files: []            # files the agent must never modify

jira:
  board_id: 42
  ticket_prefixes: ["PROJ", "BACK", "FRONT"]
  exclude_prefixes: ["OPS"]
```

This file is auto-generated by `/onboard` but fully editable. The agent reads it for:
- Which repos exist and where they are
- What commands to run for tests and linting
- Branch naming and commit conventions
- Autonomy flags — what the agent can and cannot do (git, execution, services)
- Jira board configuration
## Memory

The agent maintains persistent knowledge in `memory/`:
```
memory/
├── DIRECTIVES.md      ← Project rules, conventions, coding standards
├── ARCHITECTURE.md    ← System architecture, domain model, patterns
└── repos/
    ├── my-backend.md  ← Per-repo knowledge (generated by /onboard)
    ├── my-frontend.md
    └── ...
```
- DIRECTIVES.md and ARCHITECTURE.md are auto-injected into the agent's context at every turn.
- Per-repo files are loaded on-demand when the agent determines which repos are affected.
- Both files start with a Quick Reference section for rapid orientation.
- The agent updates memory after completing work — patterns, conventions, and gotchas accumulate over time.
"Learned Patterns" and "Discovered Patterns" sections use topic headings (e.g. `### Authentication`, `### Testing Patterns`). The agent updates topics in-place instead of appending duplicates, keeping files concise.

The `search_memory` tool searches across all memory files for a keyword. Useful for finding how other repos handle similar problems.
## Custom skills

Create a new skill in `.pi/skills/<skill-name>/SKILL.md`:
```
---
name: my-skill
description: What this skill does
---

# My Custom Skill

Instructions for the agent when this skill is invoked...
```

Then use it with `/skill:my-skill` or register a command shortcut in the extension.
## Customization

- Guardrails: Edit `agent.guardrails` in `project.yml`
- Conventions: Edit `conventions` in `project.yml`
- Memory: Edit files in `memory/` directly — the agent will respect your changes
- System prompt: Edit `.pi/AGENTS.md` for fundamental behavior changes
Edit `.pi/extensions/wyebot/index.ts` to register new commands via `pi.registerCommand()`.
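A registration might look roughly like the sketch below. The exact `pi.registerCommand()` signature is an assumption here — mirror the existing registrations in `index.ts` for the real shape; the `PiApi` interface and `greet` command are hypothetical:

```typescript
// Hypothetical sketch: the real pi API signature may differ — check the
// existing registrations in .pi/extensions/wyebot/index.ts.
interface PiApi {
  registerCommand(
    name: string,
    handler: (args: string) => void | Promise<void>
  ): void;
}

// Register a toy /greet command against whatever pi object the
// extension receives.
function registerGreet(pi: PiApi): void {
  pi.registerCommand("greet", (args) => {
    console.log(`Hello${args ? `, ${args}` : ""}!`);
  });
}
```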
## Credentials

Credentials are stored locally at `~/.pi/agent/` with restrictive permissions (0600):

- `jira-auth.json` — Jira credentials

These are not committed to the repo. Each team member configures their own.
## License

MIT