feat: add agent-based solution generation via Claude Agent SDK #104
Open
andylizf wants to merge 16 commits into
Conversation
Add agent model support to the solution generation pipeline:
- Detect `-agent` suffix models and store problem_dir in GenerationTask
- Add --agent-timeout and --agent-cost-limit CLI arguments
- Branch execute_task to call generate_agent_solution for agent models
- Save .meta.json alongside generated .cpp solutions
- Add `import json` for metadata serialization
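A minimal sketch of the detection-and-branching flow this commit describes. Only the `-agent` suffix, `GenerationTask`'s `problem_dir` field, and `generate_agent_solution` come from the commit itself; the other names are illustrative placeholders.

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Optional

AGENT_SUFFIX = "-agent"


@dataclass
class GenerationTask:
    model: str
    problem_id: str
    problem_dir: Optional[Path] = None  # stored only for agent models, per this commit


def is_agent_model(model: str) -> bool:
    """Agent models are selected purely by the -agent name suffix."""
    return model.endswith(AGENT_SUFFIX)


def generate_agent_solution(task: GenerationTask) -> str:
    """Stub; the real implementation lives in agent_interface.py."""
    raise NotImplementedError


def generate_api_solution(task: GenerationTask) -> str:
    """Stub; illustrative name for the pre-existing non-agent path."""
    raise NotImplementedError


def execute_task(task: GenerationTask) -> str:
    # Branch to the agent path for -agent models, otherwise keep the
    # existing single-completion generation path.
    if is_agent_model(task.model):
        return generate_agent_solution(task)
    return generate_api_solution(task)
```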
- Copy problem dir to temp directory so agent doesn't pollute originals
- Makes concurrent runs on same problem safe
- Track token usage from streaming message_delta events (only reliable source when timeout kills run before ResultMessage arrives)
- Clean up temp dir after extraction
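A rough sketch of the copy-to-temp, token-tracking, and cleanup flow from the commit above, assuming hypothetical helper names (`stream_agent_events`, `extract_solution`) for the SDK streaming loop and the solution extraction step.

```python
import shutil
import tempfile
from pathlib import Path


def stream_agent_events(workdir: Path):
    """Stub; the real loop consumes the Claude Agent SDK event stream."""
    yield from ()


def extract_solution(workdir: Path) -> str:
    """Stub; picks up the generated .cpp from the scratch workdir."""
    return ""


def run_agent_on_copy(problem_dir: Path) -> tuple[str, int]:
    """Run the agent against a throwaway copy of the problem directory."""
    workdir = Path(tempfile.mkdtemp(prefix="agent-"))
    output_tokens = 0
    try:
        # Copy so the agent never writes into the original problem dir,
        # which also makes concurrent runs on the same problem safe.
        shutil.copytree(problem_dir, workdir / problem_dir.name)
        # Accumulate usage from streamed message_delta events: if a timeout
        # kills the run before a final ResultMessage arrives, these deltas
        # are the only usage numbers we ever see.
        for event in stream_agent_events(workdir):
            if event.get("type") == "message_delta":
                output_tokens += event.get("usage", {}).get("output_tokens", 0)
        solution = extract_solution(workdir)
    finally:
        shutil.rmtree(workdir, ignore_errors=True)
    return solution, output_tokens
```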
… for agent eval

Build dynamic agent prompts from problem config (time/memory limits, subtask counts, interactive vs standard). Write test_all.sh and run_interactive.sh into the agent workdir. Embed small sample I/O directly in the prompt. Add CLAUDE.md with solving-strategy guidance.
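A small sketch of how such a dynamic prompt could be assembled from the problem config; the config keys and the sample format are assumptions, not this PR's actual schema.

```python
def build_agent_prompt(config: dict, samples: list[tuple[str, str]]) -> str:
    """Build the initial agent message from problem-specific config values."""
    kind = "interactive" if config.get("interactive") else "standard"
    lines = [
        f"Problem type: {kind}",
        f"Time limit: {config['time']} s, memory limit: {config['memory']} MB",
        f"Subtasks: {config.get('subtask_count', 1)}",
    ]
    # Small sample I/O is embedded directly so the agent can sanity-check
    # its first attempt without opening any test-data files.
    for i, (sample_in, sample_out) in enumerate(samples, start=1):
        lines += [f"Sample {i} input:", sample_in, f"Sample {i} output:", sample_out]
    return "\n".join(lines)


# e.g. build_agent_prompt({"time": 2, "memory": 256}, [("3\n1 2 3\n", "6\n")])
```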
Parity mode (--parity flag) strips all test data, helper scripts, checker, and interactor from the agent workspace, matching the Harbor adapter setup where agents must self-test via brute-force cross-validation (stress testing).

Changes:
- agent_interface.py: parity-aware prompt, workspace setup, CLAUDE.md, _get_infra_git_hash(), and enriched build_metadata (timestamp, parity flag)
- generate_solutions.py: --parity CLI argument
- tests: parity prompt validation (standard + interactive)
- docs: solutions repo separation plan (infra_git_hash in meta.json)
- .gitignore: exclude .claude/ directory
- pyproject.toml: add pytest dev dependency
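A sketch of what the parity workspace setup could look like. The checker, interactor, test data, and the two helper scripts are named in this PR; the exact glob patterns used to exclude them are illustrative.

```python
import shutil
from pathlib import Path

# Judge-only assets the agent must not see in parity mode; it has to
# self-test with its own brute-force cross-validation instead.
PARITY_EXCLUDED = ("tests", "checker*", "interactor*", "test_all.sh", "run_interactive.sh")


def setup_agent_workspace(problem_dir: Path, workdir: Path, parity: bool) -> None:
    """Copy the problem into the agent workdir, stripping judge assets in parity mode."""
    ignore = shutil.ignore_patterns(*PARITY_EXCLUDED) if parity else None
    shutil.copytree(problem_dir, workdir, ignore=ignore, dirs_exist_ok=True)
```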
These belong to the solutions repo separation effort, which is docs-only for now. Removed _get_infra_git_hash(), subprocess import, and the infra_git_hash/timestamp/parity fields from build_metadata().
…n doc

Agent always runs without test data — no --parity flag needed. The solutions repo separation plan is not ready to commit.
Move all large string constants (prompt templates, shell scripts, CLAUDE.md content) out of agent_interface.py into a dedicated constants module.
Prompt (initial message) is now lean — only problem-specific info (path, type, limits). CLAUDE.md carries persistent guidance that survives context compaction: self-testing methodology, workflow steps, common mistakes, retreat strategy.
andylizf added a commit to YanagiOrigami/harbor that referenced this pull request on Apr 24, 2026
- adapter.py + template/solution/solve.sh: solve.sh body lives in the template; _write_solution just copies it (per @Rebabit "can this part use the template?").
- adapter.py + template/environment/docker-compose.yaml: the static YAML body is in the template with {main_volumes}/{judge_source}/{judge_volumes} placeholders; _render_environment_compose only computes the per-task substitutions (per @Rebabit "use template file directly wherever possible").
- README: replace the manual "git clone Frontier-CS" step with the auto-clone behavior of `run_adapter.py --source <git-url>` and point readers at FrontierCS/Frontier-CS#104 (branch feat/agent-eval-algorithmic) for the agent-mode generator the parity numbers reproduce; add a "Reproducing the Parity Numbers" section with side-by-side commands for the original-side and harbor-side runs.
- utils.py: parse_time_limit / parse_memory_limit accept str|int|float and stringify before re.match, so a config.yaml with bare-int values (e.g. `time: 2`) no longer raises TypeError (recurring claude/devin bot finding).
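The utils.py fix in the last bullet could look roughly like this. The accepted formats and the regex are assumptions; the point from the commit is coercing to str before `re.match` so bare-int config values no longer raise TypeError.

```python
import re


def parse_time_limit(value: str | int | float) -> float:
    """Return the time limit in seconds.

    config.yaml may carry a bare number (time: 2) or a string ("2s"),
    so coerce to str before matching instead of assuming a string input
    (the old code raised TypeError on bare ints).
    """
    text = str(value).strip()
    match = re.match(r"^(\d+(?:\.\d+)?)\s*s?$", text)
    if not match:
        raise ValueError(f"unrecognized time limit: {value!r}")
    return float(match.group(1))


# parse_time_limit(2) == 2.0, parse_time_limit("2.5s") == 2.5
```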
Slimshilin added a commit to harbor-framework/harbor that referenced this pull request on Apr 26, 2026
* FrontierCS-Adapter
* readme upd
* Frontier-CS Adapter
* readme upd
* readme upd
* Readme upd
* ruff format
* aligned agent environment
* docker budget upd
* prompt upd
* upd
* claude token limit setting.
* Parity Experiments
* Links Upd
* builder name upd
* Minor Fix
* minor fix
* final upd
* docker published
* address review from @crystalxyz
  - .gitignore: drop log-file patterns added to repo root
  - template/task.toml: add [task] block (name/authors/keywords) and rename `version` -> `schema_version` to match TaskConfig schema
  - adapter.py: substitute {problem_id} into task.toml template before model_validate_toml so PackageInfo.name passes ORG_NAME_PATTERN
  - parity_experiment.json: remove oracle entries; the README parity table + "On Oracle Score < 100%" section already document them
  - README.md: move Citing Us to the end and add Authors & Contributions section, matching the aider_polyglot adapter layout
* bump task.toml schema_version to 1.1 to match TaskConfig default
* address Devin review findings
  - evaluate.py: read MAX_POLL_TIME from env var (default 600), and adapter.py sets it per-task to verifier_timeout - 30 so the poll budget tracks the computed container timeout (e.g. 730s on problems with 70 cases) instead of the old 600s ceiling. This resolves the case where a valid submission could silently score 0.0 when a longer problem's judge cycle exceeded the hard-coded 600s poll cap.
  - adapter.py --skip-interactive: match on "!= interactive" rather than "== default" so the flag name matches its semantics and future non-default, non-interactive problem types are preserved.
* parity fix
* task template fix
* parity: drop oracle entries, switch error bars to SEM, sync README
  Addresses @crystalxyz review follow-up:
  1. parity_experiment.json: removes the 10 oracle entries that had re-appeared (they duplicate the README's per-task oracle column; JSON is meant to record agent parity data only).
  2. parity_experiment.json + README: recomputes every "mean +/- X" as the sample standard error of the mean (SEM = sample_std / sqrt(n), n = 3), replacing the population std used previously. This matches the convention the team is standardising on for the Frontier-CS paper.
  3. README: updates the parity table with the new SEM values, revises the Note section to describe the 0-counting policy and SEM convention, and replaces the now-stale Problem-2-specific callout with a generic note about variance from token-limit truncations.
  Oracle scores stay in the README's per-task table; they're presented as a column there rather than as separate parity entries.
* chore: update parity_summary.csv [skip ci]
* address review from @Rebabit and @claude bot
  - adapter.py + template/solution/solve.sh: solve.sh body lives in the template; _write_solution just copies it (per @Rebabit "can this part use the template?").
  - adapter.py + template/environment/docker-compose.yaml: the static YAML body is in the template with {main_volumes}/{judge_source}/{judge_volumes} placeholders; _render_environment_compose only computes the per-task substitutions (per @Rebabit "use template file directly wherever possible").
  - README: replace the manual "git clone Frontier-CS" step with the auto-clone behavior of `run_adapter.py --source <git-url>` and point readers at FrontierCS/Frontier-CS#104 (branch feat/agent-eval-algorithmic) for the agent-mode generator the parity numbers reproduce; add a "Reproducing the Parity Numbers" section with side-by-side commands for the original-side and harbor-side runs.
  - utils.py: parse_time_limit / parse_memory_limit accept str|int|float and stringify before re.match, so a config.yaml with bare-int values (e.g. `time: 2`) no longer raises TypeError (recurring claude/devin bot finding).
* lint: ruff format adapter.py
* README parity table follows the standard adapter format
  Pre-empt @Slimshilin's review:
  - README Parity Experiments: lead with the single-row aggregate table in the standard format spec'd by adapters-human.mdx and used by aider_polyglot / algotune (Agent | Model | Metric | Number of Runs | Dataset Size | Original (mean ± SEM) | Harbor (mean ± SEM)). Aggregate is computed by treating each problem's per-side mean as one observation (n = 10) and reporting sample SEM across problems. Per-problem detail table demoted to a "### Per-Problem Detail" subsection but retained for granularity.
  - pyproject.toml: rename package "frontier-cs-adapter" -> "harbor-frontier-cs-algorithm-adapter" to match the "harbor-<folder>-adapter" convention; uv.lock regenerated.
* adapter: migrate to src/ layout, rewrite README per spec, polish metadata
  Address @Slimshilin's review: the bot warnings/minors are merge blockers.
  Layout migration (per docs/content/docs/datasets/adapters.mdx "Adapter code directory" + the harbor adapter init scaffold):
  - adapter.py / utils.py / agent_constants.py / __init__.py moved into src/frontier_cs_algorithm/.
  - template/ moved into src/frontier_cs_algorithm/task-template/.
  - run_adapter.py replaced by src/frontier_cs_algorithm/main.py with the spec-required flags --output-dir / --limit / --overwrite / --task-ids (the adapter-specific --source / --skip-interactive / --docker-image / etc. flags are preserved).
  - pyproject.toml uses uv_build, exposes the `frontier-cs-algorithm` console script, and packages src/frontier_cs_algorithm.
  - adapter.py drops the direct-execution import fallback now that the adapter only loads as a package.
  README rewritten to follow the canonical src/harbor/cli/template-adapter/README.md sections in order, no added or renamed top-level sections. Per-problem table moved into "Notes & Caveats"; aggregate parity row stays in "Comparison with Original Benchmark (Parity)" with reproduction commands for both sides.
  run_frontier-cs-algorithm.yaml added: oracle agent default, Anthropic key + FRONTIER_CS_ALGORITHMIC_PATH passed through.
  adapter_metadata.json polish:
  - split renamed "per_problem_parity" -> "full" to match spec wording.
  - added_agents / parity_unmatching_agents now use ["None"] instead of null/[].
  - parity_costs is now a string ("Not separately tracked..."); team can refine.
  - Drop the "173 reference.cpp submissions" mismatch (172 problems, 172-attempt sweep); notes describe the full-set sweep without the off-by-one number.
  Smoke test: `uv run python -m frontier_cs_algorithm.main` regenerates all 172 tasks; ruff format + lint clean.
* authors: collapse to single Frontier-CS Team contact
  Per @Joyemang's direction: replace the 5 individual authors in task.toml with a single { name = "Frontier-CS Team", email = "frontier-cs@berkeley.edu" } entry. The full original-paper author list still appears in the README citation; the [task] block now carries a stable team contact for downstream registry/automation use, which also resolves the long-standing claude-bot finding about missing email fields.
* naming: align task name and dataset dir to <adapter-id>-<problem>
  Per @Joyemang's "filename and format mismatch" comment: task names and dataset directories were inconsistent with the adapter id and with the convention used by every recently merged adapter (algotune, aider_polyglot, aa-lcr, ace-bench). Compared to algotune as the cleanest precedent:

                 algotune                          frontier-cs-algorithm
    adapter id:  algotune                          frontier-cs-algorithm
    top dir:     datasets/algotune/                datasets/frontier-cs-algorithm/
    task dir:    algotune-<problem>                frontier-cs-algorithm-<id>
    task.name:   (matches dir, no separate org)    frontier-cs/frontier-cs-algorithm-<id>

  This commit fixes three coupled issues at once:
  - task.toml template `[task].name`: drop the spurious "-ic" and the double-underscore separator -> `frontier-cs/frontier-cs-algorithm-{problem_id}`.
  - adapter.py task_dir: include the full adapter id -> `frontier-cs-algorithm-{problem.problem_id}` (was `frontier-cs-{...}`).
  - README references updated to the new path layout.
  Dataset-side rename (dir tree under harbor-datasets) lands in the follow-up commit on harbor-datasets#205.
* adapter polish: pull runtime args into __init__, drop dead has_reference, fix metadata enums
  Address claude bot's latest /review-adapter findings (round 8):
  - FrontierCSAdapter now follows the tutorial convention: runtime settings (limit / overwrite / task_ids / skip_interactive) live on the constructor, and the entry point is `run() -> list[Path]`. main.py constructs once and calls adapter.run() (was adapter.prepare_tasks(...)).
  - Drop the dead FrontierCSProblem.has_reference field. _write_solution was already rechecking the reference.cpp file directly, so removing the field has no behavior change.
  - adapter_metadata.json: align with the harbor adapter init scaffold — added_agents goes from ["None"] to []; parity_unmatching_agents goes from ["None"] to null. Bot was correct that the literal string "None" inside an array doesn't match the template. (The bot's lingering "173 reference.cpp submissions" callout was already removed in commit d334b68; the adapter_metadata.json on this branch only mentions "every problem with a shipped reference.cpp", no off-by-one number.)
  Smoke test: regen of problem 0 against the new layout produces a byte-identical task directory to harbor-datasets HEAD.
* main: wrap source-resolution in try/finally so tmp_dir always cleans up
  Address Devin Review's new finding on commit 5906811: when --source is a Git URL, the temp clone created by tempfile.mkdtemp() was only cleaned up on the success path. Any exception in between (failing git clone, docker build, or adapter.run()) would leave the temp directory and the shallow clone behind. Wrapping the whole post-clone block in try/finally — with shutil.rmtree(..., ignore_errors=True) in the finally — guarantees cleanup regardless of failure mode.
* parity_experiment.json: consolidate to 1 entry x 10 metrics
  Address bot's "parity_benchmark_size: 1 per entry vs 10 in adapter_metadata" finding by following the same structure as adapters/aider_polyglot and adapters/algotune: a single top-level entry per (agent x model) experiment, with parity_benchmark_size matching the total number of tasks evaluated, and one entry inside the metrics array per task. Top-level fields stay (adapter_name, agent, model, date, parity_benchmark_size = 10, number_of_runs = 3, repo links). The notes field carries the parity-policy summary that previously lived on each per-problem entry (n=3 with token-limit-zero padding, sample SEM convention, subset-selection rationale). Per-problem run arrays and computed mean +/- SEM live as the 10 entries inside metrics. This also collapses parity_benchmark_size to a single value (10) that matches adapter_metadata.json's parity_benchmark_size, removing the cosmetic inconsistency the bot flagged.
* chore: update parity_summary.csv [skip ci]
* chore: update parity_summary.csv [skip ci]

---------

Co-authored-by: Slimshilin <slimshilin2004@gmail.com>
Co-authored-by: Andy Lee <andylizf@outlook.com>
Co-authored-by: Crystal Zhou <45134936+crystalxyz@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
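Two of the mechanics described in the squashed commits above, sketched under assumptions about the surrounding code. First, the try/finally temp-clone cleanup; the clone and adapter-run helpers are placeholders, not the adapter's real function names.

```python
import shutil
import tempfile


def clone_repo(source: str, dest: str) -> None:
    """Placeholder for the shallow git clone of --source."""
    ...


def run_adapter_from(path: str) -> None:
    """Placeholder for docker build + adapter.run()."""
    ...


def resolve_source_and_run(source: str) -> None:
    """Clone a --source Git URL into a temp dir and always clean it up."""
    tmp_dir = tempfile.mkdtemp(prefix="frontier-cs-")
    try:
        # Any failure here (git clone, docker build, adapter.run()) used to
        # leak tmp_dir; the finally below now cleans up on every path.
        clone_repo(source, tmp_dir)
        run_adapter_from(tmp_dir)
    finally:
        shutil.rmtree(tmp_dir, ignore_errors=True)
```

Second, the error-bar convention: sample SEM (sample std over sqrt(n)) instead of population std, with token-limit zeros counted rather than dropped. The run scores below are made-up numbers.

```python
import statistics

runs = [80.0, 0.0, 70.0]                           # illustrative; the 0 from a token-limit truncation still counts
sem = statistics.stdev(runs) / len(runs) ** 0.5    # stdev() is the sample (n-1) std
print(f"{statistics.fmean(runs):.1f} +/- {sem:.1f}")  # reported as mean +/- SEM
```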
- generate_solutions.py: detect all-agent runs and skip judge availability check; read statement from local file instead of judge API in agent mode; pass api_key through to generate_agent_solution for key pool rotation
- agent_interface.py: add api_key parameter to run_agent/generate_agent_solution, forwarded to SDK subprocess env for per-run key rotation
- api_keys.py: only count API-level errors (rate limit, 5xx, auth) toward backoff; application-level failures (agent timeout, no solution) no longer penalize the key
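A sketch of the backoff split described in the api_keys.py bullet above; the category labels are illustrative names for what the commit calls API-level versus application-level failures.

```python
# Failures that indicate a problem with the key / API and should back off the key.
API_LEVEL = {"rate_limit", "server_error", "auth_error"}   # 429 / 5xx / auth
# Failures that are the run's fault, not the key's, and should not penalize it.
APPLICATION_LEVEL = {"agent_timeout", "no_solution"}


def should_penalize_key(error_kind: str) -> bool:
    """Only API-level errors count toward a key's backoff window."""
    return error_kind in API_LEVEL


# should_penalize_key("rate_limit") -> True
# should_penalize_key("agent_timeout") -> False
```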
…apter

- DEFAULT_TIMEOUT_SECONDS: 3600 → 10800 to match Harbor task.toml
- CLI --agent-timeout default: 3600 → 10800
- PARITY_TAIL / FULL_ACCESS_TAIL: drop stale article "the" before CLAUDE.md to match Harbor's "Read AGENT.md" phrasing
Summary
- Agent models are selected by the `-agent` suffix (e.g., `claude-sonnet-4-5-agent`)
- `generate_solutions.py` — same CLI, just pass an agent model name

Files
- `src/frontier_cs/gen/agent_interface.py` — core agent lifecycle: prompt construction, SDK invocation, streaming, transcript logging, timeout/cost control, solution extraction
- `src/frontier_cs/gen/agent_constants.py` — prompt templates, helper shell scripts, CLAUDE.md content
- `src/frontier_cs/models.py` — `-agent` model suffix handling in prefix/provider detection
- `algorithmic/scripts/generate_solutions.py` — agent mode integration
- `tests/test_agent_interface.py` — 18 tests

Test plan
`pytest tests/test_agent_interface.py` — 18/18 pass