112 changes: 26 additions & 86 deletions content/docs/custom-evaluators.md
@@ -1,108 +1,48 @@
---
title: "Custom Evaluators"
title: Custom Evaluators
weight: 3
description: "Write your own scoring logic in Python, JavaScript, or any language."
description: Extend agentevals with your own evaluator logic in Python.
---

Beyond the built-in metrics, you can write your own evaluators in Python, JavaScript, or any language. An evaluator is any program that reads JSON from stdin and writes a score to stdout.
If the built-in rubric does not cover your use case, you can write custom evaluators in Python and run them alongside the default scoring pipeline.

> For the comprehensive guide, see [custom-evaluators.md](https://github.com/agentevals-dev/agentevals/blob/main/docs/custom-evaluators.md) in the repository.

## Scaffold an Evaluator
## Install the evaluator SDK

```diff
-agentevals evaluator init my_evaluator
+pip install agentevals-evaluator-sdk
```

This creates a directory with boilerplate and a manifest:

```
my_evaluator/
├── my_evaluator.py # your scoring logic
└── evaluator.yaml # metadata manifest
```
## Example

You can also list supported runtimes and generate config snippets:

```bash
agentevals evaluator runtimes # show supported languages
agentevals evaluator config my_evaluator \
  --path ./evaluators/my_evaluator.py # generate config snippet
```

## Implement Scoring Logic

Your function receives an `EvalInput` with the agent's invocations and returns an `EvalResult` with a score between 0.0 and 1.0.
Create a file such as `my_eval.py`:

```diff
-from agentevals_evaluator_sdk import EvalInput, EvalResult, evaluator
+from agentevals_evaluator import Evaluator, Score

-@evaluator
-def my_evaluator(input: EvalInput) -> EvalResult:
-    scores = []
-    for inv in input.invocations:
-        # Your scoring logic here
-        score = 1.0
-        scores.append(score)
-
-    return EvalResult(
-        score=sum(scores) / len(scores) if scores else 0.0,
-        per_invocation_scores=scores,
-    )
+class PolitenessEvaluator(Evaluator):
+    name = "politeness"
+    description = "Checks whether the agent response is polite and professional"

-if __name__ == "__main__":
-    my_evaluator.run()
+    def evaluate(self, trace) -> Score:
+        text = trace.output_text.lower()
+        passed = "please" in text or "thank you" in text
+        return Score(
+            value=1.0 if passed else 0.0,
+            reasoning="Response includes polite phrasing" if passed else "Response is missing polite phrasing",
+        )
```

Install the SDK standalone with `pip install agentevals-evaluator-sdk` (no heavy dependencies).

## Reference in Eval Config

```yaml
# eval_config.yaml
evaluators:
  - name: tool_trajectory_avg_score
    type: builtin

  - name: my_evaluator
    type: code
    path: ./evaluators/my_evaluator.py
    threshold: 0.7
```
Then run it with the CLI:

```bash
agentevals run trace.json --config eval_config.yaml --eval-set eval_set.json
```

## Community Evaluators

Community evaluators can be referenced directly from the shared [evaluators repository](https://github.com/agentevals-dev/evaluators) using `type: remote`:

```yaml
evaluators:
  - name: response_quality
    type: remote
    source: github
    ref: evaluators/response_quality/response_quality.py
    threshold: 0.7
    config:
      min_response_length: 20
```

```bash
agentevals run \
  --otlp-endpoint http://localhost:6006/v1/traces \
  --evaluator my_eval.py:PolitenessEvaluator
```

Browse available community evaluators on the [Evaluators](/evaluators/) page, or contribute your own.

## Supported Languages

Evaluators can be written in any language that reads JSON from stdin and writes JSON to stdout.

| Language | Extension | SDK available |
|---|---|---|
| Python | `.py` | `pip install agentevals-evaluator-sdk` |
| JavaScript | `.js` | No SDK yet — just read stdin, write stdout |
| TypeScript | `.ts` | No SDK yet — just read stdin, write stdout |
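For languages without an SDK, the whole contract is that stdin/stdout round trip. A minimal sketch in Python illustrates it (the `invocations`, `score`, and `per_invocation_scores` field names mirror the SDK example above; the per-invocation `response` field and exact wire schema are assumptions, so check the protocol reference):

```python
"""Minimal no-SDK evaluator sketch: read EvalInput JSON on stdin, write
EvalResult JSON on stdout. Field names mirror the SDK example above; the
per-invocation "response" field is an assumption."""
import json
import sys


def score(eval_input: dict) -> dict:
    # Toy rule: pass every invocation that produced a non-empty response.
    per_invocation = [
        1.0 if inv.get("response") else 0.0
        for inv in eval_input.get("invocations", [])
    ]
    overall = sum(per_invocation) / len(per_invocation) if per_invocation else 0.0
    return {"score": overall, "per_invocation_scores": per_invocation}


if __name__ == "__main__":
    # Invoked by the runner as a subprocess: JSON in, JSON out.
    raw = sys.stdin.read()
    if raw:
        json.dump(score(json.loads(raw)), sys.stdout)
```

The same shape ports directly to JavaScript or TypeScript: parse stdin as JSON, compute scores, `JSON.stringify` the result to stdout.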

## Further Reading
## Tips

- [Custom Evaluators Guide](https://github.com/agentevals-dev/agentevals/blob/main/docs/custom-evaluators.md) — Full protocol reference
- [Community Evaluators](/evaluators/) — Browse and submit evaluators
- [Eval Set Format](https://github.com/agentevals-dev/agentevals/blob/main/docs/eval-set-format.md) — Schema and field reference for eval set JSON files
- Keep evaluators deterministic when possible
- Return short, useful reasoning strings for debugging
- Start with a binary pass/fail score before adding more complex grading
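The tips above combine naturally: a deterministic pass/fail check can live in a pure function that an `evaluate` method calls and wraps in a `Score`. The minimum-length rule and threshold below are arbitrary illustrations, not part of agentevals:

```python
def check_min_length(output_text: str, min_chars: int = 20) -> tuple[float, str]:
    """Deterministic binary check: the same input always yields the same score,
    and the reasoning string says exactly why the check passed or failed."""
    n = len(output_text.strip())
    if n >= min_chars:
        return 1.0, f"Response has {n} characters (>= {min_chars})"
    return 0.0, f"Response has only {n} characters (< {min_chars})"
```

Returning the pair via `Score(value=..., reasoning=...)` keeps the grading reproducible and the failure reason visible when you debug a run.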
48 changes: 17 additions & 31 deletions content/docs/faq.md
@@ -1,47 +1,33 @@
---
title: "FAQ"
title: FAQ
weight: 6
description: "Frequently asked questions about AgentEvals."
description: Common questions about how agentevals works and how to deploy it.
---

## How does this compare to ADK's evaluations?
## Does agentevals re-run my agent?

Unlike ADK's LocalEvalService, which couples agent execution with evaluation, agentevals only handles scoring: it takes pre-recorded traces and compares them against expected behavior using metrics like tool trajectory matching, response quality, and LLM-based judgments.
No. agentevals evaluates behavior from existing OpenTelemetry traces, so you can score what actually happened in production or staging without replaying requests.

However, if you're iterating on your agents locally, you can point them at agentevals and see rich runtime information in your browser: install the bundled wheel and explore the Local Development option in the UI.
## What do I need to get started?

## How does this compare to Bedrock AgentCore's evaluation?
You need:

AgentCore's evaluation integration (via `strands-agents-evals`) also couples agent execution with evaluation. It re-invokes the agent for each test case, converts the resulting OTel spans to AWS's ADOT format, and scores them against 4 built-in evaluators (Helpfulness, Accuracy, Harmfulness, Relevance) via a cloud API call. This means you need an AWS account, valid credentials, and network access for every evaluation.
- OpenTelemetry traces from your agent or workflow
- An evaluator model configured for scoring
- The CLI installed with `pip install agentevals-cli`

agentevals takes a different approach: it scores pre-recorded traces locally without re-running anything. It works with standard Jaeger JSON and OTLP formats from any framework, supports open-ended metrics (tool trajectory matching, LLM-based judges, custom scorers), and ships with a CLI and web UI. No cloud dependency required.
## Can I write my own evaluators?

## What trace formats are supported?
Yes. Install the SDK with `pip install agentevals-evaluator-sdk` and register your Python evaluator class with the CLI.

AgentEvals supports **OTLP** (OpenTelemetry Protocol) with `http/protobuf` and `http/json`, plus **Jaeger JSON** trace exports. Works with any OTel-instrumented framework including LangChain, Strands, Google ADK, and others.
## Where do results show up?

## Do I need to re-run my agent to evaluate it?
Results can be written back to your backend, exported in CI, or inspected in the agentevals UI.

No. Record once, score as many times as you want. AgentEvals evaluates from existing traces, so you never need to replay expensive LLM calls.
## Does this work with any tracing backend?

## What frameworks are supported?
Yes. It works with any backend that can export OpenTelemetry-compatible trace data or expose an OTLP endpoint agentevals can read from.

Any framework that emits OpenTelemetry spans works out of the box. This includes **LangChain**, **Strands**, **Google ADK**, and any other OTel-instrumented framework. The zero-code integration requires no SDK — just point your agent's OTel exporter to agentevals.
## Is there a web UI?

## Can I write custom evaluators?

Yes. Evaluators can be written in Python, JavaScript, or any language that reads JSON from stdin and writes JSON to stdout. See the [Custom Evaluators](/docs/custom-evaluators/) page for details.

A Python SDK is available (`pip install agentevals-evaluator-sdk`) for convenience, but it's not required.

## Can I use this in CI/CD?

Absolutely. The CLI is designed for CI integration. Use `--output json` for machine-readable results. See the [CLI & CI/CD section](/docs/integrations/#cli--cicd) for a GitHub Actions example.
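A small gate script can turn that JSON into a CI pass/fail signal. The results shape below is a hypothetical illustration only; check the actual `--output json` schema before relying on it:

```python
"""Hypothetical CI gate for `agentevals run ... --output json` results.
The {"results": [{"name": ..., "score": ...}]} shape is an assumption."""
import json


def failing_evaluators(results: dict, threshold: float = 0.7) -> list[str]:
    # Names of evaluators whose score is under the threshold (missing score counts as 0.0).
    return [
        r["name"]
        for r in results.get("results", [])
        if r.get("score", 0.0) < threshold
    ]


# In CI: save the run output, then exit non-zero if anything falls below threshold:
parsed = json.loads('{"results": [{"name": "politeness", "score": 0.4}]}')
# failing_evaluators(parsed) evaluates to ["politeness"]
```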

## Is there a community evaluator registry?

Yes. Browse community-contributed evaluators on the [Evaluators](/evaluators/) page, or contribute your own to the [evaluators repository](https://github.com/agentevals-dev/evaluators).

## Is AgentEvals open source?

Yes. AgentEvals is open source and available on [GitHub](https://github.com/agentevals-dev/agentevals). Contributions are welcome!
Yes — see the [UI Walkthrough](/docs/ui-walkthrough/) for the current workflow and screenshots.
66 changes: 21 additions & 45 deletions content/docs/quick-start.md
@@ -1,66 +1,42 @@
---
title: "Quick Start"
title: Quick Start
weight: 1
description: "Get up and running with AgentEvals in under 5 minutes."
description: Install agentevals, point it at your traces, and run your first evaluation.
---

## Installation
agentevals scores AI agent behavior from existing OpenTelemetry traces — no re-runs required.

Grab a wheel from the [releases page](https://github.com/agentevals-dev/agentevals/releases). The **core** wheel has the CLI and REST API. The **bundle** wheel adds streaming and the embedded web UI.
## Install the CLI

```bash
pip install agentevals-<version>-py3-none-any.whl

# For live streaming support:
pip install "agentevals-<version>-py3-none-any.whl[live]"
```

**From source** with `uv` or Nix:

```diff
-uv sync
-# or: nix develop .
+pip install agentevals-cli
```

See [DEVELOPMENT.md](https://github.com/agentevals-dev/agentevals/blob/main/DEVELOPMENT.md) for build instructions.

## CLI Quick Start
## Run your first evaluation

Run an evaluation against a sample trace:
Point the CLI at an OTLP endpoint and evaluate the traces it finds.

```diff
-uv run agentevals run samples/helm.json \
-  --eval-set samples/eval_set_helm.json \
-  -m tool_trajectory_avg_score
+agentevals run \
+  --otlp-endpoint http://localhost:6006/v1/traces \
+  --model openai/gpt-4o-mini
```

List available evaluators:
If your collector requires auth, add headers:

```diff
-uv run agentevals evaluator list
+agentevals run \
+  --otlp-endpoint https://collector.example.com/v1/traces \
+  --otlp-header "Authorization=Bearer <token>" \
+  --model openai/gpt-4o-mini
```

## Live UI Quick Start

Start the server with the embedded web UI:

```bash
agentevals serve
```

Open `http://localhost:8001` to upload traces and eval sets, select metrics, and view results with interactive span trees.

**From source** (two terminals):

```bash
uv run agentevals serve --dev # Terminal 1
cd ui && npm install && npm run dev # Terminal 2 → http://localhost:5173
```
## What happens under the hood

Live-streamed traces appear in the "Local Dev" tab, grouped by session ID.
agentevals reconstructs each traced agent interaction, sends the relevant context to an evaluator model, and writes back structured scores you can inspect in the UI or export in CI.

## What's Next
## Next steps

- [Integrations](/docs/integrations/) — Zero-code, SDK, and CLI/CI integration patterns
- [Custom Evaluators](/docs/custom-evaluators/) — Build your own evaluators
- [UI Walkthrough](/docs/ui-walkthrough/) — Deep dive into the web UI
- Learn how traces, models, and outputs are configured in [Advanced](/docs/advanced/)
- Add your own scoring logic in [Custom Evaluators](/docs/custom-evaluators/)
- View and compare runs in the [UI Walkthrough](/docs/ui-walkthrough/)