4 changes: 2 additions & 2 deletions pages/docs/evaluation/evaluation-methods/scores-via-ui.mdx
@@ -15,7 +15,7 @@ Adding scores via the UI is a manual [evaluation method](/docs/evaluation/core-c
/>

<Callout type="info">
- You can also use [Annotation Queues](docs/evaluation/evaluation-methods/annotation-queues) to streamline working through reviewing larger batches of of traces, sessions and observations.
+ You can also use [Annotation Queues](/docs/evaluation/evaluation-methods/annotation-queues) to streamline reviewing larger batches of traces, sessions, and observations.
</Callout>

## Why manually add scores via the UI?
@@ -56,7 +56,7 @@ To see your newly added scores on traces or observations, **click on** the `Scor

## Add scores to experiments

When running [experiments via UI](/docs/evaluation/experiments/experiments-via-ui) or via [SDK](/docs/evaluation/experiments/experiments-via-sdk), you can annotate results directly from the experiment compare view.


<Callout type="info">
6 changes: 1 addition & 5 deletions pages/docs/observability/sdk/troubleshooting-and-faq.mdx
@@ -16,7 +16,7 @@ If you cannot find your issue below, try [Ask AI](/docs/ask-ai), open a [GitHub
## No traces appearing

- See [Missing traces](/faq/all/missing-traces) for common reasons and solutions.
- - Confirm `tracing_enabled` is `True` and `sample_rate` is not `0.0`.
+ - Confirm `tracing_enabled` is `True` and [`sample_rate`](/docs/observability/features/sampling) is not `0.0`.
- Call `langfuse.shutdown()` (or `langfuse.flush()` in short-lived jobs) so queued data is exported.
- Enable debug logging (`debug=True` or `LANGFUSE_DEBUG="True"`) to inspect exporter output.
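
The flush/shutdown bullet above can be sketched with a stdlib-only stand-in. The `QueueingExporter` class and its method names are hypothetical illustrations, not the Langfuse client API: events sit in an in-memory buffer until an explicit flush, which is why a short-lived job that exits without calling `langfuse.flush()` or `langfuse.shutdown()` can silently drop queued data.

```python
import atexit


class QueueingExporter:
    """Hypothetical stand-in for a batching trace exporter.

    Mirrors the behavior described above: events are buffered in memory
    and only leave the process on an explicit flush.
    """

    def __init__(self):
        self.buffer = []
        self.exported = []

    def enqueue(self, event):
        # Queued, but NOT yet exported -- a process that exits now loses it.
        self.buffer.append(event)

    def flush(self):
        # Conceptually what langfuse.flush() / langfuse.shutdown() do:
        # drain the buffer so every queued event is actually exported.
        self.exported.extend(self.buffer)
        self.buffer.clear()


exporter = QueueingExporter()
atexit.register(exporter.flush)  # safety net, like shutdown() at process exit

exporter.enqueue({"name": "my-trace"})
exporter.flush()  # in a short-lived job, call this before the process ends
```

The `atexit` hook is only a fallback; an explicit flush before the process ends is the reliable pattern, since abrupt terminations (e.g. `SIGKILL`) skip exit handlers entirely.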

@@ -39,7 +39,3 @@ If you cannot find your issue below, try [Ask AI](/docs/ask-ai), open a [GitHub
## Missing traces with `@vercel/otel`

- Use the manual OpenTelemetry setup via `NodeSDK` and register the `LangfuseSpanProcessor`. The `@vercel/otel` helper does not yet support the OpenTelemetry JS SDK v2 that Langfuse depends on. See the [TypeScript instrumentation docs](/docs/observability/sdk/instrumentation#framework-third-party-telemetry) for a full example.