From 30e5f6d85b44c55d22b8b2771f731fa867536a23 Mon Sep 17 00:00:00 2001
From: aditya-mitra <55396651+aditya-mitra@users.noreply.github.com>
Date: Sun, 22 Feb 2026 16:28:04 +0530
Subject: [PATCH] docs: fix typo in scores-via-ui, link sampling docs in troubleshooting FAQ
---
pages/docs/evaluation/evaluation-methods/scores-via-ui.mdx | 4 ++--
pages/docs/observability/sdk/troubleshooting-and-faq.mdx | 6 +-----
2 files changed, 3 insertions(+), 7 deletions(-)
diff --git a/pages/docs/evaluation/evaluation-methods/scores-via-ui.mdx b/pages/docs/evaluation/evaluation-methods/scores-via-ui.mdx
index 26a959875f..5320fd456e 100644
--- a/pages/docs/evaluation/evaluation-methods/scores-via-ui.mdx
+++ b/pages/docs/evaluation/evaluation-methods/scores-via-ui.mdx
@@ -15,7 +15,7 @@ Adding scores via the UI is a manual [evaluation method](/docs/evaluation/core-c
/>
-You can also use [Annotation Queues](docs/evaluation/evaluation-methods/annotation-queues) to streamline working through reviewing larger batches of of traces, sessions and observations.
+You can also use [Annotation Queues](/docs/evaluation/evaluation-methods/annotation-queues) to streamline working through reviewing larger batches of traces, sessions and observations.
## Why manually adding scores via UI?
@@ -56,7 +56,7 @@ To see your newly added scores on traces or observations, **click on** the `Scor
## Add scores to experiments
-When running [experiments via UI](/docs/evaluation/experiments/experiments-via-ui) or via [SDK](/docs/evaluation/experiments/experiments-via-sdk), you can annotate results directly from the experiment compare view.
+When running [experiments via UI](/docs/evaluation/experiments/experiments-via-ui) or via [SDK](/docs/evaluation/experiments/experiments-via-sdk), you can annotate results directly from the experiment compare view.
diff --git a/pages/docs/observability/sdk/troubleshooting-and-faq.mdx b/pages/docs/observability/sdk/troubleshooting-and-faq.mdx
index 5a45c51c41..28707ce0c7 100644
--- a/pages/docs/observability/sdk/troubleshooting-and-faq.mdx
+++ b/pages/docs/observability/sdk/troubleshooting-and-faq.mdx
@@ -16,7 +16,7 @@ If you cannot find your issue below, try [Ask AI](/docs/ask-ai), open a [GitHub
## No traces appearing
- See [Missing traces](/faq/all/missing-traces) for common reasons and solutions.
-- Confirm `tracing_enabled` is `True` and `sample_rate` is not `0.0`.
+- Confirm `tracing_enabled` is `True` and [`sample_rate`](/docs/observability/features/sampling) is not `0.0`.
- Call `langfuse.shutdown()` (or `langfuse.flush()` in short-lived jobs) so queued data is exported.
- Enable debug logging (`debug=True` or `LANGFUSE_DEBUG="True"`) to inspect exporter output.
@@ -39,7 +39,3 @@ If you cannot find your issue below, try [Ask AI](/docs/ask-ai), open a [GitHub
## Missing traces with `@vercel/otel`
- Use the manual OpenTelemetry setup via `NodeSDK` and register the `LangfuseSpanProcessor`. The `@vercel/otel` helper does not yet support the OpenTelemetry JS SDK v2 that Langfuse depends on. See the [TypeScript instrumentation docs](/docs/observability/sdk/instrumentation#framework-third-party-telemetry) for a full example.
-
-
-
-
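A note on the `sample_rate` fix above: a sample rate of `0.0` silently drops every trace, which is why the troubleshooting checklist calls it out as a cause of "no traces appearing". The sketch below is a hypothetical probabilistic head sampler, not the Langfuse SDK's actual internals, but it illustrates the behavior:

```python
import random


def should_sample(sample_rate: float, rng: random.Random) -> bool:
    """Keep a trace with probability ``sample_rate``.

    Hypothetical illustration only -- NOT the Langfuse SDK's
    implementation. ``rng.random()`` returns a float in [0.0, 1.0),
    so a rate of 0.0 never keeps a trace and 1.0 always does.
    """
    return rng.random() < sample_rate


rng = random.Random(42)

# sample_rate=0.0: every trace is dropped -> "no traces appearing".
assert not any(should_sample(0.0, rng) for _ in range(1000))

# sample_rate=1.0: every trace is kept.
assert all(should_sample(1.0, rng) for _ in range(1000))
```

With the docs link added by this patch, readers hitting the "no traces" symptom can jump straight to the sampling page to check whether a low `sample_rate` (rather than a flushing or configuration issue) is the cause.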