) {
+  // Generic virtualization logic - works with any tree structure
+  const flattenedItems = useMemo(
+    () => flattenTree(tree, collapsedNodes, 0, [], true),
+    [tree, collapsedNodes],
+  );
+  const rowVirtualizer = useVirtualizer({
+    /* ... */
+  });
+
+  return (
+    <div style={{ height: rowVirtualizer.getTotalSize(), position: "relative" }}>
+      {rowVirtualizer.getVirtualItems().map((virtualRow) => {
+        const item = flattenedItems[virtualRow.index];
+        return renderNode({
+          node: item.node,
+          treeMetadata: {
+            depth: item.depth,
+            treeLines: item.treeLines,
+            isLastSibling: item.isLastSibling,
+          },
+          isSelected: item.node.id === selectedNodeId,
+          isCollapsed: collapsedNodes.has(item.node.id),
+          onToggleCollapse: () => onToggleCollapse(item.node.id),
+          onSelect: () => onSelectNode(item.node.id),
+        });
+      })}
+    </div>
+  );
+}
+```
+
+This three-tier structure within the presentation layer provides clear separation of concerns: orchestration components handle composition, domain components apply business rules, and pure display components offer maximum reusability.
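+To make the tier boundaries concrete, here is a minimal sketch (names hypothetical, not the actual Langfuse API) of how a domain-tier helper might translate tree state into plain props for a pure display component - the display tier receives only values, never business rules:
+
```typescript
// Hypothetical domain-tier helper: applies business rules (selection,
// collapse state) once, then hands plain values to the display tier.
interface TreeNode {
  id: string;
  name: string;
}

// Props the pure display component receives: values only, no logic.
interface DisplayRowProps {
  label: string;
  isSelected: boolean;
  isCollapsed: boolean;
}

export function toDisplayRowProps(
  node: TreeNode,
  selectedNodeId: string | null,
  collapsedNodes: Set<string>,
): DisplayRowProps {
  return {
    label: node.name,
    isSelected: node.id === selectedNodeId,
    isCollapsed: collapsedNodes.has(node.id),
  };
}
```
+
+Because the mapping is a pure function, the business rules can be unit-tested without rendering anything.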
+
+## Sidebar: Context Design
+
+Instead of a single monolithic context, we create focused contexts with clear boundaries. This isolates responsibilities and avoids unnecessary re-renders: each component subscribes only to the contexts it needs, so clicking a node doesn't re-render the preference panel, and toggling "show duration" doesn't re-render the tree.
+
+```typescript
+<ViewPreferencesProvider> {/* User preferences: show duration, costs, etc. */}
+  <TraceDataProvider> {/* Read-only data: trace, tree, nodeMap */}
+    <SelectionProvider> {/* UI state: selected node, collapsed nodes */}
+      <SearchProvider> {/* Search query and results */}
+        {children}
+      </SearchProvider>
+    </SelectionProvider>
+  </TraceDataProvider>
+</ViewPreferencesProvider>
+```
+
+**Why separate contexts?**
+
+Contexts are separated by change frequency and responsibility. Data changes rarely (only on refetch), selection changes constantly (every click), and preferences change occasionally (user toggles). Each context has clear ownership: TraceDataProvider owns data and derived structures, ViewPreferencesProvider owns display settings, and SelectionProvider owns interaction state.
+
+## Key Takeaways
+
+At Langfuse, our culture of high ownership and proactive engineering requires components that support rapid, confident changes by multiple developers. The layer separation approach we applied to the trace view addresses this need: separating concerns by responsibility and change frequency creates clear boundaries that enable engineers to navigate the codebase confidently and make changes without ripple effects.
+
+The new architecture has costs - more files and initial setup overhead. However, this increase in files can be managed through intentional directory organization, which we'll explore in [Part 2](/blog/2025-02-react-architecture-part-2) along with other code organization principles that build on these layer separations. The foundation this provides for maintainability and performance aligns with our engineering culture's requirements.
+
+_The complete trace view implementation:_
+
+- _[trace2/](https://github.com/langfuse/langfuse/tree/main/web/src/components/trace2) - Full feature_
+- _[api/](https://github.com/langfuse/langfuse/tree/main/web/src/components/trace2/api) - Data fetching layer_
+- _[lib/](https://github.com/langfuse/langfuse/tree/main/web/src/components/trace2/lib) - Pure transformation layer_
+- _[contexts/](https://github.com/langfuse/langfuse/tree/main/web/src/components/trace2/contexts) - Orchestration layer_
+
+---
+
+**Building Langfuse?** We're growing our engineering team. If you care about software architecture and maintainable code, [check out our open positions](https://langfuse.com/careers).
diff --git a/pages/blog/2025-02-react-architecture-part-2.mdx b/pages/blog/2025-02-react-architecture-part-2.mdx
new file mode 100644
index 0000000000..fa4513da61
--- /dev/null
+++ b/pages/blog/2025-02-react-architecture-part-2.mdx
@@ -0,0 +1,203 @@
+---
+title: "Production-Grade React Components Part 2: Co-Location and Pure Functions"
+date: 2025/02/12
+description: "Organizing code by feature instead of file type, and extracting business logic into testable pure functions."
+tag: engineering, react, architecture, testing
+author: Michael
+---
+
+At Langfuse, we believe in the power of engineers shipping with [extreme ownership](https://langfuse.com/handbook/how-we-work/principles). As we ship features and improvements iteratively, React components evolve. Occasionally, it makes sense to take a step back and refactor them to maintain velocity and code quality.
+
+In [Part 1](/blog/2025-02-react-architecture-part-1), we covered layer separation - organizing code into data fetching, pure transformation, context orchestration, and presentation layers. This separation improved maintainability but introduced more files: separate files for API hooks, pure transformation functions, context providers, and presentation components.
+
+With more files comes a new challenge: _where_ should these files live in your directory structure?
+
+## The Co-Location Principle
+
+The co-location principle can be summarized as: place code as close to where it's relevant as possible. Related code should live together, making it easier to find, understand, modify, and delete as a unit.
+
+The challenge is knowing how close is too close. Move things too far apart and you create friction. Move them too close and you lose clarity. Finding the right distance for each situation is key.
+
+### Feature-Level Organization
+
+Traditional folder structures organize by file type - separate folders for components, hooks, utils, and types. Understanding a feature requires opening files across multiple directories. Refactoring means modifying files scattered across folders. Deleting a feature often leaves orphaned code.
+
+We organize by feature instead. The trace view lives in a single `trace/` folder containing everything related to traces:
+
+```
+trace/
+├── api/ # Data fetching layer
+├── lib/ # Pure transformation layer
+├── contexts/ # Context orchestration layer
+├── components/ # Presentation layer
+└── config/ # Feature configuration
+```
+
+_([View actual structure](https://github.com/langfuse/langfuse/tree/main/web/src/components/trace2))_
+
+Deleting a feature means deleting one folder. Dependencies become clear through import paths - if `TraceLogView` imports from `TraceTimeline`, you see it in the path. Circular dependencies become obvious. Everything related lives together.
+
+### Component-Level Organization
+
+Within a feature, complex components get their own folders. The `TraceTimeline` component visualizes observation timing - it needs multiple sub-components, pure calculation functions, and tests. Files share a common prefix that doubles as a discovery mechanism:
+
+```
+TraceTimeline/
+├── TimelineIndex.tsx # Main component
+├── TimelineBar.tsx # Sub-components
+├── TimelineRow.tsx
+├── TimelineScale.tsx
+├── timeline-calculations.ts # Pure functions
+├── timeline-calculations.clienttest.ts # Tests co-located
+├── timeline-flattening.ts # More utilities
+└── types.ts # Type definitions
+```
+
+This naming convention provides practical benefits for both human developers and automated tools. In your IDE, typing "timeline" shows all related files instantly. For programmatic access, `grep -r "timeline-" .` or similar file search patterns find everything at once - useful when your agent needs to understand or modify a component. Tests live next to the code they verify, marked with `.clienttest.ts`, making coverage visible at a glance.
+
+### When NOT to Co-Locate
+
+Co-location is a means to an end, not an end in itself. The goal is making code easier to work with, not rigidly following a principle. Don't over-abstract by creating folder structures and file separations that add no practical value.
+
+Simple prop interfaces that live in one place stay in the component file - defining them separately adds navigation overhead without benefit. Start with things together. If they become too tightly coupled or create confusion, separate them. The structure should serve the work, not constrain it.
+
+## Co-Locating Pure Functions and Tests
+
+Co-location becomes particularly valuable when extracting business logic from React components. Pure functions can be tested without React setup, reused in different contexts, and co-located with both the components that use them and the tests that verify them.
+
+### Example: Timeline Calculations
+
+The timeline component needs to position bars, calculate widths, and select appropriate time intervals. Extracting this logic into pure functions enables testing without React, reuse in other contexts, and separation of calculation from rendering:
+
+```typescript
+// timeline-calculations.ts
+export const SCALE_WIDTH = 900;
+export const STEP_SIZE = 100;
+
+export const PREDEFINED_STEP_SIZES = [
+ 0.25, 0.5, 0.75, 1, 1.25, 1.5, 2, 2.5, 3, 4, 5, 6, 7, 8, 9, 10, 15, 20, 25,
+ 35, 40, 45, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500,
+];
+
+/**
+ * Calculate horizontal offset from trace start time
+ */
+export function calculateTimelineOffset(
+ nodeStartTime: Date,
+ traceStartTime: Date,
+ totalScaleSpan: number,
+ scaleWidth: number = SCALE_WIDTH,
+): number {
+ const timeFromStart =
+ (nodeStartTime.getTime() - traceStartTime.getTime()) / 1000;
+ return (timeFromStart / totalScaleSpan) * scaleWidth;
+}
+
+/**
+ * Calculate width of timeline bar from duration
+ */
+export function calculateTimelineWidth(
+ duration: number,
+ totalScaleSpan: number,
+ scaleWidth: number = SCALE_WIDTH,
+): number {
+ return (duration / totalScaleSpan) * scaleWidth;
+}
+
+/**
+ * Calculate appropriate step size for time axis
+ */
+export function calculateStepSize(
+ traceDuration: number,
+ scaleWidth: number = SCALE_WIDTH,
+): number {
+ const calculatedStepSize = traceDuration / (scaleWidth / STEP_SIZE);
+ return (
+ PREDEFINED_STEP_SIZES.find((step) => step >= calculatedStepSize) ||
+ PREDEFINED_STEP_SIZES[PREDEFINED_STEP_SIZES.length - 1]
+ );
+}
+```
+
+The pure functions live in `timeline-calculations.ts`, co-located with the component that uses them. _([View implementation](https://github.com/langfuse/langfuse/blob/main/web/src/components/trace2/components/TraceTimeline/timeline-calculations.ts))_
+
+The component becomes cleaner:
+
+```typescript
+// index.tsx
+import {
+  calculateTimelineOffset,
+  calculateTimelineWidth,
+  calculateStepSize,
+} from "./timeline-calculations";
+
+function TraceTimeline() {
+  const { tree } = useTraceData();
+  const traceDuration = tree.latency ?? 0;
+
+  return (
+    <div>
+      {tree.children.map((node) => {
+        const offset = calculateTimelineOffset(
+          node.startTime,
+          tree.startTime,
+          traceDuration,
+        );
+        const width = calculateTimelineWidth(node.duration, traceDuration);
+
+        return <TimelineBar key={node.id} offset={offset} width={width} />;
+      })}
+    </div>
+  );
+}
+```
+
+Tests live next to the functions they verify:
+
+```typescript
+// timeline-calculations.clienttest.ts
+import {
+ calculateTimelineOffset,
+ calculateStepSize,
+} from "./timeline-calculations";
+
+describe("calculateTimelineOffset", () => {
+ it("calculates offset for node starting 5 seconds into 10-second trace", () => {
+ const result = calculateTimelineOffset(
+ new Date("2024-01-01T00:00:05Z"),
+ new Date("2024-01-01T00:00:00Z"),
+ 10, // total span
+ 900, // scale width
+ );
+ expect(result).toBe(450); // 50% * 900px
+ });
+});
+
+describe("calculateStepSize", () => {
+ it("selects appropriate step size for 100-second trace", () => {
+ expect(calculateStepSize(100, 900)).toBe(15);
+ });
+});
+```
+
+Co-locating tests with code makes coverage visible. Looking at the `TraceTimeline/` folder, you immediately see which utilities have tests. _([View tests](https://github.com/langfuse/langfuse/blob/main/web/src/components/trace2/components/TraceTimeline/timeline-calculations.clienttest.ts))_
+
+The decision to extract depends on complexity, reusability, and testability. Complex logic, code used across components, and deterministic functions benefit from extraction. Simple logic, lifecycle-coupled code, and hook-heavy operations stay in components.
+
+## Key Takeaways
+
+At Langfuse, we prioritize [shipping above little else](https://langfuse.com/handbook/how-we-work/principles). Engineers work on end-to-end ownership - planning, implementing, and supporting features without handoffs. This requires finding and modifying code quickly, with confidence that related pieces are discovered together. The layer separation from [Part 1](/blog/2025-02-react-architecture-part-1) created clear boundaries but introduced more files - addressing where those files live became the next challenge.
+
+The co-location principle provides a pragmatic framework: place related code together. Feature-level organization means one folder per feature. Component-level organization uses name prefixes for discovery. Tests live next to code. The structure requires discipline - more nesting, consistent naming - but enables developers and coding agents to find code quickly, refactor with confidence, and navigate without tribal knowledge.
+
+Layer separation created boundaries between concerns. Co-location organized those boundaries into navigable structures. [Part 3](/blog/2025-02-react-architecture-part-3) addresses performance: handling datasets that vary by orders of magnitude within these well-organized components.
+
+_Browse the actual implementation:_
+
+- _[TraceLogView/](https://github.com/langfuse/langfuse/tree/main/web/src/components/trace2/components/TraceLogView) - Feature folder example_
+- _[timeline-calculations.ts](https://github.com/langfuse/langfuse/blob/main/web/src/components/trace2/components/TraceTimeline/timeline-calculations.ts) - Pure functions_
+- _[tree-building.ts](https://github.com/langfuse/langfuse/blob/main/web/src/components/trace2/lib/tree-building.ts) - Complex pure logic_
+
+---
+
+**Building Langfuse?** We're growing our engineering team. If you value well-organized, maintainable code, [check out our open positions](https://langfuse.com/careers).
diff --git a/pages/blog/2025-02-react-architecture-part-3.mdx b/pages/blog/2025-02-react-architecture-part-3.mdx
new file mode 100644
index 0000000000..609a4853df
--- /dev/null
+++ b/pages/blog/2025-02-react-architecture-part-3.mdx
@@ -0,0 +1,202 @@
+---
+title: "Production-Grade React Components Part 3: Adaptive Optimization"
+date: 2025/02/19
+description: "Every optimization has tradeoffs. Learn how to make the decision at runtime based on data characteristics, preserving the best experience for both small and large datasets."
+tag: engineering, react, performance, architecture
+author: Michael
+---
+
+At Langfuse, we support diverse LLM observability use cases - from simple chatbot interactions with a handful of observations to multi-hour autonomous agents generating tens of thousands of observations. We've seen production traces with over 200,000 observations, while most remain small and straightforward to render.
+
+This creates a design challenge: components need to handle both typical cases and edge cases without degrading either experience. The typical approach optimizes for one end of the spectrum - either build for small data and break at scale, or optimize for scale and add unnecessary complexity for everyone.
+
+In [Part 1](/blog/2025-02-react-architecture-part-1), we covered layer separation. In [Part 2](/blog/2025-02-react-architecture-part-2), we explored co-location and pure functions. These architectural patterns provide the structure for maintainable components. Within that architecture, we still need to make performance decisions: which optimizations to apply, and when? This post shows how to accommodate performance optimizations by making the decision at runtime based on the data you're actually processing.
+
+## Adaptive Optimization
+
+Performance optimizations solve specific problems, but each comes with costs that disproportionately affect different dataset sizes. This post examines how adaptive patterns can balance these tradeoffs across three optimizations: virtualization, lazy data loading, and Web Worker offloading. Rather than applying optimizations universally, each optimization activates only when the data characteristics justify its cost.
+
+## Pattern 1: Conditional Virtualization
+
+### Context
+
+The trace log view displays observations in a table. Each row shows observation metadata, and users can expand rows to see full input/output data.
+
+### The Tradeoff
+
+Rendering thousands of DOM elements causes performance degradation and eventually browser crashes. Virtualization solves this by rendering only the visible viewport - typically 50-100 rows regardless of total dataset size.
+
+However, virtualization removes native browser features. Cmd+F can't find text that isn't in the DOM. Accessibility tools lose context. Print preview only shows the visible portion. For traces with dozens of observations, the browser handles all DOM elements without issue, making these tradeoffs unnecessary.
+
+### The Adaptive Solution
+
+The component checks observation count before rendering:
+
+```typescript
+// Determine virtualization based on observation count
+const isVirtualized = observations.length >= LOG_VIEW_VIRTUALIZATION_THRESHOLD;
+```
+
+Below the threshold (350 observations), all rows render to the DOM. Users get full browser search, accessibility support, and native browser features. Above the threshold, virtualization activates, trading those features for the ability to handle thousands of observations without performance issues.
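+The decision itself can live in a small pure function - a sketch with an assumed threshold constant (the real value is centralized in the feature config):
+
```typescript
// Assumed threshold; the actual value lives in the trace view config.
const LOG_VIEW_VIRTUALIZATION_THRESHOLD = 350;

type RenderStrategy = "plain" | "virtualized";

// Below the threshold, plain rendering keeps native browser features
// (Cmd+F, accessibility, print). At or above it, virtualization wins.
export function chooseRenderStrategy(observationCount: number): RenderStrategy {
  return observationCount >= LOG_VIEW_VIRTUALIZATION_THRESHOLD
    ? "virtualized"
    : "plain";
}
```
+
+Keeping the comparison in one named function makes the threshold trivial to unit-test and to tune later.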
+
+## Pattern 2: Lazy Loading with Adaptive Download
+
+### Context
+
+When users expand an observation row, the component displays the full input/output payloads. For traces with thousands of observations, each with large payloads, the component needs a data fetching strategy.
+
+### The Tradeoff
+
+Fetching all observation data upfront creates thousands of network requests and can freeze the browser. Lazy loading solves this - data fetches only when a user expands a specific row. Each expansion takes 50-100ms to load.
+
+This works well for browsing but complicates download operations. Users expect "Download trace" to include all data, but most observations haven't loaded yet. Fetching everything on demand for large traces takes too long and creates a poor user experience.
+
+### The Adaptive Solution
+
+All traces use lazy loading for browsing. The download strategy adapts based on trace size:
+
+```typescript
+// Determine download strategy based on observation count
+const isDownloadCacheOnly = observations.length >= LOG_VIEW_DOWNLOAD_THRESHOLD;
+```
+
+For small traces (under 350 observations), download fetches all data before exporting. Users get complete trace exports with all input/output payloads included.
+
+For large traces (350 observations or more), download uses cache-only mode. The export includes full data for expanded observations and metadata-only for unexpanded ones. Users see a clear indicator: "Downloaded trace data (cache only)".
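+A cache-only export can be sketched as a pure merge over whatever has already been fetched - the shapes and names below are illustrative, not the actual Langfuse API:
+
```typescript
interface ObservationMeta {
  id: string;
  name: string;
}

interface ObservationFull extends ObservationMeta {
  input: unknown;
  output: unknown;
}

// Expanded observations are present in the cache and export with full
// payloads; everything else falls back to metadata-only.
export function buildCacheOnlyExport(
  observations: ObservationMeta[],
  cache: Map<string, ObservationFull>,
): Array<ObservationMeta | ObservationFull> {
  return observations.map((meta) => cache.get(meta.id) ?? meta);
}
```
+
+Because the merge never triggers network requests, the export completes instantly no matter how large the trace is.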
+
+## Pattern 3: Conditional Web Worker Offloading
+
+### Context
+
+The JSON viewer needs to build a tree structure from JSON data before rendering. This involves parsing the JSON, creating tree nodes with parent-child relationships, and computing navigation offsets for efficient lookup.
+
+### The Tradeoff
+
+Building tree structures from large JSON datasets (100,000+ nodes) can take hundreds of milliseconds on the main thread, blocking all user interaction. Moving this work to a Web Worker keeps the UI responsive.
+
+However, worker creation, data serialization, and message passing add 10-20ms of overhead. For small JSON payloads that process in a few milliseconds, the worker adds latency and displays loading spinners for operations that should feel instant.
+
+### The Adaptive Solution
+
+The component estimates tree size, then chooses between synchronous and asynchronous execution. This fits naturally into the orchestration layer from [Part 1](/blog/2025-02-react-architecture-part-1) - the layer that combines data fetching and transformations while controlling re-render boundaries.
+
+First, estimate the size without deep traversal:
+
+```typescript
+export function estimateNodeCount(data: unknown): number {
+ if (data === null || data === undefined) return 1;
+ if (Array.isArray(data)) return data.length;
+ if (typeof data === "object") return Object.keys(data).length;
+ return 1;
+}
+```
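+A few concrete inputs show how shallow the estimate is - deeply nested objects still count only their top-level keys (the function is repeated so the snippet runs standalone):
+
```typescript
// estimateNodeCount inspects only the top level - no deep traversal.
function estimateNodeCount(data: unknown): number {
  if (data === null || data === undefined) return 1;
  if (Array.isArray(data)) return data.length;
  if (typeof data === "object") return Object.keys(data).length;
  return 1;
}

console.log(estimateNodeCount([1, 2, 3])); // 3
console.log(estimateNodeCount({ a: 1, b: 2 })); // 2
console.log(estimateNodeCount({ a: { deeply: { nested: true } } })); // 1 - top-level keys only
console.log(estimateNodeCount("hello")); // 1
```
+
+The estimate is deliberately cheap: it trades accuracy for avoiding a full traversal just to decide where the full traversal should run.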
+
+Then, provide both synchronous and asynchronous paths:
+
+```typescript
+export function useTreeState(data, config) {
+ // Estimate size once
+ const dataSize = useMemo(() => estimateNodeCount(data), [data]);
+
+ // Path 1: Synchronous build for small datasets
+ const syncTree = useMemo(() => {
+ if (dataSize > TREE_BUILD_THRESHOLD) return null;
+ return buildTreeFromJSON(data, config);
+ }, [data, dataSize, config]);
+
+ // Path 2: Web Worker build for large datasets
+ const asyncTreeQuery = useQuery({
+ queryKey: ["tree-build", data, config],
+ queryFn: () => buildTreeInWorker(data, config),
+ enabled: dataSize > TREE_BUILD_THRESHOLD,
+ staleTime: Infinity,
+ });
+
+ // Return whichever path was used
+ const tree = syncTree || asyncTreeQuery.data;
+ const isBuilding =
+ dataSize > TREE_BUILD_THRESHOLD && asyncTreeQuery.isLoading;
+
+ return { tree, isBuilding };
+}
+```
+
+_([View implementation](https://github.com/langfuse/langfuse/blob/main/web/src/components/ui/AdvancedJsonViewer/hooks/useTreeState.ts))_
+
+The same algorithm runs in both paths. The only difference is execution context. Small datasets process synchronously and return instantly. Large datasets process in a Web Worker, keeping the UI responsive. Components receive an identical API regardless of which path was taken.
+
+The threshold of 10,000 nodes reflects the tradeoff between worker overhead (10-20ms) and UI blocking (hundreds of milliseconds for large datasets). Below that size, the overhead outweighs the benefit. Above that size, keeping the UI responsive justifies the cost.
+
+## Configuration Centralization
+
+All thresholds and settings live in a single configuration file:
+
+```typescript
+export const TRACE_VIEW_CONFIG = {
+ logView: {
+ virtualizationThreshold: 350,
+ downloadThreshold: 350,
+ rowHeight: {
+ collapsed: 28,
+ expanded: 150,
+ },
+ maxIndentDepth: 5,
+ batchFetch: {
+ concurrency: 10,
+ },
+ },
+} as const;
+```
+
+_([View implementation](https://github.com/langfuse/langfuse/blob/main/web/src/components/trace2/config/trace-view-config.ts))_
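+Feature code can then derive its runtime decisions from the config in one place - a sketch, assuming a consumption helper like this (the helper name is hypothetical):
+
```typescript
// Mirrors the relevant slice of the centralized config shown above.
const TRACE_VIEW_CONFIG = {
  logView: {
    virtualizationThreshold: 350,
    downloadThreshold: 350,
  },
} as const;

// Derive both adaptive decisions from the centralized thresholds,
// so no component hard-codes a magic number.
export function deriveLogViewFlags(observationCount: number) {
  const { virtualizationThreshold, downloadThreshold } =
    TRACE_VIEW_CONFIG.logView;
  return {
    isVirtualized: observationCount >= virtualizationThreshold,
    isDownloadCacheOnly: observationCount >= downloadThreshold,
  };
}
```
+
+Centralizing the thresholds means tuning them is a one-line change, and every adaptive decision stays consistent with the others.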
+
+## When to Use Adaptive Patterns
+
+Adaptive optimization makes sense when:
+
+**Data size varies significantly** - If 95% of your users have datasets under 100 items but 5% have datasets over 10,000 items, you have variance worth addressing. If everyone's datasets are similar in size, optimize for that size.
+
+**Optimization has meaningful tradeoffs** - If the optimization only improves performance without removing features, apply it universally. Adaptive patterns are for cases where the optimization helps large datasets but hurts small ones.
+
+**Threshold is measurable and stable** - The decision point (number of rows, data size, nesting depth) should be something you can calculate quickly and reliably. If the threshold depends on complex heuristics or changes frequently, the pattern adds more complexity than value.
+
+**Maintenance cost is justified** - Running two code paths means testing two scenarios. If 99% of users hit the "small data" path, the benefit of handling the 1% edge case needs to outweigh the maintenance burden.
+
+## Tradeoffs in Practice
+
+For traces under the threshold, users get full browser search, expand all functionality, and complete downloads. The UI behaves like a standard web page.
+
+For traces above the threshold, virtualization activates. Users lose browser search and expand all, but rendering and scrolling remain smooth. Downloads use the cache-only approach.
+
+For traces with thousands of observations, all optimizations activate. These traces would be unusable without virtualization and lazy loading. The feature limitations are acceptable tradeoffs for making the traces viewable.
+
+The alternative approaches create problems:
+
+Optimizing only for large data means most users lose features and see loading spinners for operations that could be instant.
+
+Optimizing only for small data means traces with thousands of observations cause browser crashes or long freezes.
+
+## Conclusion
+
+This concludes our three-part series on production-grade React components at Langfuse:
+
+- [Part 1](/blog/2025-02-react-architecture-part-1) established layer separation - organizing code into data fetching, pure transformation, context orchestration, and presentation layers
+- [Part 2](/blog/2025-02-react-architecture-part-2) covered co-location and pure functions - keeping related code together and extracting testable business logic
+- Part 3 introduced adaptive optimization - making performance decisions at runtime based on actual data characteristics
+
+These patterns work together. Layer separation provides clear boundaries for where optimizations apply. Co-location keeps threshold logic near the code it affects. Pure functions make performance testing straightforward.
+
+At Langfuse, our engineering culture emphasizes high ownership and shipping with confidence. Engineers work across the full stack, from database queries to React components. The patterns in this series support that culture by making components predictable, maintainable, and capable of handling the extreme variance we see in production LLM applications.
+
+When developers use Langfuse to debug their applications, they don't know whether they'll open a trace with 10 observations or 10,000 observations. Our job is to make both cases work well. Adaptive optimization lets us preserve the best possible experience for the majority while gracefully handling the edge cases that would otherwise make the product unusable.
+
+_View the complete implementations:_
+
+- _[useTreeState.ts](https://github.com/langfuse/langfuse/blob/main/web/src/components/ui/AdvancedJsonViewer/hooks/useTreeState.ts) - Adaptive tree building_
+- _[TraceLogView.tsx](https://github.com/langfuse/langfuse/blob/main/web/src/components/trace2/components/TraceLogView/TraceLogView.tsx) - Threshold-based virtualization_
+- _[trace-view-config.ts](https://github.com/langfuse/langfuse/blob/main/web/src/components/trace2/config/trace-view-config.ts) - Centralized thresholds_
+- _[trace2/](https://github.com/langfuse/langfuse/tree/main/web/src/components/trace2) - Full application of all three parts_
+
+---
+
+**Building Langfuse?** We're growing our engineering team. If you care about performance, user experience, and pragmatic engineering, [check out our open positions](https://langfuse.com/careers).