
feat: Slack file attachments, file context in AI, probe v0.6.0-rc240#375

Merged
buger merged 5 commits into main from feat/enable-execute-plan
Feb 18, 2026
Conversation


@buger buger commented Feb 17, 2026

Summary

  • Slack file attachments via --- filename.ext --- delimiters: AI output containing --- report.csv --- sections gets extracted, uploaded as Slack file snippets via files.uploadV2, and replaced with placeholders in the message text
  • File attachment context in AI: Slack file_share messages now thread file metadata (name, mimetype, size, URL) through the full pipeline into the AI's XML conversation context
  • JSON recovery for output buffer trailing content: When ProbeAgent appends output buffer data after JSON, the parser now recovers the JSON and preserves the trailing content in the text field
  • Probe v0.6.0-rc240: Fixes output buffer destruction during schema correction (Output buffer destroyed by recursive answer() during schema correction probe#423)

Changes

Slack file upload pipeline

  • src/slack/markdown.ts — extractFileSections() / replaceFileSections() to parse --- filename.ext --- delimiters
  • src/frontends/slack-frontend.ts — extract file sections, upload via Slack API, replace with placeholders; normalize literal \n from DSL output
  • src/slack/client.ts — apiForm() for form-urlencoded Slack API calls (required by file upload endpoints); return files in fetchThreadReplies
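The delimiter parsing described above can be sketched as follows. The function names come from the PR; the regex matches the one cited later in the review, but the exact signatures and return shape are assumptions for illustration:

```typescript
interface FileSection {
  filename: string;
  content: string;
  start: number; // index of the opening delimiter line
  end: number;   // index just past the section content
}

const DELIM_RE = /^--- ([\w][\w.\-]*\.\w+) ---$/gm;

function extractFileSections(text: string): FileSection[] {
  const delims: { filename: string; start: number; end: number }[] = [];
  for (const m of text.matchAll(DELIM_RE)) {
    delims.push({ filename: m[1], start: m.index!, end: m.index! + m[0].length });
  }
  // Each delimiter starts a section that runs until the next delimiter or EOF.
  return delims.map((d, i) => {
    const sectionEnd = i + 1 < delims.length ? delims[i + 1].start : text.length;
    return {
      filename: d.filename,
      content: text.slice(d.end, sectionEnd).trim(),
      start: d.start,
      end: sectionEnd,
    };
  });
}

function replaceFileSections(
  text: string,
  sections: FileSection[],
  placeholder: (s: FileSection) => string
): string {
  // Replace back-to-front so earlier string indices stay valid.
  let out = text;
  for (const s of [...sections].reverse()) {
    out = out.slice(0, s.start) + placeholder(s) + out.slice(s.end);
  }
  return out;
}
```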

File attachment context

  • src/types/bot.tsSlackFileAttachment interface, files field on NormalizedMessage
  • src/slack/adapter.ts — thread files through SlackMessage, fetchConversation, normalizeSlackMessage
  • src/slack/socket-runner.ts — pass ev.files, allow file_share subtype
  • src/ai-review-service.tsformatFilesXml() renders file metadata in AI XML context
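A hypothetical sketch of rendering file metadata into the AI's XML context. The SlackFileAttachment and formatFilesXml names come from the PR; the field names follow Slack's file object, but the exact XML shape is an assumption (escaping is included here because the security review below flags a no-op escapeXml):

```typescript
interface SlackFileAttachment {
  id: string;
  name?: string;
  mimetype?: string;
  size?: number;
  url_private?: string;
}

// Escape the five XML special characters before interpolation.
function escapeXml(text: string): string {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&apos;');
}

function formatFilesXml(files: SlackFileAttachment[]): string {
  if (files.length === 0) return '';
  const items = files
    .map(
      f =>
        `  <file name="${escapeXml(f.name ?? '')}" mimetype="${escapeXml(
          f.mimetype ?? ''
        )}" size="${f.size ?? 0}" />`
    )
    .join('\n');
  return `<files>\n${items}\n</files>`;
}
```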

JSON recovery

  • src/ai-review-service.ts — recover valid JSON prefix when trailing output buffer content causes parse errors
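A minimal sketch of the recovery path, assuming V8-style parse error wording ("... JSON at position N"); the function and variable names are illustrative, not the PR's actual code:

```typescript
function parseWithTrailingRecovery(raw: string): { data: any; trailing?: string } {
  try {
    return { data: JSON.parse(raw) };
  } catch (e) {
    const msg = e instanceof Error ? e.message : String(e);
    // The PR targets the "after JSON at position N" variant; older V8
    // wordings also end in "JSON at position N", so match the common tail.
    const m = msg.match(/JSON at position (\d+)/);
    if (m) {
      const pos = parseInt(m[1], 10);
      if (!isNaN(pos) && pos > 0 && pos <= raw.length) {
        const data = JSON.parse(raw.slice(0, pos)); // may still throw; caller handles
        const trailing = raw.slice(pos).trim();
        // Preserve the trailing output buffer content in the text field.
        if (trailing && typeof data === 'object' && data !== null) {
          data.text = data.text ? `${data.text}\n${trailing}` : trailing;
        }
        return { data, trailing };
      }
    }
    throw e;
  }
}
```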

Other

  • package.json@probelabs/probe ^0.6.0-rc240

Test plan

  • tests/unit/slack-file-sections.test.ts — 25 tests for extraction, replacement, edge cases, literal \n normalization
  • tests/unit/providers/mcp-allowed-tools-mismatch.test.ts — allowed tools tests
  • All 51 pre-commit tests pass

🤖 Generated with Claude Code

buger and others added 4 commits February 16, 2026 07:52
Support the execute_plan DSL orchestration tool gated by enableExecutePlan
boolean flag, matching the existing enableDelegate pattern.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Evaluate ai_allowed_tools_js at step level to dynamically compute the
allowedTools list for AI checks, following the same pattern as
ai_mcp_servers_js and ai_bash_config_js.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…robe v0.6.0-rc240

- Add extractFileSections/replaceFileSections to parse --- filename.ext --- delimiters
- Upload extracted file content as Slack file snippets via files.uploadV2
- Normalize literal \n from DSL output buffer before file section extraction
- Add apiForm() for form-urlencoded Slack API calls (required by file upload endpoints)
- Allow file_share subtype in socket-runner so users can share attachments
- Thread file attachment metadata (SlackFileAttachment) through the full pipeline:
  SlackClient → SlackAdapter → NormalizedMessage → AI XML context
- Add formatFilesXml() to render file metadata in AI conversation context
- Add JSON recovery for ProbeAgent output buffer trailing content
- Update @probelabs/probe to v0.6.0-rc240 (fixes output buffer destruction
  during schema correction — see probelabs/probe#423)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@probelabs
Copy link
Contributor

probelabs bot commented Feb 17, 2026

PR Overview: Slack File Attachments, File Context in AI, Probe v0.6.0-rc245

Summary

This PR introduces four major changes:

  1. Slack file attachments via --- filename.ext --- delimiters - AI output containing delimited file sections gets extracted, uploaded to Slack as file snippets via files.uploadV2, and replaced with placeholders in the message text

  2. File attachment context in AI - Slack file_share messages now thread file metadata (name, mimetype, size, URL) through the full pipeline into the AI's XML conversation context

  3. JSON recovery for output buffer trailing content - When ProbeAgent appends output buffer data after JSON, the parser now recovers the JSON and preserves the trailing content in the text field

  4. Probe v0.6.0-rc245 - Upgraded from rc233, includes output buffer fixes and SandboxJS dependency changes

Files Changed Analysis

25 files changed (+2680/-219 lines)

Core Feature Files

  • src/slack/markdown.ts (+89 lines) - New extractFileSections() and replaceFileSections() functions for parsing --- filename.ext --- delimiters
  • src/frontends/slack-frontend.ts (+48 lines) - File extraction, upload via Slack API, placeholder replacement, literal normalization
  • src/slack/client.ts (+26/-4) - New apiForm() for form-urlencoded Slack API calls (required by file upload endpoints), files in fetchThreadReplies
  • src/slack/adapter.ts (+25/-3) - Thread files through SlackMessage, fetchConversation, normalizeSlackMessage
  • src/slack/socket-runner.ts (+8/-2) - Pass ev.files, allow file_share subtype
  • src/types/bot.ts (+13 lines) - New SlackFileAttachment interface, files field on NormalizedMessage

AI/Review Service

  • src/ai-review-service.ts (+115/-23) - New formatFilesXml() for file metadata in AI XML context, JSON recovery logic for trailing output buffer content, enableExecutePlan config support, <<<RAW_OUTPUT>>> block extraction

MCP/Policy Tool Fixes

  • src/providers/ai-check-provider.ts (+123/-9) - New evaluateAllowedToolsJs() for dynamic tool filtering, intersectAllowedTools() for policy intersection, workflow tool name override support
  • src/providers/workflow-tool-executor.ts (+22/-3) - name override in WorkflowToolReference and createWorkflowToolDefinition(), RAW_OUTPUT wrapping for workflow tool results
  • src/providers/workflow-check-provider.ts (+21 lines) - _rawOutput propagation from workflow steps

Configuration

  • src/generated/config-schema.ts (+437/-83) - Schema updates for enableExecutePlan, sandbox configs, scheduler HA/storage, policy roles
  • src/types/config.ts (+4/-1) - enableExecutePlan in AIProviderConfig
  • src/enterprise/policy/policy-input-builder.ts (+2 lines) - enableExecutePlan in OPA input

TUI/Trace Viewer Improvements

  • src/tui/components/trace-viewer.ts (+455/-57) - Major rewrite: interactive navigation (j/k, Enter), detail view, span enrichment for AI/tool calls, render coalescing
  • src/tui/chat-tui.ts (+49/-16) - Render coalescing (_scheduleRender), pending logs cap, input bar pause/resume
  • src/tui/components/input-bar.ts (+28/-4) - New pause()/resume() methods for non-chat tabs

Other Fixes

  • src/state-machine/states/level-dispatch.ts (+9/-1) - Output template schema detection fix
  • src/telemetry/file-span-exporter.ts (+1/-1) - Parent span ID serialization fix

Tests

  • tests/unit/slack-file-sections.test.ts (+332 lines) - 25 tests for extraction, replacement, edge cases, literal normalization
  • tests/unit/providers/mcp-allowed-tools-mismatch.test.ts (+343 lines) - Tests for MCP tool name vs allowedTools mismatch bug
  • tests/unit/raw-output-extraction.test.ts (+133 lines) - Tests for <<<RAW_OUTPUT>>> extraction
  • tests/unit/probe-dsl-chunkbykey.test.ts (+353 lines) - Tests for DSL chunkByKey and LLM auto-parse
  • tests/unit/providers/ai-check-provider.test.ts (+38 lines) - enableExecutePlan flag test

Dependencies

  • package.json & package-lock.json - @probelabs/probe upgraded from ^0.6.0-rc233 to ^0.6.0-rc245

Architecture & Impact Assessment

What This PR Accomplishes

  1. Slack File Upload Pipeline: Enables AI agents to generate structured data files (CSV, JSON, etc.) that are automatically uploaded to Slack as file attachments instead of cluttering the message text.

  2. File Context Awareness: When users share files in Slack, the AI now receives file metadata in its conversation context, enabling file-aware responses.

  3. Robust JSON Parsing: Fixes a critical bug where ProbeAgent's output buffer appending would corrupt JSON responses.

  4. MCP Tool Name Fix: Resolves a mismatch between workflow tool registration names and allowedTools whitelist entries.

  5. DSL Output Propagation: RAW_OUTPUT blocks from execute_plan now bypass LLM processing and reach the user directly.

Key Technical Changes

  1. File Section Extraction (src/slack/markdown.ts):

    • Regex-based parsing of --- filename.ext --- delimiters
    • Back-to-front replacement to preserve string indices
    • Handles unclosed sections, multiple files, edge cases
  2. Slack File Upload (src/frontends/slack-frontend.ts):

    • Extracts file sections before Mermaid processing
    • Normalizes literal escape sequences from DSL output
    • Uploads via files.uploadV2 API (requires apiForm() for form-urlencoded)
    • Replaces with placeholders indicating success/failure
  3. File Metadata Pipeline:

    • SlackSocketRunnerSlackAdapterAIReviewService
    • SlackFileAttachment interface carries metadata through the stack
    • formatFilesXml() renders XML fragment for AI context
  4. JSON Recovery (src/ai-review-service.ts):

    • Detects "after JSON at position N" parse errors
    • Extracts valid JSON prefix
    • Appends trailing content to text field
  5. RAW_OUTPUT Extraction:

    • Extracts <<<RAW_OUTPUT>>> blocks before JSON parsing
    • Attaches as _rawOutput field on output object
    • Propagates through workflow tool results
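The RAW_OUTPUT extraction in step 5 can be sketched as below. Only the `<<<RAW_OUTPUT>>>` opening marker appears in the PR; the closing marker and the return shape are assumptions for illustration:

```typescript
const RAW_OUTPUT_RE = /<<<RAW_OUTPUT>>>\n?([\s\S]*?)\n?<<<END_RAW_OUTPUT>>>/g;

function extractRawOutput(response: string): { cleaned: string; rawOutput?: string } {
  const blocks: string[] = [];
  // Single pass: collect each block's body and strip it from the response
  // before the JSON parsing step runs.
  const cleaned = response.replace(RAW_OUTPUT_RE, (_match, body: string) => {
    blocks.push(body);
    return '';
  });
  return blocks.length > 0
    ? { cleaned: cleaned.trim(), rawOutput: blocks.join('\n') }
    : { cleaned };
}
```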

Affected System Components

graph TD
    A[AI Agent Output] --> B[slack-frontend.ts]
    B --> C[extractFileSections]
    C --> D[files.uploadV2 API]
    D --> E[Slack File Attachment]
    E --> F[Placeholder in Message]
    
    G[User File Share] --> H[SlackSocketRunner]
    H --> I[SlackAdapter]
    I --> J[NormalizedMessage.files]
    J --> K[AIReviewService.formatFilesXml]
    K --> L[AI Context XML]
    
    M[ProbeAgent Response] --> N[JSON.parse]
    N -->|Parse Error| O[Recovery Logic]
    O --> P[Extract JSON Prefix]
    O --> Q[Append Trailing to text]
    
    R[Workflow Tool] --> S[executeWorkflowAsTool]
    S --> T[_rawOutput Detection]
    T --> U[<<<RAW_OUTPUT>>> Wrapping]
    U --> V[ProbeAgent Extraction]

MCP Tool Name Mismatch Fix

Problem: Workflow tools registered via ai_mcp_servers_js used workflow.id as the tool name, but allowedTools whitelists used the server entry name (e.g., "code-explorer"), causing tools to be filtered out.

Solution: Added name override to WorkflowToolReference so tools register under the server entry name that matches allowedTools entries.
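The fix can be illustrated with a small sketch. The `name` and `__workflowId` fields come from the PR description; everything else here is assumed for illustration:

```typescript
interface WorkflowToolReference {
  __workflowId: string; // internal workflow identifier, e.g. "tyk-code-talk"
  name?: string;        // override: the server entry name, e.g. "code-explorer"
}

function registeredToolName(ref: WorkflowToolReference): string {
  // Old behavior registered under __workflowId, which never matched
  // allowedTools entries keyed by the server entry name.
  return ref.name ?? ref.__workflowId;
}

function filterByAllowedTools(
  refs: WorkflowToolReference[],
  allowed: string[]
): WorkflowToolReference[] {
  const allowedSet = new Set(allowed);
  return refs.filter(r => allowedSet.has(registeredToolName(r)));
}
```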

graph LR
    A[ai_mcp_servers_js] -->|server entry name| B[code-explorer]
    B -->|workflow| C[tyk-code-talk]
    C -->|old: workflow.id| D[tool: tyk-code-talk]
    D -->|allowedTools mismatch| E[❌ Filtered out]
    
    B -->|new: name override| F[tool: code-explorer]
    F -->|allowedTools match| G[✅ Available]

Scope Discovery & Context Expansion

Related Files (Inferred)

Based on the changes, the following areas are likely affected:

  1. Slack Integration Tests: tests/integration/slack*.test.ts - May need updates for file upload scenarios
  2. AI Review Tests: tests/unit/ai-review-service.test.ts - JSON recovery behavior
  3. MCP Server Tests: tests/unit/mcp*.test.ts - Tool name registration
  4. Config Defaults: defaults/visor.yaml, defaults/*.yaml - May need enableExecutePlan examples
  5. Documentation: docs/slack.md, docs/ai-configuration.md - File attachment feature docs

Configuration Impact

  • New ai.enableExecutePlan boolean flag enables DSL orchestration tool
  • ai_allowed_tools_js can now dynamically compute allowed tools
  • Sandbox configuration schema expanded with resource limits, cache, network options
  • Scheduler HA configuration added for multi-node deployments

Breaking Changes

None - all changes are additive or bug fixes.

Performance Considerations

  • File uploads add latency to Slack responses (network I/O)
  • TUI trace viewer now caps at 500 spans to prevent rendering slowdowns
  • Render coalescing (50ms debounce) prevents TUI freezing under heavy output
  • Sequential file uploads could be parallelized for better performance

Review Notes

Key Areas to Review

  1. Security: File upload validation (filename sanitization, size limits, content type validation)
  2. Error Handling: File upload failures, JSON recovery edge cases
  3. Performance: Sequential vs parallel file uploads, large trace file handling
  4. Type Safety: undefined as unknown as type assertions
  5. Test Coverage: Edge cases for file extraction, malformed input handling

Testing

  • 51 pre-commit tests pass
  • 25 new tests for file section extraction
  • New tests for the MCP tool name mismatch (+343 lines)
  • New tests for RAW_OUTPUT extraction (+133 lines)
  • New tests for DSL chunkByKey (+353 lines)

Metadata
  • Review Effort: 4 / 5
  • Primary Label: feature

Powered by Visor from Probelabs

Last updated: 2026-02-18T11:11:03.424Z | Triggered by: pr_updated | Commit: 2bf89a3

💡 TIP: You can chat with Visor using /visor ask <your question>


probelabs bot commented Feb 17, 2026

Security Issues (6)

Severity Location Issue
🔴 Critical visor/src/ai-review-service.ts:1354
The escapeXml() method is a no-op that returns input unchanged, allowing XML injection attacks in formatFilesXml(). File metadata (name, mimetype, filetype, URL) from Slack messages is directly interpolated into XML without proper escaping, enabling attackers to inject malicious XML entities or break XML structure.
💡 Suggestion: Implement proper XML escaping in the escapeXml() method to escape &, <, >, ", and ' characters. The script-check-provider.ts already has a correct implementation that should be reused.
🔧 Suggested Fix
private escapeXml(text: string): string {
  if (text == null) return '';
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&apos;');
}
🟠 Error visor/src/slack/markdown.ts:305
The extractFileSections() function uses a regex /^--- ([\w][\w.\-]*\.\w+) ---$/gm that allows arbitrary filenames without validation. An attacker could inject path traversal sequences (../) or absolute paths in the filename delimiter, potentially causing files to be written to unintended locations when uploaded.
💡 Suggestion: Add filename validation to reject paths containing directory traversal sequences (../), absolute paths (/), or special characters. Restrict filenames to alphanumeric characters, hyphens, underscores, and single dots for extension.
🔧 Suggested Fix
const delimRegex = /^--- ([a-zA-Z0-9][a-zA-Z0-9_-]*\.[a-zA-Z0-9]{1,10}) ---$/gm;
🟠 Error visor/src/frontends/slack-frontend.ts:614
File content from extractFileSections() is converted to Buffer and uploaded without size limits or content validation. An attacker could generate arbitrarily large file sections causing resource exhaustion or upload malicious content (executables, scripts) to Slack.
💡 Suggestion: Add a maximum file size limit (e.g., 10MB) and validate file content type against expected extensions. Reject files exceeding limits or with suspicious content patterns.
🔧 Suggested Fix
const MAX_FILE_SIZE = 10 * 1024 * 1024; // 10MB
const buffer = Buffer.from(section.content, 'utf-8');
if (buffer.length > MAX_FILE_SIZE) {
  ctx.logger.warn(`[slack-frontend] file ${section.filename} exceeds size limit (${buffer.length} bytes)`);
  continue;
}
🟡 Warning visor/src/slack/adapter.ts:103
Slack file attachment URLs (url_private, permalink) are passed through without validation. These URLs may contain sensitive tokens or authentication parameters that could be leaked to AI models or logs, exposing private file access credentials.
💡 Suggestion: Sanitize URLs to remove sensitive query parameters (e.g., tokens, secrets) before including in AI context. Consider redacting url_private entirely or replacing it with a placeholder indicating a file was shared.
🔧 Suggested Fix
normalized.files = msg.files.map(
  (f: any): SlackFileAttachment => ({
    id: String(f.id || ''),
    name: f.name,
    mimetype: f.mimetype,
    filetype: f.filetype,
    // Redact sensitive URLs - only indicate file exists
    url_private: '[REDACTED]',
    permalink: '[REDACTED]',
    size: f.size,
  })
);
🟡 Warning visor/src/providers/ai-check-provider.ts:2108
The evaluateAllowedToolsJs() function executes user-provided JavaScript expressions with access to sensitive context (outputs, env, memory). While a sandbox is used, the buildSafeEnv() method is not shown in this diff, and the sandbox may not adequately isolate malicious code that could access process.env or modify memory state.
💡 Suggestion: Ensure buildSafeEnv() properly sanitizes environment variables to remove API keys, secrets, and tokens. Add timeout limits to sandbox execution and monitor for suspicious patterns (infinite loops, memory bombs).
🔧 Suggested Fix
N/A - Review buildSafeEnv() implementation to ensure proper sanitization
🟡 Warning visor/src/ai-review-service.ts:2520
JSON recovery logic uses parseInt() on regex match results without validation before substring operations. A maliciously crafted error message could manipulate the position value to extract arbitrary portions of the sanitized response, potentially exposing sensitive data from other parts of the JSON response.
💡 Suggestion: Validate that the parsed position is within valid bounds (0 to sanitizedResponse.length) before using it in substring operations. Reject obviously invalid positions (negative, beyond string length).
🔧 Suggested Fix
const pos = parseInt(trailingMatch[1], 10);
if (isNaN(pos) || pos < 0 || pos > sanitizedResponse.length) {
  log(`⚠️ Invalid JSON position in error message: ${trailingMatch[1]}`);
  // Fall through to plain text handling
} else {
  try {
    reviewData = JSON.parse(sanitizedResponse.substring(0, pos));
  } catch {
    // Recovery failed; fall through to plain text handling
  }
}

Architecture Issues (10)

Severity Location Issue
🟢 Info src/providers/workflow-tool-executor.ts:45
The 'name' override field in WorkflowToolReference is a workaround for the MCP tool name mismatch bug. While it fixes the immediate problem, it creates an inconsistency where the registered tool name (name) differs from the workflow identifier (__workflowId). This dual identity could cause confusion in debugging and logging.
💡 Suggestion: Consider whether the tool registration should always use the server entry name consistently, or if workflow.id should be used throughout. The dual identity (name vs __workflowId) is a workaround that adds complexity.
🟢 Info src/providers/ai-check-provider.ts:2094
The evaluateAllowedToolsJs function introduces dynamic JavaScript evaluation for tool filtering, which is a powerful pattern but also a security and complexity concern. This is the third dynamic JS evaluation pattern in the codebase (along with ai_mcp_servers_js and ai_custom_tools_js), suggesting a trend toward configuration-as-code that may be hard to debug and test.
💡 Suggestion: Consider consolidating these dynamic evaluation patterns into a single, well-documented framework with clear security boundaries, debugging support, and test utilities. The proliferation of separate _js evaluation functions increases maintenance burden.
🟢 Info src/tui/chat-tui.ts:122
The _scheduleRender coalescing (50ms debounce) is a performance workaround for blessed's render() being called too frequently. While effective, this is a band-aid over the underlying issue: the code calls screen.render() in too many places (30+ calls throughout the file).
💡 Suggestion: Consider a more fundamental solution: (a) implement a reactive state management system that batches render calls automatically, or (b) refactor to reduce the number of explicit render() calls by using blessed's built-in update mechanisms more effectively.
🟢 Info src/tui/components/trace-viewer.ts:44
The MAX_DISPLAY_SPANS (500) and MAX_TREE_DEPTH (10) constants are arbitrary special-case limits. While they prevent performance issues, they also silently drop data without clear user feedback. The warning message only shows when spans are dropped, but there's no indication of depth limiting.
💡 Suggestion: Consider making these limits configurable and providing better user feedback when data is truncated. The current approach of silently capping spans/depth could hide important information during debugging.
🟢 Info src/tui/components/trace-viewer.ts:59
The trace viewer maintains significant interactive state (flatLines, selectableIndices, selectedPos, detailMode, detailSpan) which increases complexity. The render() method builds flatLines, then renderContent() uses it, creating a two-phase rendering pattern that's harder to reason about.
💡 Suggestion: Consider separating the view model (flatLines structure) from the rendering logic more clearly. The current mixing of data transformation and rendering in render() makes it harder to test and modify independently.
🟡 Warning src/slack/markdown.ts:296
The extractFileSections function treats a second delimiter with the same filename as starting a NEW section instead of closing the previous one. This creates a special case where '--- report.csv ---\ncontent\n--- report.csv ---\nmore' produces two sections instead of one closed section. The test explicitly expects this behavior, but it's counterintuitive and creates ambiguity about whether delimiters are open/close markers or section starters.
💡 Suggestion: Consider making the delimiter behavior explicit: either (a) treat same-name delimiters as closing markers, or (b) document clearly that delimiters ALWAYS start new sections and sections are implicitly closed by the next delimiter or EOF. The current hybrid approach creates confusion.
🟡 Warning src/frontends/slack-frontend.ts:602
The literal \n normalization (processedText.replace(/\\n/g, '\n')) is a special-case workaround for AI-generated DSL output. This creates a tight coupling between the frontend and ProbeAgent's output buffer behavior. If ProbeAgent changes how it handles newlines, this normalization will break.
💡 Suggestion: Move this normalization into the ProbeAgent output buffer handling or make it a configurable option. The frontend shouldn't need to know about ProbeAgent's internal string escaping behavior.
🟡 Warning src/ai-review-service.ts:2448
The JSON recovery logic for trailing content is a special-case workaround for ProbeAgent's output buffer appending behavior. The code specifically looks for 'after JSON at position N' error messages and attempts recovery. This creates tight coupling with ProbeAgent's implementation details.
💡 Suggestion: Consider a more general solution: (a) ProbeAgent should use a structured format (like multipart messages) instead of appending raw content after JSON, or (b) define a clear protocol for how output buffer content is communicated (e.g., separate field, envelope format). The current regex-based recovery is fragile.
🟡 Warning src/slack/adapter.ts:90
The file attachment metadata flow is inconsistent: SlackSocketRunner passes ev.files to fetchConversation, which passes it to normalizeSlackMessage, which maps it to SlackFileAttachment. However, the thread cache (CachedThread interface) doesn't explicitly declare files as part of the cached data structure, potentially causing cache invalidation issues.
💡 Suggestion: Ensure the CachedThread interface explicitly accounts for files in its size calculation and invalidation logic. The current implementation may not properly handle cache updates when files are added to messages.
🟡 Warning src/providers/ai-check-provider.ts:2225
The intersectAllowedTools method duplicates logic that ProbeAgent already performs internally for resolving glob patterns. The comment 'ProbeAgent resolves those patterns' acknowledges this, but the method still performs glob detection to decide whether to intersect. This creates a fragile coupling: if ProbeAgent's glob handling changes, this code may break.
💡 Suggestion: Consider either (a) always passing configTools through to ProbeAgent and letting it handle all resolution, or (b) defining a clear contract about what Visor is responsible for vs. what ProbeAgent handles. The current hybrid approach is fragile.

Performance Issues (9)

Severity Location Issue
🟢 Info visor/src/tui/chat-tui.ts:248
pendingLogs.slice(-ChatTUI.MAX_PENDING_LOGS) creates a new array copy every time logs are capped. For large log volumes, this causes repeated memory allocations.
💡 Suggestion: Use a circular buffer or drop oldest entries by shifting instead of slicing. Consider using a deque-like structure for better performance.
🔧 Suggested Fix
// Use shift to remove oldest entries instead of slice
while (this.pendingLogs.length > ChatTUI.MAX_PENDING_LOGS) {
  this.pendingLogs.shift();
}
🟢 Info visor/src/tui/components/trace-viewer.ts:357
When spans exceed MAX_DISPLAY_SPANS, the code uses this.spans.slice(-TraceViewer.MAX_DISPLAY_SPANS) which creates a copy of 500 span objects. For large trace files with thousands of spans, this repeated copying on every render is wasteful.
💡 Suggestion: Store the display slice as a cached property and only update it when new spans are added, or use a circular buffer approach to avoid copying.
🔧 Suggested Fix
// Cache the display spans to avoid repeated slicing
if (!this._displaySpans || this._displaySpans.length < this.spans.length) {
  this._displaySpans = this.spans.length > TraceViewer.MAX_DISPLAY_SPANS
    ? this.spans.slice(-TraceViewer.MAX_DISPLAY_SPANS)
    : this.spans;
}
const displaySpans = this._displaySpans;
🟢 Info visor/src/tui/components/trace-viewer.ts:617
The render() method builds flatLines array by repeatedly pushing objects with string concatenation (padEnd, template literals). For large trace trees, this creates many temporary string objects.
💡 Suggestion: Consider using a StringBuilder pattern or pre-allocating arrays with known sizes. However, the current approach is acceptable for the capped 500 span limit.
🟢 Info visor/src/slack/markdown.ts:272
extractFileSections uses regex.exec() in a while loop with the global flag, then manually builds an array. This is less efficient than using String.matchAll() which returns an iterator directly.
💡 Suggestion: Use matchAll() for cleaner and potentially more efficient iteration over regex matches.
🔧 Suggested Fix
const delimRegex = /^--- ([\w][\w.\-]*\.\w+) ---$/gm;
const delimiters: { filename: string; start: number; end: number }[] = [];
for (const match of text.matchAll(delimRegex)) {
  delimiters.push({
    filename: match[1],
    start: match.index,
    end: match.index + match[0].length,
  });
}
🟢 Info visor/src/providers/ai-check-provider.ts:2071
evaluateAllowedToolsJs builds a outputs object by iterating dependencyResults and creating a new object with all outputs. For workflows with many dependencies, this creates unnecessary object copies.
💡 Suggestion: Consider passing the Map directly to the sandbox context if supported, or use Object.fromEntries() for more efficient conversion.
🔧 Suggested Fix
const outputs = Object.fromEntries(
  Array.from(dependencyResults.entries(), ([checkId, result]) => {
    const summary = result as ReviewSummary & { output?: unknown };
    return [checkId, summary.output !== undefined ? summary.output : summary];
  })
);
🟡 Warning visor/src/frontends/slack-frontend.ts:614
File uploads are performed sequentially in a for loop, causing unnecessary latency. Each upload waits for the previous one to complete before starting the next.
💡 Suggestion: Use Promise.all() or Promise.allSettled() to upload files concurrently. This will significantly reduce total upload time when multiple files are present.
🔧 Suggested Fix
const uploadPromises = fileSections.map(async (section, idx) => {
  try {
    const buffer = Buffer.from(section.content, 'utf-8');
    const uploadResult = await slack.files.uploadV2({
      content: buffer,
      filename: section.filename,
      channel,
      thread_ts: threadTs,
      title: section.filename,
    });
    return { idx, success: uploadResult.ok };
  } catch (e) {
    ctx.logger.warn(
      `[slack-frontend] failed to upload file ${section.filename}: ${
        e instanceof Error ? e.message : String(e)
      }`
    );
    return { idx, success: false };
  }
});

const uploadResults = await Promise.all(uploadPromises);
const uploadedFileIndices = uploadResults.filter(r => r.success).map(r => r.idx);

🟡 Warning visor/src/frontends/slack-frontend.ts:561
Mermaid diagram rendering and uploads are performed sequentially in a for loop, causing unnecessary latency.
💡 Suggestion: Use Promise.all() to render and upload diagrams concurrently. Note that renderMermaidToPng spawns external processes, so concurrency limits may be needed to avoid overwhelming the system.
🔧 Suggested Fix
const renderPromises = diagrams.map(async (diagram, idx) => {
  try {
    ctx.logger.info(`[slack-frontend] rendering mermaid diagram ${idx + 1}...`);
    const pngBuffer = await renderMermaidToPng(diagram.code);
    if (pngBuffer) {
      ctx.logger.info(
        `[slack-frontend] rendered diagram ${idx + 1}, size=${pngBuffer.length} bytes, uploading...`
      );
      const filename = `diagram-${idx + 1}.png`;
      const uploadResult = await slack.files.uploadV2({
        content: pngBuffer,
        filename,
        channel,
        thread_ts: threadTs,
        title: `Diagram ${idx + 1}`,
      });
      return { idx, success: uploadResult.ok };
    }
    return { idx, success: false };
  } catch (e) {
    ctx.logger.warn(
      `[slack-frontend] failed to render/upload mermaid diagram ${idx + 1}: ${
        e instanceof Error ? e.message : String(e)
      }`
    );
    return { idx, success: false };
  }
});

const uploadedResults = await Promise.all(renderPromises);
const uploadedCount = uploadedResults.filter(r => r.success).length;

🟡 Warning visor/src/ai-review-service.ts:2476
RAW_OUTPUT_RE.exec() is called in a while loop to extract all blocks, then response.replace() is called on the entire response. This creates unnecessary intermediate strings and processes the entire response multiple times.
💡 Suggestion: Use String.replace() with a callback function to extract and remove blocks in a single pass, or use matchAll() for better performance with large responses.
🔧 Suggested Fix
const rawOutputBlocks: string[] = [];
let responseForParsing = response.replace(RAW_OUTPUT_RE, (match, content) => {
  rawOutputBlocks.push(content);
  return '';
});
🟡 Warning visor/src/tui/chat-tui.ts:806
The _flushPendingLogs method uses shiftLine(0) in a while loop to trim old lines. Each shift operation has O(n) complexity as it reindexes the entire array, making this O(n*m) where m is the number of lines to remove.
💡 Suggestion: Use array slicing to remove multiple lines at once, or implement a circular buffer. For large log volumes, consider using a dedicated ring buffer data structure.
🔧 Suggested Fix
// Trim old lines more efficiently by slicing
const lines = (this.logsBox as any).getLines();
if (lines.length > maxLines) {
  (this.logsBox as any).setLines(lines.slice(lines.length - maxLines));
}

Quality Issues (19)

Severity Location Issue
🟠 Error src/ai-review-service.ts:2478
JSON recovery logic uses a regex match on error message ('after JSON at position (\d+)') which is fragile and depends on V8's specific error message format. Different JavaScript engines or V8 versions may have different error message formats.
💡 Suggestion: Use a more robust approach: try parsing incrementally from the end, find the last valid JSON position by binary search, or use a proper JSON parser that can report the exact error position.
🟢 Info src/tui/chat-tui.ts:105
MAX_PENDING_LOGS is set to 5000 and _scheduleRender uses 50ms debounce. These are magic numbers without clear justification. The 50ms debounce may cause perceptible lag in UI updates.
💡 Suggestion: Document why these specific values were chosen. Consider making them configurable or adding comments explaining the trade-offs.
🟢 Info tests/unit/raw-output-extraction.test.ts:28
Test uses hardcoded expected values ('Found 3 customers using JWT', 'Acme,JWT\nBeta,HMAC') without explaining how these values were derived or what they represent.
💡 Suggestion: Use named constants or add comments explaining the test data. For example: const EXPECTED_SUMMARY = 'Found 3 customers using JWT'; const RAW_CSV_DATA = 'customer,auth_type\n...'
🟡 Warning src/slack/markdown.ts:296
The regex pattern /^--- ([\w][\w.\-]*\.\w+) ---$/gm for extracting file sections has potential issues. It requires filenames to start with a word character (\w), so dotfile-style names like '--- .hidden ---' are rejected, while otherwise valid filenames containing spaces or other special characters fail to match. Beyond that, any extension is accepted without validating that the filename is reasonable.
💡 Suggestion: Consider adding more specific validation or documenting the exact filename format supported. The current pattern may be too permissive or too restrictive depending on use cases.
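For reference, how the reviewed pattern actually behaves on a few inputs (the regex is copied from the report above; the sample filenames are illustrative):

```typescript
// Delimiter pattern from src/slack/markdown.ts: filename must start with a
// word character and end with a dot-separated extension.
const FILE_DELIM_RE = /^--- ([\w][\w.\-]*\.\w+) ---$/;

FILE_DELIM_RE.test('--- report.csv ---');    // true: word-char start + extension
FILE_DELIM_RE.test('--- .hidden ---');       // false: leading '.' is not \w
FILE_DELIM_RE.test('--- ../etc.conf ---');   // false: traversal prefix rejected
FILE_DELIM_RE.test('--- my report.csv ---'); // false: spaces are not allowed
```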
🟡 Warning src/slack/markdown.ts:296
extractFileSections doesn't handle edge cases where delimiters appear within code blocks or quoted strings. If AI output contains markdown code blocks with --- delimiters inside, they would be incorrectly parsed as file sections.
💡 Suggestion: Add logic to skip delimiters inside code blocks (```...```) or quoted strings. Consider using a proper markdown parser for more robust extraction.
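A fence-aware scan could look like this hedged sketch (a hypothetical helper, not the current extractFileSections; filename rules simplified):

```typescript
// Return the line indices of file-section delimiters, ignoring any that fall
// inside ``` fenced code blocks. Fence state toggles on each fence line.
function findDelimiterLines(text: string): number[] {
  const delim = /^--- [\w][\w.\-]*\.\w+ ---$/;
  const out: number[] = [];
  let inFence = false;
  text.split('\n').forEach((line, i) => {
    if (line.trim().startsWith('```')) {
      inFence = !inFence; // opening or closing fence
      return;
    }
    if (!inFence && delim.test(line)) out.push(i);
  });
  return out;
}
```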
🟡 Warning src/ai-review-service.ts:2488
When JSON recovery succeeds, trailing content is appended to the text field. However, if the JSON already has a text field, this concatenation happens silently without validation that the trailing content is actually related to the text field.
💡 Suggestion: Add validation to ensure trailing content is meaningful before appending. Consider logging a warning when appending recovered content to help debug issues.
🟡 Warning src/providers/ai-check-provider.ts:2071
evaluateAllowedToolsJs function uses compileAndRun from sandbox but doesn't validate the return type properly. It checks if result is an array but doesn't validate that array elements are strings before filtering.
💡 Suggestion: Add stronger type validation: ensure all array elements are non-empty strings before returning. This prevents runtime errors when invalid data is returned.
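Such a guard might look like this (a hypothetical helper; the name and error messages are illustrative, not the actual provider API):

```typescript
// Narrow an unknown sandbox result to a non-empty string array, or throw a
// descriptive error instead of letting invalid elements flow downstream.
function asStringArray(result: unknown, context: string): string[] {
  if (!Array.isArray(result)) {
    throw new Error(`${context}: expected an array, got ${typeof result}`);
  }
  for (const item of result) {
    if (typeof item !== 'string' || item.length === 0) {
      throw new Error(`${context}: expected non-empty strings, got ${JSON.stringify(item)}`);
    }
  }
  return result as string[];
}
```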
🟡 Warning src/providers/ai-check-provider.ts:2234
intersectAllowedTools has subtle logic: it checks for glob patterns ('*' or '!') but a single '!' entry would trigger glob passthrough even without '*'. This could lead to unexpected behavior where ['!search', 'bash'] passes through unchanged instead of being intersected.
💡 Suggestion: Clarify the glob detection logic. Consider explicitly checking for '*' presence rather than just '!' prefix, or document why '!' alone should trigger passthrough.
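One way to make the rule explicit, per the suggestion (a sketch of the proposed behavior, not the current intersectAllowedTools implementation):

```typescript
// Glob passthrough requires an explicit '*' wildcard; '!' negations without a
// wildcard base set are treated as plain entries and intersected normally.
function shouldPassthroughAsGlob(tools: string[]): boolean {
  return tools.includes('*');
}

function intersectTools(requested: string[], allowed: string[]): string[] {
  if (shouldPassthroughAsGlob(requested)) return requested; // defer to glob evaluation
  const allowedSet = new Set(allowed);
  return requested.filter(t => allowedSet.has(t));
}
```

With this rule, ['!search', 'bash'] is intersected like any plain list, while ['*', '!search'] passes through for later glob evaluation.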
🟡 Warning src/frontends/slack-frontend.ts:602
Literal \n normalization (replace(/\\n/g, '\n')) happens before file section extraction. This global replacement could corrupt legitimate content that contains the literal string '\n' (e.g., code examples, regex patterns).
💡 Suggestion: Consider more targeted replacement: only normalize \n within file section delimiters, or add a flag to control this behavior. Document that this transformation occurs.
🟡 Warning src/frontends/slack-frontend.ts:615
File upload errors are logged but don't fail the operation. If all file uploads fail, the message still posts with 'File upload failed' placeholders, but the user has no way to recover the file content.
💡 Suggestion: Consider adding a fallback: if uploads fail, include the file content inline in the message or provide a way for users to retrieve it. Add metrics to track upload failure rates.
🟡 Warning src/tui/chat-tui.ts:488
When pendingLogs exceeds MAX_PENDING_LOGS, the oldest logs are discarded (slice(-MAX_PENDING_LOGS)). This silent data loss could hide important debug information.
💡 Suggestion: Log a warning when logs are dropped. Consider increasing the limit or providing a way to access full logs via file output.
🟡 Warning src/tui/components/trace-viewer.ts:50
MAX_DISPLAY_SPANS (500) and MAX_TREE_DEPTH (10) are arbitrary limits without documentation. Users with legitimate needs beyond these limits will lose data.
💡 Suggestion: Document these limits and their rationale. Consider making them configurable or adding a warning when spans are truncated.
🟡 Warning tests/unit/probe-dsl-chunkbykey.test.ts:95
Test uses magic number assertions (toBeGreaterThanOrEqual(1), toBeLessThanOrEqual(3)) without explaining why these specific values are expected. The test doesn't clearly document what the correct behavior should be.
💡 Suggestion: Add comments explaining the expected grouping: 'Acme Corp (2 files), Beta Inc (2 files), Gamma LLC (1 file) = 3 chunks'. Use exact assertions when the expected value is known.
🟡 Warning tests/unit/probe-dsl-chunkbykey.test.ts:145
No tests for error cases: what happens when search returns empty string, when keyFn throws an error, or when chunks are malformed? The test suite only covers happy paths.
💡 Suggestion: Add negative tests: empty search results, invalid keyFn (null, undefined, throwing), malformed data structures. Test error handling paths.
🟡 Warning tests/unit/raw-output-extraction.test.ts:95
No tests for malformed RAW_OUTPUT blocks: missing closing delimiter, nested blocks, blocks with invalid characters, or blocks larger than memory limits.
💡 Suggestion: Add edge case tests: unclosed RAW_OUTPUT blocks, empty blocks, blocks with only whitespace, extremely large blocks, blocks with special characters that need escaping.
🟡 Warning tests/unit/slack-file-sections.test.ts:8
Test suite lacks negative test cases: what happens with malformed delimiters, filenames without extensions, extremely long filenames, or special characters in filenames?
💡 Suggestion: Add tests for: filenames with spaces, unicode characters, path traversal attempts (../), extremely long filenames (>255 chars), invalid characters, concurrent delimiters.
🟡 Warning tests/unit/slack-file-sections.test.ts:8
Test doesn't cover the case where file content itself contains delimiter patterns (e.g., a CSV file with '---' lines). This could cause false positives in extraction.
💡 Suggestion: Add test where file content contains lines that look like delimiters to verify they're not incorrectly parsed as new file sections.
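A quick illustration of the false-positive risk such a test should pin down (the regex is the reviewed delimiter pattern; the CSV samples are illustrative):

```typescript
// A bare '---' row in CSV content is NOT a valid delimiter (no filename),
// but a lookalike line that happens to contain a filename would be split
// off as a new file section by a naive extractor.
const DELIM_RE = /^--- [\w][\w.\-]*\.\w+ ---$/m;

const csvWithBareDashes = 'a,b\n---\n1,2';              // safe: no match
const csvWithLookalike = 'a,b\n--- notes.txt ---\n1,2'; // false positive: matches
```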
🟡 Warning src/slack/client.ts:289
apiForm method doesn't validate that params only contains string values. If non-string values are passed, URLSearchParams will convert them using toString() which may produce unexpected results.
💡 Suggestion: Add runtime validation: ensure all param values are strings, or convert them explicitly. Throw a clear error if invalid types are passed.
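A guard along those lines might look like this (hypothetical helper; the real apiForm signature may differ):

```typescript
// Reject non-string values before they reach URLSearchParams, which would
// otherwise stringify them silently (e.g. [1, 2] -> "1,2", {} -> "[object Object]").
function toFormParams(params: Record<string, unknown>): URLSearchParams {
  const out = new URLSearchParams();
  for (const [key, value] of Object.entries(params)) {
    if (typeof value !== 'string') {
      throw new TypeError(`apiForm: param "${key}" must be a string, got ${typeof value}`);
    }
    out.set(key, value);
  }
  return out;
}
```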
🟡 Warning src/providers/workflow-tool-executor.ts:246
The _rawOutput wrapping logic assumes ProbeAgent will extract and handle <<<RAW_OUTPUT>>> blocks. This creates an implicit contract between Visor and ProbeAgent that isn't documented or versioned.
💡 Suggestion: Document this contract explicitly. Add a version check or capability detection to ensure ProbeAgent supports this feature. Fail gracefully if it doesn't.

Powered by Visor from Probelabs

Last updated: 2026-02-18T11:11:07.606Z | Triggered by: pr_updated | Commit: 2bf89a3

💡 TIP: You can chat with Visor using /visor ask <your question>

When execute_plan uses output(), the data must reach the end user
without any intermediate LLM rewriting it. This commit implements
the full passthrough chain:

- parseAIResponse: extract <<<RAW_OUTPUT>>> blocks before JSON parsing,
  attach as _rawOutput on the parsed output object
- workflow-check-provider: propagate _rawOutput from step outputs to
  workflow-level output
- workflow-tool-executor: wrap _rawOutput in <<<RAW_OUTPUT>>> delimiters
  when returning tool result, so the calling ProbeAgent extracts it
  via extractRawOutputBlocks (bypassing the LLM)
- slack-frontend: append _rawOutput to rendered text so file sections
  (--- filename.ext ---) get extracted and uploaded

Also updates @probelabs/probe to 0.6.0-rc245 which includes the
ProbeAgent-side change (appending output buffer with RAW_OUTPUT
delimiters for schema responses).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@buger buger merged commit 37efa7b into main Feb 18, 2026
14 of 16 checks passed