
feat: Add API authentication and input token validation #242

Draft
CloudWaddie wants to merge 5 commits into decolua:master from CloudWaddie:fix/api-authentication-security

Conversation


@CloudWaddie CloudWaddie commented Mar 5, 2026

Summary

This PR includes two security and reliability improvements:

  1. API Authentication: Adds authentication to high-risk API endpoints to prevent data leaks
  2. Input Token Validation: Prevents "400: Input is too long" errors by validating input token count before forwarding to upstream providers

Changes

API Authentication

  • Added authentication checks to sensitive endpoints (usage stats, provider data, settings)
  • Prevents unauthorized access to user data

Input Token Validation

  • Model Limits Config (open-sse/config/modelLimits.js): Default token limits per provider (Claude 200K, GPT-4 128K, GLM 1M, etc.)
  • Token Estimator (open-sse/utils/tokenEstimator.js): Estimates input tokens from request bodies (~4 chars per token)
  • Validation in chatCore (open-sse/handlers/chatCore.js): Validates input before executor call, returns 400 with clear error message if exceeded
  • Settings Storage (src/lib/localDb.js): Added modelLimits setting for user overrides
  • UI Settings (src/app/(dashboard)/dashboard/profile/page.js): Added "Model Input Limits" card in profile settings
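The character-based estimation described above (~4 chars per token) could be sketched roughly like this. Function and constant names are illustrative, not the actual exports of open-sse/utils/tokenEstimator.js:

```javascript
// Minimal sketch of a character-based input token estimator for a
// chat-completions-style request body. Assumes ~4 characters per token,
// as described in the PR; the real tokenEstimator.js may differ.
const CHARS_PER_TOKEN = 4;

function estimateTokens(body) {
  const messages = Array.isArray(body.messages) ? body.messages : [];
  let chars = 0;
  for (const msg of messages) {
    if (typeof msg.content === 'string') {
      chars += msg.content.length;
    } else if (Array.isArray(msg.content)) {
      // Multi-part content: count only the text parts.
      for (const part of msg.content) {
        if (part && typeof part.text === 'string') chars += part.text.length;
      }
    }
  }
  return Math.ceil(chars / CHARS_PER_TOKEN);
}
```

A character heuristic overestimates or underestimates for some scripts and tokenizers, but it is cheap and needs no provider-specific tokenizer, which fits a proxy that forwards to many upstreams.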

How Token Validation Works

Request → Estimate Tokens → Check Limit → [OK: Forward] or [Error: 400]

On validation failure:

{
  "error": {
    "message": "Input too long: ~250000 tokens (max: 200000). Please reduce your input or truncate the conversation.",
    "type": "invalid_request_error"
  }
}

Users can customize limits in Dashboard → Settings → "Model Input Limits" or leave empty to use defaults.
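The check-and-reject step above could be sketched as follows. The function name is hypothetical; the real check lives in open-sse/handlers/chatCore.js:

```javascript
// Sketch of the pre-forward validation step: compare the estimated input
// token count against the model's limit and, if exceeded, build the 400
// error payload shown above instead of forwarding upstream.
function validateInputTokens(estimatedTokens, maxInputTokens) {
  if (estimatedTokens <= maxInputTokens) return null; // OK: forward upstream
  return {
    status: 400,
    body: {
      error: {
        message:
          `Input too long: ~${estimatedTokens} tokens (max: ${maxInputTokens}). ` +
          'Please reduce your input or truncate the conversation.',
        type: 'invalid_request_error',
      },
    },
  };
}
```

Failing fast here turns an opaque upstream "400: Input is too long" into a local, actionable error before any provider quota is spent.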

- Add API authentication middleware (src/lib/apiAuth.js) supporting JWT cookies and Bearer API keys
- Add response sanitization utility (src/lib/sanitize.js) to mask/remove sensitive fields
- Protect /api/usage/* endpoints (stats, history, logs, providers, chart, stream, request-details, request-logs, [connectionId])
- Protect /api/keys and /api/keys/[id] endpoints
- Protect /api/shutdown endpoint
- Protect /api/cloud/auth and /api/cloud/credentials/update endpoints
- Sanitize usage stats responses to mask emails and API keys
- All protected endpoints now return 401 for unauthenticated requests

Fixes critical security vulnerability where sensitive data (emails, API keys, credentials) was exposed without authentication.
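The dual-mode authentication described above (JWT session cookie or Bearer API key, 401 otherwise) could be sketched framework-agnostically like this. All names (authenticate, verifyJwt, apiKeys) are illustrative stand-ins, not the actual API of src/lib/apiAuth.js:

```javascript
// Sketch of an auth check accepting either a Bearer API key or a JWT
// session cookie. Returns an ok result or a 401 payload. verifyJwt and
// apiKeys are injected here to keep the sketch self-contained.
function authenticate(req, { verifyJwt, apiKeys }) {
  const auth = (req.headers && req.headers['authorization']) || '';
  if (auth.startsWith('Bearer ')) {
    const key = auth.slice('Bearer '.length).trim();
    if (apiKeys.has(key)) return { ok: true, via: 'api-key' };
  }
  const cookie = req.cookies && req.cookies.session;
  if (cookie && verifyJwt(cookie)) return { ok: true, via: 'jwt' };
  return { ok: false, status: 401, body: { error: { message: 'Unauthorized' } } };
}
```

Checking the API key before the cookie lets programmatic clients skip cookie parsing entirely; the order is a design choice, not something the PR specifies.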
…providers

- Add modelLimits.js with per-provider token limits (Claude 200K, GPT-4 128K, GLM 1M, etc.)
- Add tokenEstimator.js to estimate input tokens from request bodies
- Add validation in chatCore.js before forwarding to upstream providers
- Reject requests exceeding model limits with clear 400 error message
- Add modelLimits setting to localDb for user overrides
- Add UI in profile settings to customize limits per provider
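The defaults-plus-overrides lookup described in this commit could look roughly like this. The numbers come from the PR description; the object shape and function name are assumptions, not the actual contents of modelLimits.js:

```javascript
// Sketch of per-provider default limits with user overrides layered on top.
// A user-specified limit (from the "Model Input Limits" settings card) wins;
// otherwise the built-in default applies.
const DEFAULT_LIMITS = {
  claude: 200_000,
  'gpt-4': 128_000,
  glm: 1_000_000,
};

function maxInputTokens(provider, userOverrides = {}) {
  if (Number.isFinite(userOverrides[provider])) return userOverrides[provider];
  return DEFAULT_LIMITS[provider] ?? Infinity; // unknown provider: no cap
}
```

Falling back to Infinity for unknown providers preserves the pre-PR behavior (no validation) rather than rejecting requests for models the config has never heard of; a stricter default is equally defensible.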
@CloudWaddie CloudWaddie changed the title from "fix: Add authentication to API endpoints to prevent data leaks" to "feat: Add API authentication and input token validation" Mar 5, 2026
@CloudWaddie CloudWaddie marked this pull request as draft March 5, 2026 03:20