feat: Add API authentication and input token validation #242
Draft
CloudWaddie wants to merge 5 commits into decolua:master from
Conversation
- Add API authentication middleware (src/lib/apiAuth.js) supporting JWT cookies and Bearer API keys
- Add response sanitization utility (src/lib/sanitize.js) to mask/remove sensitive fields
- Protect /api/usage/* endpoints (stats, history, logs, providers, chart, stream, request-details, request-logs, [connectionId])
- Protect /api/keys and /api/keys/[id] endpoints
- Protect /api/shutdown endpoint
- Protect /api/cloud/auth and /api/cloud/credentials/update endpoints
- Sanitize usage stats responses to mask emails and API keys
- All protected endpoints now return 401 for unauthenticated requests

Fixes a critical security vulnerability where sensitive data (emails, API keys, credentials) was exposed without authentication.
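A minimal sketch of the dual-scheme check the middleware describes, accepting either a JWT session cookie or a Bearer API key. The function name, cookie name, and parameters here are illustrative assumptions, not the actual src/lib/apiAuth.js API:

```javascript
// Illustrative auth check: Bearer API key OR JWT session cookie.
// Names and signatures are assumptions, not the real apiAuth.js code.
function isAuthenticated(req, { verifyJwt, validApiKeys }) {
  // 1. Bearer API key in the Authorization header
  const auth = req.headers['authorization'] || '';
  if (auth.startsWith('Bearer ')) {
    const key = auth.slice('Bearer '.length).trim();
    return validApiKeys.has(key);
  }
  // 2. JWT carried in a session cookie (cookie name assumed)
  const cookie = req.headers['cookie'] || '';
  const match = cookie.match(/(?:^|;\s*)session=([^;]+)/);
  if (match) {
    try {
      // verifyJwt returns the decoded payload or throws on failure
      return verifyJwt(match[1]) != null;
    } catch {
      return false;
    }
  }
  return false; // no credentials -> caller responds with 401
}
```

A route handler would call this first and return a 401 response when it yields `false`, matching the behavior listed above.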
…providers

- Add modelLimits.js with per-provider token limits (Claude 200K, GPT-4 128K, GLM 1M, etc.)
- Add tokenEstimator.js to estimate input tokens from request bodies
- Add validation in chatCore.js before forwarding to upstream providers
- Reject requests exceeding model limits with a clear 400 error message
- Add modelLimits setting to localDb for user overrides
- Add UI in profile settings to customize limits per provider
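The response sanitization utility mentioned in the first commit (src/lib/sanitize.js) masks emails and API keys before responses leave the server. A hedged sketch of what such masking could look like — the field names and masking rules below are assumptions, not the actual sanitize.js behavior:

```javascript
// Illustrative sanitizer masking emails and API keys in a response
// object. Key names and masking format are assumptions.
const SENSITIVE_KEYS = ['apiKey', 'api_key', 'email'];

function maskValue(key, value) {
  if (typeof value !== 'string') return value;
  if (key.toLowerCase().includes('email')) {
    // keep the first two characters of the local part
    const [user, domain] = value.split('@');
    return domain ? user.slice(0, 2) + '***@' + domain : '***';
  }
  // API keys: keep a short prefix so the key is still identifiable
  return value.slice(0, 4) + '***';
}

function sanitize(obj) {
  if (Array.isArray(obj)) return obj.map(sanitize);
  if (obj && typeof obj === 'object') {
    const out = {};
    for (const [k, v] of Object.entries(obj)) {
      out[k] = SENSITIVE_KEYS.includes(k) ? maskValue(k, v) : sanitize(v);
    }
    return out;
  }
  return obj;
}
```

Recursing over nested objects and arrays ensures sensitive fields are masked wherever they appear in the usage-stats payload.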
Summary
This PR includes two improvements: API authentication (security) and input token validation (reliability).
Changes
API Authentication
Input Token Validation
- `open-sse/config/modelLimits.js`: Default token limits per provider (Claude 200K, GPT-4 128K, GLM 1M, etc.)
- `open-sse/utils/tokenEstimator.js`: Estimates input tokens from request bodies (~4 chars per token)
- `open-sse/handlers/chatCore.js`: Validates input before the executor call, returns 400 with a clear error message if exceeded
- `src/lib/localDb.js`: Added `modelLimits` setting for user overrides
- `src/app/(dashboard)/dashboard/profile/page.js`: Added "Model Input Limits" card in profile settings

How Token Validation Works
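The estimation-then-validation flow described in the file list above can be sketched as follows. The default limits, helper names, and message shape are illustrative assumptions; the real defaults live in `open-sse/config/modelLimits.js`:

```javascript
// Sketch of the ~4-chars-per-token estimate and the pre-forward limit
// check. Limits and names are illustrative, not the real config.
const DEFAULT_LIMITS = {
  claude: 200000,
  'gpt-4': 128000,
  glm: 1000000,
};

function estimateTokens(body) {
  // Rough heuristic: ~4 characters per token across the message text
  const text = (body.messages || [])
    .map((m) => (typeof m.content === 'string' ? m.content : ''))
    .join('');
  return Math.ceil(text.length / 4);
}

function validateInput(body, provider, limits = DEFAULT_LIMITS) {
  const max = limits[provider];
  if (!max) return { ok: true }; // unknown provider: no limit enforced
  const estimated = estimateTokens(body);
  if (estimated > max) {
    return {
      ok: false,
      status: 400,
      error: {
        message: `Input too long: ~${estimated} tokens (max: ${max}). ` +
          'Please reduce your input or truncate the conversation.',
        type: 'invalid_request_error',
      },
    };
  }
  return { ok: true };
}
```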
On validation failure:
```json
{
  "error": {
    "message": "Input too long: ~250000 tokens (max: 200000). Please reduce your input or truncate the conversation.",
    "type": "invalid_request_error"
  }
}
```

Users can customize limits in Dashboard → Settings → "Model Input Limits" or leave them empty to use the defaults.
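The user-override behavior — empty fields fall back to defaults — amounts to a merge of the stored `modelLimits` setting over the built-in table. A small sketch under assumed names (the real lookup lives in `src/lib/localDb.js`):

```javascript
// Sketch of merging user overrides with built-in defaults; names and
// defaults are assumptions based on the PR description.
const DEFAULT_LIMITS = { claude: 200000, 'gpt-4': 128000, glm: 1000000 };

function effectiveLimits(userOverrides) {
  // Empty, missing, or non-numeric override values fall back to defaults
  const merged = { ...DEFAULT_LIMITS };
  for (const [provider, limit] of Object.entries(userOverrides || {})) {
    if (Number.isFinite(limit) && limit > 0) merged[provider] = limit;
  }
  return merged;
}
```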