feat: add GitHub Copilot OAuth provider UI #497
Joshf225 wants to merge 5 commits into spacedriveapp:main from
Conversation
Walkthrough
Adds an end-to-end GitHub Copilot browser OAuth device-code flow: OpenAPI schema/types/client methods, backend device-code/token polling and credential persistence, LLM manager Copilot credential caching/selection, frontend Settings UI/dialogs/icons, and associated router wiring and minor component refactors.
Changes
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
Warning
There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.
🔧 ast-grep (0.42.1)
src/api/server.rs
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
interface/src/routes/Settings.tsx (1)
801-818: ⚠️ Potential issue | 🟡 Minor
Expose which Copilot credential source wins.
The backend now prefers `github-copilot-oauth` over the static `github-copilot` PAT, but these two cards can both render as configured with no precedence hint. Once both exist, editing the PAT card becomes a no-op from the user's perspective, which is hard to diagnose from this screen.
Also applies to: 833-845
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/routes/Settings.tsx` around lines 801 - 818, The UI shows both github-copilot-oauth and github-copilot PAT as "configured" with no precedence info; update the rendering logic around ProviderCard (where ProviderCard is instantiated and isConfigured(provider.id) is used, including the other occurrence at the 833-845 block) to compute and pass an explicit precedence flag (e.g., isActive or isOverridden) based on the backend preference (treat github-copilot-oauth as higher priority than github-copilot PAT), and surface that flag in the card props so edits to the lower-priority PAT become clearly a no-op (or disable edit/remove when overridden) and the card displays which credential wins; ensure setEditingProvider and related state (setKeyInput, setModelInput, setTestedSignature, setTestResult, setMessage) respect the precedence check so editing the overridden PAT either shows a warning or is blocked.
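Under the assumptions in the prompt above, the precedence computation could be as small as the following sketch; the helper name and its shape are illustrative, not actual Settings.tsx code.

```typescript
// Hypothetical helper, not the real Settings.tsx code: decide which Copilot
// credential wins when both are configured, mirroring the backend preference
// for OAuth over the static PAT described in the review.
function copilotPrecedence(
    configured: Set<string>,
): { providerId: string; isOverridden: boolean }[] {
    const oauthConfigured = configured.has("github-copilot-oauth");
    return Array.from(configured).map((providerId) => ({
        providerId,
        // The PAT card is overridden whenever the OAuth credential exists.
        isOverridden: providerId === "github-copilot" && oauthConfigured,
    }));
}
```

A `ProviderCard` could then disable its edit controls and show a "superseded by OAuth sign-in" hint whenever `isOverridden` is true.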
🧹 Nitpick comments (1)
src/github_copilot_oauth.rs (1)
217-220: Prefer `credentials` over `creds` in this public helper.
`save_credentials(..., creds)` introduces a new abbreviated name in a public module. Spelling it out keeps the API aligned with `OAuthCredentials` and matches the repo's naming rule. As per coding guidelines, "Don't abbreviate variable names. Use `queue` not `q`, `message` not `msg`, `channel` not `ch`. Common abbreviations like `config` are fine."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/github_copilot_oauth.rs` around lines 217 - 220, Rename the abbreviated parameter `creds` to `credentials` in the public helper `save_credentials(instance_dir: &Path, creds: &OAuthCredentials) -> Result<()>` and update all references inside the function (e.g., the call to `serde_json::to_string_pretty(creds)` and any local uses) to use `credentials`; also update the function signature to `save_credentials(instance_dir: &Path, credentials: &OAuthCredentials) -> Result<()>` and adjust any internal variable names or doc comments in `src/github_copilot_oauth.rs` that reference `creds` so the public API and implementation consistently use the full `credentials` identifier.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@interface/src/api/schema.d.ts`:
- Around line 5340-5355: The OpenAPI contract for list_workers currently marks
query.limit and query.offset as required but the client function workersList
serializes them only when truthy (default params = {}), causing omission of
offset=0 and allowing requests without pagination; fix by making limit and
offset optional in the schema (the list_workers.parameters.query entries for
limit and offset) or change the workersList implementation to default and always
include numeric pagination params (e.g., set default values and always serialize
limit and offset even when zero), then regenerate the types so list_workers and
the workersList caller are consistent.
In `@interface/src/routes/Settings.tsx`:
- Around line 589-617: handleStartCopilotOAuth currently passes result.state to
monitorCopilotOAuth but never persists it, so you must save the Copilot OAuth
state token (e.g., setCopilotOAuthState or copilotOAuthStateRef.current =
result.state) when the device code is created and then, in the dialog
cleanup/unmount effect where copilotOAuthAbortRef is aborted, also call the
backend delete-session endpoint with that stored state to cancel the server-side
session; ensure you clear the stored state and handle/delete-session errors, and
keep using copilotOAuthAbortRef and monitorCopilotOAuth as-is (refer to
handleStartCopilotOAuth, copilotOAuthAbortRef, monitorCopilotOAuth, and
copilotDeviceCodeInfo to locate where to store and clean up the state token).
In `@src/api/providers.rs`:
- Around line 866-904: finalize_copilot_oauth is non-transactional:
save_credentials and llm_manager.set_copilot_oauth_credentials are applied
before updating config.toml, leaving state half-configured if the TOML write
fails. Reorder or make atomic: update and persist the config (use content parse
-> apply_model_routing -> write to config_path) first or write config to a temp
file and atomically rename, and only after that call
crate::github_copilot_oauth::save_credentials(...) and
llm_manager.set_copilot_oauth_credentials(...); alternatively, if you must save
credentials first, ensure you can roll back by deleting the saved credentials
file on any subsequent failure before returning error. Ensure
refresh_defaults_config(state) and provider_setup_tx.try_send(...) run only
after all writes succeed.
- Around line 1375-1398: When removing "github-copilot-oauth" credentials also
cancel any pending device-code sessions so the background task
run_copilot_device_oauth_background cannot immediately re-write creds: after
deleting files and calling
manager.clear_copilot_oauth_credentials()/clear_copilot_token(), acquire and
clear or cancel entries in the COPILOT_DEVICE_OAUTH_SESSIONS store (or call the
existing cancellation helper if present) so any polling task sees no active
sessions; reference run_copilot_device_oauth_background,
COPILOT_DEVICE_OAUTH_SESSIONS, and the
manager.clear_copilot_oauth_credentials()/clear_copilot_token() calls to locate
where to insert the cancellation/clear logic.
In `@src/github_copilot_oauth.rs`:
- Around line 216-244: The save_credentials function's Unix branch uses
OpenOptionsExt::mode(0o600) which only affects newly created files and won't
re-tighten permissions on rewrites; after writing and syncing the file (the file
variable and credentials_path(instance_dir) identify the target), call
std::fs::set_permissions(&path, std::fs::Permissions::from_mode(0o600)) (inside
the #[cfg(unix)] block) and propagate/context any errors analogously to existing
with_context calls so the credentials file always ends up with 0o600 permissions
even when it existed beforehand.
In `@src/llm/manager.rs`:
- Around line 381-389: The current OAuth-first branch uses
get_copilot_oauth_token().await and if that path is chosen a failed/expired
OAuth exchange prevents ever trying the configured PAT; update the logic in the
method to treat OAuth failures as a fallback rather than a hard stop: call
get_copilot_oauth_token().await and if it returns Ok(Some(token)) use it, but if
it returns Err(_) or Ok(None) then attempt to load the configured PAT via
self.get_provider("github-copilot") (and use provider.api_key if non-empty);
only return None after both the OAuth attempt and the PAT lookup have failed;
reference get_copilot_oauth_token, get_provider, and the "github-copilot"
provider in the updated branch.
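The `list_workers` prompt above boils down to always serializing the numeric pagination params, including zero, instead of relying on truthiness. A minimal client-side sketch (the URL base and default limit are assumptions, and `workersListQuery` is an illustrative name, not the generated client function):

```typescript
// Sketch of the client-side fix: default pagination params and always
// include them in the query string, so `offset=0` is never dropped.
function workersListQuery(params: { limit?: number; offset?: number } = {}): string {
    const search = new URLSearchParams();
    // Always serialize, even when the value is 0 (falsy).
    search.set("limit", String(params.limit ?? 50));
    search.set("offset", String(params.offset ?? 0));
    return `/api/workers?${search.toString()}`;
}
```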
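The manager.rs fallback order described above can be condensed into a pure function for clarity; both inputs stand in for the async calls named in the prompt (`get_copilot_oauth_token()` and the `github-copilot` provider lookup), so this is a sketch of the ordering, not the real method.

```rust
// Condensed sketch of the requested fallback: try the OAuth token first,
// but let a failed or absent OAuth exchange fall through to a non-empty
// configured PAT instead of aborting.
fn resolve_copilot_key(
    oauth_result: Result<Option<String>, String>, // stands in for get_copilot_oauth_token()
    pat: Option<String>, // stands in for get_provider("github-copilot").api_key
) -> Option<String> {
    match oauth_result {
        Ok(Some(token)) => Some(token),
        // Err(_) or Ok(None): only give up after the PAT lookup also fails.
        _ => pat.filter(|key| !key.is_empty()),
    }
}
```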
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: f2a2dd53-5777-49ad-90b5-a87501cd18eb
📒 Files selected for processing (11)
interface/src/api/client.ts
interface/src/api/schema.d.ts
interface/src/api/types.ts
interface/src/components/ModelSelect.tsx
interface/src/lib/providerIcons.tsx
interface/src/routes/Settings.tsx
src/api/providers.rs
src/api/server.rs
src/github_copilot_oauth.rs
src/lib.rs
src/llm/manager.rs
```tsx
const handleStartCopilotOAuth = async () => {
	setCopilotOAuthMessage(null);
	setCopilotDeviceCodeInfo(null);
	setCopilotDeviceCodeCopied(false);
	try {
		const result = await startCopilotOAuthMutation.mutateAsync({
			model: COPILOT_OAUTH_DEFAULT_MODEL,
		});
		if (!result.success || !result.user_code || !result.verification_url || !result.state) {
			setCopilotOAuthMessage({
				text: result.message || "Failed to start device sign-in",
				type: "error",
			});
			return;
		}

		copilotOAuthAbortRef.current?.abort();
		const abort = new AbortController();
		copilotOAuthAbortRef.current = abort;

		setCopilotDeviceCodeInfo({
			userCode: result.user_code,
			verificationUrl: result.verification_url,
		});
		void monitorCopilotOAuth(result.state, abort.signal);
	} catch (error: any) {
		setCopilotOAuthMessage({ text: `Failed: ${error.message}`, type: "error" });
	}
};
```
🧩 Analysis chain
🏁 Script executed:

```sh
wc -l interface/src/routes/Settings.tsx
```

Repository: spacedriveapp/spacebot
Length of output: 104

🏁 Script executed:

```sh
cat -n interface/src/routes/Settings.tsx | sed -n '589,620p'
```

Length of output: 1249

🏁 Script executed:

```sh
cat -n interface/src/routes/Settings.tsx | sed -n '636,655p'
```

Length of output: 782

🏁 Script executed:

```sh
cat -n interface/src/routes/Settings.tsx | sed -n '500,650p'
```

Length of output: 5568

🏁 Script executed:

```sh
rg "monitorCopilotOAuth|delete.*session|deleteSession" interface/src/routes/Settings.tsx -A 5 -B 2
```

Length of output: 541

🏁 Script executed:

```sh
rg "copilotOAuthState|result\.state" interface/src/routes/Settings.tsx -B 2 -A 2
```

Length of output: 982

🏁 Script executed:

```sh
rg "copilotOAuthDialog" interface/src/routes/Settings.tsx -B 2 -A 5
```

Length of output: 1210

🏁 Script executed:

```sh
grep -n "useEffect" interface/src/routes/Settings.tsx | head -20
```

Length of output: 309

🏁 Script executed:

```sh
cat -n interface/src/routes/Settings.tsx | sed -n '1,100p'
```

Length of output: 3810

🏁 Script executed:

```sh
rg "const monitorCopilotOAuth|function monitorCopilotOAuth" interface/src/routes/ -A 15
```

Length of output: 1212
Store the Copilot OAuth state token and call a backend delete-session endpoint on dialog close.
Line 613 passes result.state to monitorCopilotOAuth() but does not store it in component state. Lines 636–651 only abort the local AbortController during cleanup, leaving no way to cancel the backend session. This means abandoned OAuth attempts persist server-side and may still complete after the user closes the dialog. Store the state token in a ref or state and call a delete-session endpoint in the cleanup effect.
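The lifecycle fix can be modeled outside React as a small sketch; `CopilotOAuthFlow` and `deleteSession` are illustrative stand-ins for the component refs and the (hypothetical) backend delete-session endpoint, not existing identifiers.

```typescript
// Model of the fix: remember the state token when the device flow starts,
// and on close both abort local polling AND cancel the server-side session.
class CopilotOAuthFlow {
    private stateToken: string | null = null;
    private abortController: AbortController | null = null;

    constructor(private deleteSession: (state: string) => Promise<void>) {}

    start(stateToken: string): AbortSignal {
        this.abortController?.abort();
        this.abortController = new AbortController();
        this.stateToken = stateToken; // persisted, unlike the current handler
        return this.abortController.signal;
    }

    close(): void {
        this.abortController?.abort();
        const token = this.stateToken;
        this.stateToken = null;
        if (token) {
            // Best-effort server-side cancellation; a failure here should
            // not block closing the dialog.
            void this.deleteSession(token).catch(() => undefined);
        }
    }
}
```

In the real component, `start` would run inside `handleStartCopilotOAuth` and `close` inside the dialog cleanup effect that currently only aborts `copilotOAuthAbortRef`.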
```rust
async fn finalize_copilot_oauth(
	state: &Arc<ApiState>,
	credentials: &crate::github_copilot_oauth::OAuthCredentials,
	model: &str,
) -> anyhow::Result<()> {
	let instance_dir = (**state.instance_dir.load()).clone();
	crate::github_copilot_oauth::save_credentials(&instance_dir, credentials)
		.context("failed to save GitHub Copilot OAuth credentials")?;

	if let Some(llm_manager) = state.llm_manager.read().await.as_ref() {
		llm_manager
			.set_copilot_oauth_credentials(credentials.clone())
			.await;
	}

	let config_path = state.config_path.read().await.clone();
	let content = if config_path.exists() {
		tokio::fs::read_to_string(&config_path)
			.await
			.context("failed to read config.toml")?
	} else {
		String::new()
	};

	let mut doc: toml_edit::DocumentMut = content.parse().context("failed to parse config.toml")?;
	apply_model_routing(&mut doc, model);
	tokio::fs::write(&config_path, doc.to_string())
		.await
		.context("failed to write config.toml")?;

	refresh_defaults_config(state).await;

	state
		.provider_setup_tx
		.try_send(crate::ProviderSetupEvent::ProvidersConfigured)
		.ok();

	Ok(())
}
```
Make Copilot OAuth finalization transactional.
`save_credentials()` at line 872 and `set_copilot_oauth_credentials()` at line 877 run before the TOML parse/write at lines 890-894. If that config update fails, the call returns an error, but github_copilot_oauth.json is already live, so the provider is left half-configured and `get_providers()` will report it as installed on the next refresh.
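One way to get the ordering the comment asks for is to make the config write the first, atomic step. The sketch below uses blocking std I/O and stubs out the credential calls, so everything beyond the names quoted in the review is an assumption rather than the real async implementation.

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Write config via a sibling temp file plus rename: rename within one
// filesystem is atomic on most platforms, so readers never see a partial file.
fn write_config_atomically(config_path: &Path, contents: &str) -> std::io::Result<()> {
    let tmp_path = config_path.with_extension("toml.tmp");
    let mut tmp = fs::File::create(&tmp_path)?;
    tmp.write_all(contents.as_bytes())?;
    tmp.sync_all()?;
    fs::rename(&tmp_path, config_path)
}

fn finalize(config_path: &Path, new_config: &str) -> std::io::Result<()> {
    // 1. Config first: if this fails, nothing else has changed yet.
    write_config_atomically(config_path, new_config)?;
    // 2. Only now save credentials and update the in-memory manager
    //    (stubbed here; these are the calls named in the review).
    Ok(())
}
```

With this order, a failed `config.toml` write leaves no stale `github_copilot_oauth.json` behind.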
```rust
if provider == "github-copilot-oauth" {
	let instance_dir = (**state.instance_dir.load()).clone();
	let cred_path = crate::github_copilot_oauth::credentials_path(&instance_dir);
	if cred_path.exists() {
		tokio::fs::remove_file(&cred_path)
			.await
			.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
	}
	// Also clear the cached Copilot API token since it was derived from OAuth.
	let token_path = crate::github_copilot_auth::credentials_path(&instance_dir);
	if token_path.exists() {
		tokio::fs::remove_file(&token_path)
			.await
			.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
	}
	if let Some(manager) = state.llm_manager.read().await.as_ref() {
		manager.clear_copilot_oauth_credentials().await;
		manager.clear_copilot_token().await;
	}
	return Ok(Json(ProviderUpdateResponse {
		success: true,
		message: "GitHub Copilot OAuth credentials removed".into(),
	}));
}
```
Cancel pending device sessions when removing Copilot OAuth.
This branch deletes the files and clears memory, but any already spawned run_copilot_device_oauth_background() task keeps polling COPILOT_DEVICE_OAUTH_SESSIONS and can write the credentials back immediately after the user removes the provider.
Suggested fix

```diff
 if provider == "github-copilot-oauth" {
+	{
+		let mut sessions = COPILOT_DEVICE_OAUTH_SESSIONS.write().await;
+		for session in sessions.values_mut() {
+			if session.status.is_pending() {
+				session.status = DeviceOAuthSessionStatus::Failed(
+					"Sign-in cancelled by user.".to_string(),
+				);
+			}
+		}
+	}
 	let instance_dir = (**state.instance_dir.load()).clone();
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
if provider == "github-copilot-oauth" {
	{
		let mut sessions = COPILOT_DEVICE_OAUTH_SESSIONS.write().await;
		for session in sessions.values_mut() {
			if session.status.is_pending() {
				session.status = DeviceOAuthSessionStatus::Failed(
					"Sign-in cancelled by user.".to_string(),
				);
			}
		}
	}
	let instance_dir = (**state.instance_dir.load()).clone();
	let cred_path = crate::github_copilot_oauth::credentials_path(&instance_dir);
	if cred_path.exists() {
		tokio::fs::remove_file(&cred_path)
			.await
			.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
	}
	// Also clear the cached Copilot API token since it was derived from OAuth.
	let token_path = crate::github_copilot_auth::credentials_path(&instance_dir);
	if token_path.exists() {
		tokio::fs::remove_file(&token_path)
			.await
			.map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
	}
	if let Some(manager) = state.llm_manager.read().await.as_ref() {
		manager.clear_copilot_oauth_credentials().await;
		manager.clear_copilot_token().await;
	}
	return Ok(Json(ProviderUpdateResponse {
		success: true,
		message: "GitHub Copilot OAuth credentials removed".into(),
	}));
}
```
```rust
/// Save GitHub Copilot OAuth credentials to disk with restricted permissions (0600).
pub fn save_credentials(instance_dir: &Path, creds: &OAuthCredentials) -> Result<()> {
	let path = credentials_path(instance_dir);
	let data = serde_json::to_string_pretty(creds)
		.context("failed to serialize GitHub Copilot OAuth credentials")?;

	#[cfg(unix)]
	{
		use std::fs::OpenOptions;
		use std::io::Write;
		use std::os::unix::fs::OpenOptionsExt;

		let mut file = OpenOptions::new()
			.create(true)
			.truncate(true)
			.write(true)
			.mode(0o600)
			.open(&path)
			.with_context(|| {
				format!(
					"failed to create {} with restricted permissions",
					path.display()
				)
			})?;
		file.write_all(data.as_bytes())
			.with_context(|| format!("failed to write {}", path.display()))?;
		file.sync_all()
			.with_context(|| format!("failed to sync {}", path.display()))?;
	}
```
🧩 Analysis chain
🏁 Script executed:

```sh
# First, let's read the actual file to see the complete context
cat -n src/github_copilot_oauth.rs | head -260 | tail -80
```

Length of output: 3056

🏁 Script executed:

```sh
# Let's also check if there are any tests or related code dealing with credentials
rg -A 5 -B 5 "save_credentials" src/
```

Length of output: 9245

🏁 Script executed:

```sh
# Check if there's any chmod or permission-related code elsewhere that might handle this
rg "chmod|0o600|0o644|set_permissions" src/
```

Length of output: 1097

🏁 Script executed:

```sh
# Check openai_auth.rs save_credentials function
sed -n '145,165p' src/openai_auth.rs
```

Length of output: 837

🏁 Script executed:

```sh
# Check auth.rs save_credentials function - seems to also use set_permissions
sed -n '50,70p' src/auth.rs
```

Length of output: 689

🏁 Script executed:

```sh
# Look at the github_copilot_auth.rs that was mentioned in the search
cat -n src/github_copilot_auth.rs | sed -n '1,50p'
```

Length of output: 2313

🏁 Script executed:

```sh
# Find the exact location and content of set_permissions calls in openai_auth.rs
rg -A 10 -B 5 "save_credentials" src/openai_auth.rs | head -40
```

Length of output: 758

🏁 Script executed:

```sh
# Find the exact location and content of set_permissions calls in auth.rs
rg -A 10 -B 5 "save_credentials" src/auth.rs | head -40
```

Length of output: 1151

🏁 Script executed:

```sh
# Let's also check github_copilot_auth.rs more thoroughly
wc -l src/github_copilot_auth.rs
```

Length of output: 96

🏁 Script executed:

```sh
# Check the github_copilot_auth.rs to see if it has similar save logic
rg -A 15 -B 5 "save" src/github_copilot_auth.rs | head -50
```

Length of output: 860

🏁 Script executed:

```sh
# Let's also verify the UNIX semantics - check if there's any related documentation or comments about this
rg -B 5 -A 5 "truncate.*mode|mode.*truncate" src/
```

Length of output: 48
mode(0o600) does not re-tighten permissions on file rewrites.
On Unix, OpenOptionsExt::mode() only sets permissions during file creation. When the credentials file already exists and is rewritten with truncate(true), the existing permissions are preserved. If the file was initially created with broader permissions (or by a different process), subsequent rewrites will keep those permissions, leaving the OAuth token readable by unintended users.
The correct approach, used in openai_auth.rs and auth.rs, is to explicitly call std::fs::set_permissions() after writing to ensure permissions are always 0o600.
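A sketch of the suggested shape, modeled on the description of openai_auth.rs above; the helper name and path handling are illustrative, not repo code.

```rust
// Illustrative Unix-only helper: write the credentials, then unconditionally
// re-apply 0o600 so a pre-existing file gets tightened too.
#[cfg(unix)]
fn write_credentials_0600(path: &std::path::Path, data: &[u8]) -> std::io::Result<()> {
    use std::fs::{self, OpenOptions, Permissions};
    use std::io::Write;
    use std::os::unix::fs::{OpenOptionsExt, PermissionsExt};

    let mut file = OpenOptions::new()
        .create(true)
        .truncate(true)
        .write(true)
        .mode(0o600) // only takes effect when the file is newly created
        .open(path)?;
    file.write_all(data)?;
    file.sync_all()?;
    // The crucial addition: force 0o600 even when the file already existed.
    fs::set_permissions(path, Permissions::from_mode(0o600))
}
```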
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/llm/model.rs (1)
2900-3007: ⚠️ Potential issue | 🟠 Major
Key pending function calls by `output_index` or `item_id` to handle interleaved streaming.
The parser currently maintains only a single `current_tool_call`, so a second `response.output_item.added` will overwrite the first before its deltas or done event arrive. When multiple function calls stream concurrently, this causes arguments to merge or drop. Use `output_index` (or `item["id"]`) to track pending calls in a map, and add a test exercising two interleaved function calls to prevent regression.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/llm/model.rs` around lines 2900 - 3007, The parser currently uses a single current_tool_call which gets overwritten when multiple concurrent function calls stream; change current_tool_call: Option<serde_json::Value> to a map (e.g., HashMap<String, serde_json::Value>) keyed by item["id"] or event_body["output_index"] and update handlers for "response.output_item.added", "response.function_call_arguments.delta", and "response.output_item.done" to lookup the pending call by id/output_index, append deltas to that entry's "arguments" string, and move the completed entry into accumulated_tool_calls on done (ensuring you still check duplicates before pushing). Update reconstruction logic that iterates accumulated_tool_calls accordingly and add a unit/integration test that streams two interleaved function calls to validate arguments stay separate.
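A minimal sketch of the keyed-by-`output_index` bookkeeping the prompt describes; the event handlers are simplified stand-ins for the `response.output_item.added` / `response.function_call_arguments.delta` / `response.output_item.done` branches, not the real parser.

```rust
use std::collections::HashMap;

// Track each in-flight function call by output_index so interleaved
// streams cannot clobber one another.
#[derive(Default)]
struct StreamState {
    pending: HashMap<u64, String>, // output_index -> accumulated arguments
    completed: Vec<(u64, String)>, // finished calls, in completion order
}

impl StreamState {
    fn on_item_added(&mut self, output_index: u64) {
        self.pending.entry(output_index).or_default();
    }
    fn on_arguments_delta(&mut self, output_index: u64, delta: &str) {
        self.pending.entry(output_index).or_default().push_str(delta);
    }
    fn on_item_done(&mut self, output_index: u64) {
        if let Some(arguments) = self.pending.remove(&output_index) {
            self.completed.push((output_index, arguments));
        }
    }
}
```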
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/llm/model.rs`:
- Around line 2864-2869: The tracing::warn call in src/llm/model.rs logs the
full Responses payload via raw_body = %body; change this to avoid persisting
sensitive content by logging only metadata or a truncated/redacted sample:
replace raw_body = %body with either the body length (e.g., body.len()), a
truncated snippet (first N characters) or a redacted sample (mask tool
args/assistant text) and keep existing fields provider_label, output_items, and
output_types; update the tracing::warn invocation so it emits safe_body or
body_len instead of the full body.
- Around line 2856-2875: The code currently treats an empty/unsupported
Responses payload as success by returning
OneOrMany::one(AssistantContent::Text("")) from the match over
OneOrMany::many(assistant_content); change that to surface an error (or a clear
sentinel the caller checks) so attempt_with_retries() will retry/fallback —
e.g., return a Result::Err with a descriptive error (e.g., EmptyResponse or
parsing error) from this branch instead of constructing
AssistantContent::Text(""), and apply the same change to the similar block
around the 3618-3657 region; reference OneOrMany::many, AssistantContent::Text /
Text, and attempt_with_retries() when making the change.
- Around line 2638-2656: The loop in collect_openai_text_content currently
recurses into every unknown map key and should be narrowed to only known wrapper
fields (e.g., "payload", specifically "payload.value") instead of blanket
recursion; modify the logic in collect_openai_text_content so that after
skipping the explicit safe keys
("type","id","call_id","name","arguments","status","role","text","summary","refusal","content")
it does NOT call collect_openai_text_content on every remaining key, but only
inspects/recurses into explicitly allowed wrappers like "payload" -> "value"
(and any other vetted wrapper fields your tests require); apply the same
tightening to the analogous code paths referenced (around the other occurrences
you noted) so nested web-search, files, images, or logprobs are not exposed as
text.
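The redaction asked for in the first model.rs prompt above can be sketched as a small helper; the name and truncation length are illustrative, not repo code.

```rust
// Illustrative helper: log a bounded sample of the Responses payload plus
// its total length instead of the raw body.
fn safe_body_sample(body: &str, max_chars: usize) -> String {
    let total = body.chars().count();
    if total <= max_chars {
        body.to_string()
    } else {
        let prefix: String = body.chars().take(max_chars).collect();
        format!("{prefix}… ({total} chars total)")
    }
}
```

The `tracing::warn!` call could then emit something like `safe_body = %safe_body_sample(&body, 256)` alongside the existing `provider_label`, `output_items`, and `output_types` fields.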
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 62cb7bf7-8293-4fc9-9565-86c8823b2f8d
📒 Files selected for processing (3)
interface/src/api/client.ts, src/github_copilot_oauth.rs, src/llm/model.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- interface/src/api/client.ts
```rust
for (key, nested_value) in map {
    if matches!(
        key.as_str(),
        "type"
            | "id"
            | "call_id"
            | "name"
            | "arguments"
            | "status"
            | "role"
            | "text"
            | "summary"
            | "refusal"
            | "content"
    ) {
        continue;
    }
    collect_openai_text_content(nested_value, text_parts);
}
```
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the code at the specified line ranges
sed -n '2638,2656p' src/llm/model.rs | cat -n

Repository: spacedriveapp/spacebot
Length of output: 833
🏁 Script executed:
# Also check the other mentioned line ranges
sed -n '2737,2755p' src/llm/model.rs | cat -n

Repository: spacedriveapp/spacebot
Length of output: 833
🏁 Script executed:
# And the third range
sed -n '3553,3587p' src/llm/model.rs | cat -n

Repository: spacedriveapp/spacebot
Length of output: 1515
🏁 Script executed:
# Find and examine the collect_openai_text_content function definition
rg -n "fn collect_openai_text_content" src/llm/model.rs -A 20

Repository: spacedriveapp/spacebot
Length of output: 1098
🏁 Script executed:
# Also search for extract_text_content_from_responses_output_item
rg -n "fn extract_text_content_from_responses_output_item" src/llm/model.rs -A 20

Repository: spacedriveapp/spacebot
Length of output: 889
🏁 Script executed:
# Get complete collect_openai_text_content function
sed -n '2602,2700p' src/llm/model.rs | cat -n

Repository: spacedriveapp/spacebot
Length of output: 4263
🏁 Script executed:
# Search for any TODO/FIXME comments near these line ranges
rg -n "TODO|FIXME|XXX" src/llm/model.rs | grep -E "(26[0-9]{2}|27[0-9]{2}|35[0-9]{2})"

Repository: spacedriveapp/spacebot
Length of output: 48
🏁 Script executed:
# Check where these functions are called from
rg -n "collect_openai_text_content|extract_text_content_from_responses_output_item" src/llm/model.rs | head -30

Repository: spacedriveapp/spacebot
Length of output: 2136
🏁 Script executed:
# Check context around the test to understand intent
sed -n '3545,3595p' src/llm/model.rs | cat -n

Repository: spacedriveapp/spacebot
Length of output: 2148
🏁 Script executed:
# Search for OpenAI Responses API documentation references
rg -n "platform.openai.com|Responses API" src/llm/model.rs -B 2 -A 2

Repository: spacedriveapp/spacebot
Length of output: 635
🏁 Script executed:
# Check if there are any filtering patterns for actual metadata fields
rg -n "logprobs|code.*interpreter|web.*search|file.*search" src/llm/model.rs

Repository: spacedriveapp/spacebot
Length of output: 48
Narrow the fallback to only specific wrapper fields.
The current implementation recursively processes every unknown nested key, which works for payload.value (validated by your test), but is too permissive. When the OpenAI Responses API introduces web-search sources, code-interpreter outputs, file-search results, image URLs, or logprobs in nested payloads, this will surface them as user-visible text. Instead of a blanket recursion into unknown keys, explicitly handle only the wrapper fields you actually want to expose (e.g., payload.value). This matches the intentional approach you're already taking elsewhere (like reasoning content flattening).
Also applies to: lines 2737-2755, 3553-3587
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/llm/model.rs` around lines 2638 - 2656, The loop in
collect_openai_text_content currently recurses into every unknown map key and
should be narrowed to only known wrapper fields (e.g., "payload", specifically
"payload.value") instead of blanket recursion; modify the logic in
collect_openai_text_content so that after skipping the explicit safe keys
("type","id","call_id","name","arguments","status","role","text","summary","refusal","content")
it does NOT call collect_openai_text_content on every remaining key, but only
inspects/recurses into explicitly allowed wrappers like "payload" -> "value"
(and any other vetted wrapper fields your tests require); apply the same
tightening to the analogous code paths referenced (around the other occurrences
you noted) so nested web-search, files, images, or logprobs are not exposed as
text.
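A stand-alone sketch of the narrowed fallback the comment asks for. The `Value` enum here is a minimal stand-in for `serde_json::Value`, and the allowlist contents are illustrative rather than the crate's actual set:

```rust
use std::collections::BTreeMap;

// Minimal stand-in for serde_json::Value, just enough to model the fallback.
enum Value {
    Text(String),
    Object(BTreeMap<String, Value>),
}

// Metadata keys the real function already skips explicitly.
const SKIP_KEYS: &[&str] = &[
    "type", "id", "call_id", "name", "arguments", "status", "role",
    "text", "summary", "refusal", "content",
];

// Wrapper keys the fallback is allowed to descend into (illustrative list).
const ALLOWED_WRAPPERS: &[&str] = &["payload"];

fn collect_text(value: &Value, out: &mut Vec<String>) {
    match value {
        Value::Text(s) => out.push(s.clone()),
        Value::Object(map) => {
            for (key, nested) in map {
                if SKIP_KEYS.contains(&key.as_str()) {
                    continue;
                }
                // Instead of recursing into every unknown key, only descend
                // into vetted wrappers, and only through their "value" field.
                if ALLOWED_WRAPPERS.contains(&key.as_str()) {
                    if let Value::Object(inner) = nested {
                        if let Some(wrapped) = inner.get("value") {
                            collect_text(wrapped, out);
                        }
                    }
                }
            }
        }
    }
}
```

With this shape, an unknown nested key such as `logprobs` is simply skipped instead of being flattened into user-visible text.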
```rust
let choice = match OneOrMany::many(assistant_content) {
    Ok(choice) => choice,
    Err(_) => {
        let output_types = output_items
            .iter()
            .map(|item| item["type"].as_str().unwrap_or("<missing-type>"))
            .collect::<Vec<_>>()
            .join(", ");
        tracing::warn!(
            provider = %provider_label,
            output_items = output_items.len(),
            output_types = %output_types,
            raw_body = %body,
            "empty response from responses API — returning empty text to allow retry"
        );
        OneOrMany::one(AssistantContent::Text(Text {
            text: String::new(),
        }))
    }
};
```
Empty output is still a success to the retry path.
Inside this file, attempt_with_retries() only retries on Err. Returning Ok(AssistantContent::Text("")) here makes unsupported/empty Responses payloads look successful, so the fallback chain never runs, and the new tests lock that behavior in. If empty output is supposed to trigger retry/fallback, this branch still needs to surface an error or a sentinel the caller checks before returning success.
Also applies to: 3618-3657
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/llm/model.rs` around lines 2856 - 2875, The code currently treats an
empty/unsupported Responses payload as success by returning
OneOrMany::one(AssistantContent::Text("")) from the match over
OneOrMany::many(assistant_content); change that to surface an error (or a clear
sentinel the caller checks) so attempt_with_retries() will retry/fallback —
e.g., return a Result::Err with a descriptive error (e.g., EmptyResponse or
parsing error) from this branch instead of constructing
AssistantContent::Text(""), and apply the same change to the similar block
around the 3618-3657 region; reference OneOrMany::many, AssistantContent::Text /
Text, and attempt_with_retries() when making the change.
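One minimal way to surface the failure to a retry loop that only retries on `Err` is sketched below. `CompletionError` and `finalize_choice` are hypothetical stand-ins for the crate's real types, kept stdlib-only:

```rust
// Hypothetical stand-ins for the crate's real error and content types.
#[derive(Debug, PartialEq)]
enum CompletionError {
    EmptyResponse { provider: String },
}

// A retry loop that only retries on Err can only see the failure if the
// empty-output branch returns Err instead of Ok with empty text.
fn finalize_choice(
    provider: &str,
    assistant_content: Vec<String>,
) -> Result<Vec<String>, CompletionError> {
    if assistant_content.is_empty() {
        return Err(CompletionError::EmptyResponse {
            provider: provider.to_string(),
        });
    }
    Ok(assistant_content)
}
```

The caller's fallback chain then runs on the `Err` path instead of treating an empty payload as success.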
```rust
tracing::warn!(
    provider = %provider_label,
    output_items = output_items.len(),
    output_types = %output_types,
    raw_body = %body,
    "empty response from responses API — returning empty text to allow retry"
```
Avoid logging the full provider body at warn level.
raw_body = %body will persist the entire Responses payload—including assistant text and tool arguments—whenever this path is hit. Please log shape metadata or a truncated/redacted sample instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/llm/model.rs` around lines 2864 - 2869, The tracing::warn call in
src/llm/model.rs logs the full Responses payload via raw_body = %body; change
this to avoid persisting sensitive content by logging only metadata or a
truncated/redacted sample: replace raw_body = %body with either the body length
(e.g., body.len()), a truncated snippet (first N characters) or a redacted
sample (mask tool args/assistant text) and keep existing fields provider_label,
output_items, and output_types; update the tracing::warn invocation so it emits
safe_body or body_len instead of the full body.
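A redaction helper along these lines would satisfy the comment. The name `body_preview` and the `(length, preview)` return shape are illustrative, not the crate's API:

```rust
// Log-safe view of a provider response body: the byte length plus the first
// `limit` characters, so full assistant text and tool arguments never reach
// the logs. Truncation is done on char boundaries to stay valid UTF-8.
fn body_preview(body: &str, limit: usize) -> (usize, String) {
    let preview: String = body.chars().take(limit).collect();
    (body.len(), preview)
}
```

The call site would then emit `body_len` and `body_preview` fields in `tracing::warn!` instead of `raw_body = %body`.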
Actionable comments posted: 1
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 8babd501-9de3-4c28-8b49-dc19e92df160
📒 Files selected for processing (3)
.gitignore, interface/src/components/Markdown.tsx, interface/src/components/WebChatPanel.tsx
✅ Files skipped from review due to trivial changes (2)
- .gitignore
- interface/src/components/WebChatPanel.tsx
```tsx
// Stable module-level references so ReactMarkdown never re-renders due to
// new array/object identities on every call.
const remarkPlugins = [remarkGfm];
const rehypePlugins = [rehypeRaw];
```
🧩 Analysis chain
🌐 Web query:
For react-markdown 10.1.0, what is the recommended secure setup when using rehype-raw, and is rehype-sanitize required to prevent XSS from untrusted markdown?
💡 Result:
For react-markdown 10.1.0, the recommended secure posture is:
- Default (recommended for untrusted markdown): do not use rehype-raw. React-markdown "typically escapes HTML … because it is dangerous" and is "secure by default" as long as you don't introduce unsafe plugins / components / urlTransform overrides. (github.com)
- If you enable HTML with rehype-raw:
  - Treat it as trusted-only input or sanitize. rehype-raw explicitly warns that improper use can enable XSS and says to either not combine it with user content or use rehype-sanitize. (app.unpkg.com)
Recommended secure setup when using rehype-raw
Use rehype-sanitize after rehype-raw (and after any other “unsafe” steps), and customize the schema as needed:
```tsx
import Markdown from 'react-markdown'
import rehypeRaw from 'rehype-raw'
import rehypeSanitize from 'rehype-sanitize'

<Markdown rehypePlugins={[rehypeRaw, rehypeSanitize]}>
  {markdown}
</Markdown>
```

React-markdown's own security guidance: "To make sure the content is completely safe, even after what plugins do, use rehype-sanitize." (github.com)
rehype-sanitize drops anything not explicitly allowed (defaults to a GitHub-like schema) and recommends sanitizing whenever you don’t completely trust authors/plugins. (github.com)
Is rehype-sanitize required to prevent XSS from untrusted markdown?
- If you do not use rehype-raw (default behavior): not strictly required for "raw HTML XSS," because HTML is escaped/ignored by default; the main risk becomes what you add (plugins/components/URL handling). (github.com)
- If you do use rehype-raw with untrusted markdown: you should treat rehype-sanitize (or equivalent sanitization) as required to prevent XSS. (app.unpkg.com)
Citations:
- 1: https://github.com/remarkjs/react-markdown
- 2: https://app.unpkg.com/rehype-raw%404.0.2/files/readme.md
- 3: https://github.com/remarkjs/react-markdown
- 4: https://github.com/rehypejs/rehype-sanitize
- 5: https://github.com/remarkjs/react-markdown
- 6: https://app.unpkg.com/rehype-raw%404.0.2/files/readme.md
🏁 Script executed:
cd interface && head -30 src/components/Markdown.tsx

Repository: spacedriveapp/spacebot
Length of output: 932
🏁 Script executed:
# Check for package.json to see react-markdown version
fd -t f package.json | head -5

Repository: spacedriveapp/spacebot
Length of output: 127
🏁 Script executed:
# Search for usages of the Markdown component to understand data sources
rg "Markdown\s*[<({]" src --type tsx --type ts -B 2 -A 2

Repository: spacedriveapp/spacebot
Length of output: 93
🏁 Script executed:
# Check if any sanitization is already in place elsewhere
rg -i "sanitize|xss|escape" src/components/Markdown.tsx

Repository: spacedriveapp/spacebot
Length of output: 133
🏁 Script executed:
cat -n interface/src/components/Markdown.tsx

Repository: spacedriveapp/spacebot
Length of output: 1241
🏁 Script executed:
cat interface/package.json | grep -A 2 -B 2 react-markdown

Repository: spacedriveapp/spacebot
Length of output: 204
🏁 Script executed:
rg "Markdown" interface/src --type-add 'tsx:*.tsx' -t tsx -B 1 -A 1 | head -40

Repository: spacedriveapp/spacebot
Length of output: 2559
🏁 Script executed:
# Search for where Markdown component receives content (item.conclusion, item.result, release.body)
rg "conclusion|release\.body|item\.result" interface/src --type-add 'tsx:*.tsx' -t tsx -B 2 -A 2

Repository: spacedriveapp/spacebot
Length of output: 2987
Add rehype-sanitize to prevent XSS from untrusted markdown sources.
Line 9 enables rehypeRaw without sanitization. The Markdown component processes untrusted external content (e.g., release.body from GitHub API in Settings.tsx) and AI-generated output (e.g., item.conclusion, item.result in ChannelDetail.tsx). Without sanitization, malicious HTML in these sources can execute arbitrary scripts. Add rehype-sanitize after rehypeRaw to safely parse raw HTML while blocking dangerous elements.
Fix

```diff
 import rehypeRaw from "rehype-raw";
+import rehypeSanitize from "rehype-sanitize";
 const remarkPlugins = [remarkGfm];
-const rehypePlugins = [rehypeRaw];
+const rehypePlugins = [rehypeRaw, rehypeSanitize];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@interface/src/components/Markdown.tsx` at line 9, The Markdown component
currently uses rehypeRaw in the rehypePlugins array which allows raw HTML from
untrusted sources; update the rehypePlugins in Markdown.tsx to include
rehypeSanitize immediately after rehypeRaw (importing rehypeSanitize as needed)
so raw HTML is parsed but unsafe elements/attributes are stripped—ensure the
import for rehypeRaw remains and add the rehypeSanitize import and include it in
the rehypePlugins array.
Adds browser-based OAuth authentication for GitHub Copilot as an alternative to manually providing a PAT. Uses GitHub's OAuth 2.0 Device Authorization Grant (RFC 8628) with the same client ID as OpenCode. - New module github_copilot_oauth: device code request, background token polling, credential persistence with 0600 permissions - LlmManager: prefer OAuth credentials over static PAT when both exist, lazy load from disk on init - API: start/status/delete endpoints for Copilot OAuth sessions, background poller with configurable interval, ProviderStatus updated with github_copilot_oauth field
…ange

The previous client ID (Ov23li8tweQw6odWQebz) was not whitelisted by GitHub for the copilot_internal/v2/token endpoint, causing 404s on every token exchange. Switch to 01ab8ac9400c4e429b23 (the VS Code GitHub Copilot extension app), which GitHub has whitelisted. This is consistent with how neovim/copilot.vim and other third-party Copilot integrations handle the same constraint.

Also fix three pre-existing collapsible_if clippy warnings in model.rs by collapsing nested if-let/if blocks using && as clippy suggests.
- Wrap Markdown component with React.memo to skip re-renders when props are unchanged
- Hoist remarkPlugins/rehypePlugins/components to module-level stable refs so referential equality holds across renders
- Move input state into FloatingChatInput so typing no longer triggers a re-render of the full timeline
- Replace useEffect([value]) reflow pattern with a single on-mount native input event listener

Also add .spacebot-dev/ to .gitignore to keep dev-instance config out of version control.
e32ff84 to 229c96e
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/llm/model.rs (1)
3612-3657: ⚠️ Potential issue | 🟠 Major

Track reconstructed tool calls by item_id.

The SSE fallback parser keeps only one current_tool_call, but response.function_call_arguments.delta events are keyed by item_id. If two tool calls interleave, their argument deltas will be merged into whichever call was seen last, and any response.output_item.done flushes that slot regardless of item type. This should mirror the HashMap<String, OpenAiStreamingToolCall> approach already used in process_openai_responses_stream_raw_event().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/llm/model.rs` around lines 3612 - 3657, The SSE fallback parser currently uses a single mutable current_tool_call which causes interleaved tool-call argument deltas (response.function_call_arguments.delta) to be merged incorrectly; replace this with a map keyed by item_id (e.g., HashMap<String, serde_json::Value>) so each in-progress tool call is tracked separately, update logic in the loop handling response.output_item.added to insert item.clone() into the map by its item_id, modify response.function_call_arguments.delta handling to look up the correct entry by item_id and append the delta to that entry's "arguments", and on response.output_item.done remove the specific entry by item_id and push it into accumulated_tool_calls (mirroring the HashMap<String, OpenAiStreamingToolCall> approach used in process_openai_responses_stream_raw_event()).
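The per-`item_id` bookkeeping described above can be sketched with plain `String` argument buffers standing in for the real `OpenAiStreamingToolCall` type; the struct and method names here are illustrative:

```rust
use std::collections::HashMap;

// One in-progress arguments buffer per item_id, so interleaved
// function_call_arguments.delta events never bleed into each other.
#[derive(Default)]
struct PendingToolCalls {
    pending: HashMap<String, String>,
    finished: Vec<(String, String)>,
}

impl PendingToolCalls {
    // response.output_item.added
    fn item_added(&mut self, item_id: &str) {
        self.pending.entry(item_id.to_string()).or_default();
    }

    // response.function_call_arguments.delta: append to the right buffer.
    fn arguments_delta(&mut self, item_id: &str, delta: &str) {
        self.pending
            .entry(item_id.to_string())
            .or_default()
            .push_str(delta);
    }

    // response.output_item.done: move the completed call out of the map.
    fn item_done(&mut self, item_id: &str) {
        if let Some(args) = self.pending.remove(item_id) {
            self.finished.push((item_id.to_string(), args));
        }
    }
}
```

Because each delta is routed by `item_id`, two interleaved calls keep their argument strings separate.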
♻️ Duplicate comments (7)
interface/src/routes/Settings.tsx (1)
673-701: ⚠️ Potential issue | 🟠 Major

Persist the Copilot OAuth state token so dialog close can cancel the backend flow.

handleStartCopilotOAuth() passes result.state straight into monitorCopilotOAuth() and then drops it. When the dialog closes, the effect only aborts the local poller, so the server-side session keeps running and may still complete after the user abandoned the flow. Store the state token and use it from the cleanup path to cancel the backend session.

Also applies to: 720-735
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@interface/src/routes/Settings.tsx` around lines 673 - 701, handleStartCopilotOAuth currently passes result.state only into monitorCopilotOAuth and doesn't persist it, so closing the dialog can't cancel the server-side session; update handleStartCopilotOAuth to save result.state (e.g., via a new state or ref like copilotOAuthStateRef or setCopilotOAuthState) when setting copilot device info, and modify the dialog cleanup/close path (the same place that aborts copilotOAuthAbortRef) to read that persisted state and call the backend cancellation endpoint (or trigger the existing cancellation mutation) with that state before/after aborting the local poller; ensure monitorCopilotOAuth still receives the state for polling but the persisted token is available for the cleanup handler to cancel the server session.src/api/providers.rs (2)
891-928: ⚠️ Potential issue | 🟠 Major

Finalize GitHub Copilot OAuth atomically.

finalize_copilot_oauth() saves github_copilot_oauth.json and updates LlmManager before the config.toml read/parse/write succeeds. If that later write fails, get_providers() at Lines 415-416 will still report github_copilot_oauth as configured on the next refresh, leaving the provider half-installed. Reorder the writes or roll back the saved credentials and in-memory cache on failure.
Verify each finding against the current code and only fix it if needed. In `@src/api/providers.rs` around lines 891 - 928, finalize_copilot_oauth currently persists github_copilot_oauth.json via crate::github_copilot_oauth::save_credentials and updates the in-memory LlmManager via llm_manager.set_copilot_oauth_credentials before updating config.toml, which can leave the provider half-installed if the config write fails; change the flow so the config.toml update (reading/parsing, apply_model_routing, writing) succeeds first and only then call save_credentials and set_copilot_oauth_credentials, or alternatively perform save_credentials and set_copilot_oauth_credentials but on any subsequent error roll them back (remove saved file via crate::github_copilot_oauth::delete or equivalent and clear LlmManager credentials via set_copilot_oauth_credentials(None) or a clear method); ensure provider_setup_tx is only sent after all steps succeed and call refresh_defaults_config only after successful commit.
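The suggested reordering can be sketched as a generic two-step commit; the paths and plain `fs::write` calls are stand-ins for the real config and credential persistence:

```rust
use std::fs;
use std::path::Path;

// Commit ordering sketch: write the failable config first, then persist the
// credentials, rolling the credentials back if they fail to write.
fn finalize(
    config_path: &Path,
    config_toml: &str,
    creds_path: &Path,
    creds_json: &str,
) -> std::io::Result<()> {
    // Step 1: the config.toml write happens before anything else is durable.
    fs::write(config_path, config_toml)?;

    // Step 2: only now save the OAuth credentials; on failure, remove the
    // partial file so the next providers refresh does not report the
    // provider as configured.
    if let Err(err) = fs::write(creds_path, creds_json) {
        let _ = fs::remove_file(creds_path);
        return Err(err);
    }
    Ok(())
}
```

With this ordering, a failed config write leaves no credentials behind, so the provider never appears half-installed.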
1856-1881: ⚠️ Potential issue | 🟠 Major

Cancel pending Copilot device sessions when removing the provider.

This branch removes the files and clears LlmManager, but it never clears COPILOT_DEVICE_OAUTH_SESSIONS. Any already spawned run_copilot_device_oauth_background() task can still finish and write the credentials right back after the user clicks Remove.
Verify each finding against the current code and only fix it if needed. In `@src/api/providers.rs` around lines 1856 - 1881, The removal branch for "github-copilot-oauth" must also cancel any in-flight Copilot device OAuth sessions so a concurrently running run_copilot_device_oauth_background() task cannot recreate credentials after removal; add logic to clear or cancel COPILOT_DEVICE_OAUTH_SESSIONS (or call a new cancel_copilot_device_oauth_sessions() helper) and, if you track a background task handle, abort or signal it to stop before deleting files and returning. Do this alongside the existing manager.clear_copilot_oauth_credentials()/clear_copilot_token() calls so sessions are invalidated atomically and no background task can write credentials back after removal.src/github_copilot_oauth.rs (1)
227-249: ⚠️ Potential issue | 🟠 Major

Re-apply 0600 after rewrites.

On Unix, OpenOptionsExt::mode(0o600) only affects file creation. If github_copilot_oauth.json already exists, the truncate/write path preserves its current permissions, unlike src/openai_auth.rs and src/auth.rs which call set_permissions() after writing. Add an explicit set_permissions(...0o600) after sync_all().

```bash
#!/bin/bash
set -euo pipefail
for file in src/github_copilot_oauth.rs src/openai_auth.rs src/auth.rs; do
  if [[ -f "$file" ]]; then
    echo "== $file =="
    rg -n -C3 'save_credentials|set_permissions|Permissions::from_mode|\.mode\(0o600\)' "$file"
    echo
  fi
done
```
Verify each finding against the current code and only fix it if needed. In `@src/github_copilot_oauth.rs` around lines 227 - 249, The Unix write block uses OpenOptionsExt::mode(0o600) which only applies on create, so after truncating/writing/sync_all() you must explicitly reapply restrictive permissions; add a call to std::fs::set_permissions(&path, std::fs::Permissions::from_mode(0o600)) (using std::os::unix::fs::PermissionsExt::from_mode) immediately after file.sync_all() to enforce 0o600 for existing files.src/llm/model.rs (3)
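A self-contained sketch of the fix on a Unix target; `write_credentials` is a hypothetical stand-in for the crate's save routine:

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// On Unix, OpenOptions::mode(0o600) only applies when the file is created;
// rewriting an existing file keeps its old permissions. Re-applying the mode
// after the write closes that gap.
#[cfg(unix)]
fn write_credentials(path: &Path, json: &str) -> std::io::Result<()> {
    let mut file = fs::OpenOptions::new()
        .write(true)
        .create(true)
        .truncate(true)
        .open(path)?;
    file.write_all(json.as_bytes())?;
    file.sync_all()?;
    // Enforce 0600 even when the file pre-existed with looser permissions.
    use std::os::unix::fs::PermissionsExt;
    fs::set_permissions(path, fs::Permissions::from_mode(0o600))?;
    Ok(())
}
```

This mirrors what src/openai_auth.rs and src/auth.rs reportedly already do after their writes.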
3576-3581: ⚠️ Potential issue | 🟠 Major

Don't log the full Responses body at warn level.

raw_body = %body persists assistant text and tool arguments whenever this path is hit. Log shape metadata or a truncated/redacted sample instead.
Verify each finding against the current code and only fix it if needed. In `@src/llm/model.rs` around lines 3576 - 3581, The warn log in tracing::warn! (inside the empty responses API branch) currently emits raw_body = %body which can leak assistant text and tool args; change this to avoid logging the full body by logging safe metadata instead (e.g., body_length = body.len(), body_preview = &body[..min(200, body.len())] or a redacted/hash value) and keep existing fields provider_label, output_items.len(), and output_types; update the tracing::warn! call that references provider_label, output_items, output_types, and body to use the new truncated/redacted preview or length/hash while removing the full raw_body value.
3350-3368: ⚠️ Potential issue | 🟠 Major

Don't recurse through every unknown nested key.

This fallback now turns any unrecognized nested field into user-visible text, and the new test locks that in. That will eventually surface provider metadata that is not assistant content. Please allowlist only the wrapper fields you explicitly want to expose, such as payload.value, instead of blanket recursion.

Also applies to: 3449-3467, 4265-4299
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/llm/model.rs` around lines 3350 - 3368, The current loop in collect_openai_text_content is recursing into every unknown map key and turning provider metadata into visible text; change the logic to only recurse into explicitly allowlisted wrapper fields (for example accept "payload" and then only descend into "payload.value" or other sanctioned wrapper keys) and skip all other unknown keys. Update the matches(...) condition in collect_openai_text_content to only continue for the small set of wrapper field names you want exposed and add targeted handling for allowed nested paths (e.g., inspect "payload" and then "value"), and apply the same fix to the equivalent blocks in the other similar locations referenced in this file to prevent blanket recursion into provider metadata.
3568-3587: ⚠️ Potential issue | 🟠 Major

Unsupported Responses payloads still bypass retry/fallback.

attempt_with_retries() only retries on Err. Returning Ok(AssistantContent::Text("")) here makes empty or unknown output look successful, and the new tests codify that behavior, so the fallback chain never runs.

Also applies to: 4331-4369
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/llm/model.rs` around lines 3568 - 3587, The match on OneOrMany::many(assistant_content) treats parse failures as a successful empty AssistantContent (OneOrMany::one(AssistantContent::Text(...))), which prevents attempt_with_retries() from retrying or falling back; instead, change the Err(_) arm to propagate an error so attempt_with_retries() sees a failure (e.g., return or Err(...) with a clear error variant/message referencing the provider and raw body). Locate the match producing choice (the OneOrMany::many / Err(_) branch) and replace the Ok-returning empty text with propagating an error (or map the parse failure into the function's error type), keeping the tracing::warn log but ensuring the branch yields an Err so retry/fallback logic runs; apply the same change in the analogous block around the OneOrMany usage at the other location (the 4331-4369 region).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/llm/model.rs`:
- Around line 938-954: The streaming path omits the reasoning field: replicate
the same reasoning-effort logic used in call_openai_responses() inside
stream_openai_responses() so the streaming Requests body includes
body["reasoning"] = { "effort": openai_effort }; specifically, reuse the
routing-based effort resolution (using
self.routing.thinking_effort_for_model(&self.model_name) with the same mapping
of "max"|"high" => "high", "medium" => "medium", "low" => "low", default =>
"medium") and set the resulting openai_effort into the JSON body built by
stream_openai_responses() so streaming ChatGPT/Codex requests get the same
reasoning override.
- Around line 938-954: The current hardcoded mapping that converts effort values
(e.g., "max"|"high" => "high") incorrectly collapses gpt-5.4 family semantics;
replace the manual match in the block that sets body["reasoning"] (the code that
computes openai_effort from effort) with the model-aware helper
openai_reasoning_effort(&self.model_name, effort) so normalization respects
model-specific values like "xhigh" for gpt-5.4 and gpt-5.4-pro, and add a
targeted unit test asserting that gpt-5.4 and gpt-5.4-pro normalize "max" →
"xhigh" (using the same test harness/pattern as existing tests for
openai_reasoning_effort).
---
Outside diff comments:
In `@src/llm/model.rs`:
- Around line 3612-3657: The SSE fallback parser currently uses a single mutable
current_tool_call which causes interleaved tool-call argument deltas
(response.function_call_arguments.delta) to be merged incorrectly; replace this
with a map keyed by item_id (e.g., HashMap<String, serde_json::Value>) so each
in-progress tool call is tracked separately, update logic in the loop handling
response.output_item.added to insert item.clone() into the map by its item_id,
modify response.function_call_arguments.delta handling to look up the correct
entry by item_id and append the delta to that entry's "arguments", and on
response.output_item.done remove the specific entry by item_id and push it into
accumulated_tool_calls (mirroring the HashMap<String, OpenAiStreamingToolCall>
approach used in process_openai_responses_stream_raw_event()).
---
Duplicate comments:
In `@interface/src/routes/Settings.tsx`:
- Around line 673-701: handleStartCopilotOAuth currently passes result.state
only into monitorCopilotOAuth and doesn't persist it, so closing the dialog
can't cancel the server-side session; update handleStartCopilotOAuth to save
result.state (e.g., via a new state or ref like copilotOAuthStateRef or
setCopilotOAuthState) when setting copilot device info, and modify the dialog
cleanup/close path (the same place that aborts copilotOAuthAbortRef) to read
that persisted state and call the backend cancellation endpoint (or trigger the
existing cancellation mutation) with that state before/after aborting the local
poller; ensure monitorCopilotOAuth still receives the state for polling but the
persisted token is available for the cleanup handler to cancel the server
session.
In `@src/api/providers.rs`:
- Around line 891-928: finalize_copilot_oauth currently persists
github_copilot_oauth.json via crate::github_copilot_oauth::save_credentials and
updates the in-memory LlmManager via llm_manager.set_copilot_oauth_credentials
before updating config.toml, which can leave the provider half-installed if the
config write fails; change the flow so the config.toml update (reading/parsing,
apply_model_routing, writing) succeeds first and only then call save_credentials
and set_copilot_oauth_credentials, or alternatively perform save_credentials and
set_copilot_oauth_credentials but on any subsequent error roll them back (remove
saved file via crate::github_copilot_oauth::delete or equivalent and clear
LlmManager credentials via set_copilot_oauth_credentials(None) or a clear
method); ensure provider_setup_tx is only sent after all steps succeed and call
refresh_defaults_config only after successful commit.
- Around line 1856-1881: The removal branch for "github-copilot-oauth" must also
cancel any in-flight Copilot device OAuth sessions so a concurrently running
run_copilot_device_oauth_background() task cannot recreate credentials after
removal; add logic to clear or cancel COPILOT_DEVICE_OAUTH_SESSIONS (or call a
new cancel_copilot_device_oauth_sessions() helper) and, if you track a
background task handle, abort or signal it to stop before deleting files and
returning. Do this alongside the existing
manager.clear_copilot_oauth_credentials()/clear_copilot_token() calls so
sessions are invalidated atomically and no background task can write credentials
back after removal.
In `@src/github_copilot_oauth.rs`:
- Around line 227-249: The Unix write block uses OpenOptionsExt::mode(0o600)
which only applies on create, so after truncating/writing/sync_all() you must
explicitly reapply restrictive permissions; add a call to
std::fs::set_permissions(&path, std::fs::Permissions::from_mode(0o600)) (using
std::os::unix::fs::PermissionsExt::from_mode) immediately after file.sync_all()
to enforce 0o600 for existing files.
In `@src/llm/model.rs`:
- Around line 3576-3581: The warn log in tracing::warn! (inside the empty
responses API branch) currently emits raw_body = %body which can leak assistant
text and tool args; change this to avoid logging the full body by logging safe
metadata instead (e.g., body_length = body.len(), body_preview =
&body[..min(200, body.len())] or a redacted/hash value) and keep existing fields
provider_label, output_items.len(), and output_types; update the tracing::warn!
call that references provider_label, output_items, output_types, and body to use
the new truncated/redacted preview or length/hash while removing the full
raw_body value.
- Around line 3350-3368: The current loop in collect_openai_text_content is
recursing into every unknown map key and turning provider metadata into visible
text; change the logic to only recurse into explicitly allowlisted wrapper
fields (for example accept "payload" and then only descend into "payload.value"
or other sanctioned wrapper keys) and skip all other unknown keys. Update the
matches(...) condition in collect_openai_text_content to only continue for the
small set of wrapper field names you want exposed and add targeted handling for
allowed nested paths (e.g., inspect "payload" and then "value"), and apply the
same fix to the equivalent blocks in the other similar locations referenced in
this file to prevent blanket recursion into provider metadata.
- Around line 3568-3587: The match on OneOrMany::many(assistant_content) treats
parse failures as a successful empty AssistantContent
(OneOrMany::one(AssistantContent::Text(...))), which prevents
attempt_with_retries() from retrying or falling back; instead, change the Err(_)
arm to propagate an error so attempt_with_retries() sees a failure (e.g., return
or Err(...) with a clear error variant/message referencing the provider and raw
body). Locate the match producing choice (the OneOrMany::many / Err(_) branch)
and replace the Ok-returning empty text with propagating an error (or map the
parse failure into the function's error type), keeping the tracing::warn log but
ensuring the branch yields an Err so retry/fallback logic runs; apply the same
change in the analogous block around the OneOrMany usage at the other location
(the 4331-4369 region).
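The allowlist fix described above for collect_openai_text_content can be sketched with a dependency-free stand-in for `serde_json::Value`; the enum, the wrapper-key names, and the `collect_text` helper are all illustrative assumptions, not the crate's actual schema:

```rust
// Minimal JSON-like tree standing in for serde_json::Value.
enum Node {
    Text(String),
    List(Vec<Node>),
    Map(Vec<(String, Node)>),
    Other,
}

// Collect assistant-visible text, descending only into an explicit
// allowlist of wrapper keys. Every other map entry (provider
// metadata, annotations, trace ids) is skipped instead of being
// recursed into and surfaced as visible text.
fn collect_text(node: &Node, out: &mut Vec<String>) {
    match node {
        Node::Text(s) => out.push(s.clone()),
        Node::List(items) => items.iter().for_each(|n| collect_text(n, out)),
        Node::Map(entries) => {
            for (key, child) in entries {
                if matches!(key.as_str(), "text" | "content" | "payload" | "value") {
                    collect_text(child, out);
                }
            }
        }
        Node::Other => {}
    }
}
```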
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 56b6d102-9c17-4000-9f47-e6c7b97d3ee3
📒 Files selected for processing (17)
- .gitignore
- interface/src/api/client.ts
- interface/src/api/schema.d.ts
- interface/src/api/types.ts
- interface/src/components/Markdown.tsx
- interface/src/components/ModelSelect.tsx
- interface/src/components/WebChatPanel.tsx
- interface/src/lib/providerIcons.tsx
- interface/src/routes/Settings.tsx
- src/agent/channel.rs
- src/api/providers.rs
- src/api/server.rs
- src/config/load.rs
- src/github_copilot_oauth.rs
- src/lib.rs
- src/llm/manager.rs
- src/llm/model.rs
✅ Files skipped from review due to trivial changes (7)
- .gitignore
- interface/src/components/ModelSelect.tsx
- interface/src/lib/providerIcons.tsx
- src/lib.rs
- src/config/load.rs
- interface/src/api/types.ts
- src/agent/channel.rs
🚧 Files skipped from review as they are similar to previous changes (4)
- interface/src/components/Markdown.tsx
- interface/src/api/client.ts
- src/llm/manager.rs
- interface/src/api/schema.d.ts
```rust
// Set reasoning effort for models that support it. Without this,
// some endpoints (notably ChatGPT Plus OAuth) default to "none",
// which produces empty output arrays.
let effort = self
    .routing
    .as_ref()
    .map(|r| r.thinking_effort_for_model(&self.model_name))
    .unwrap_or("auto");
let openai_effort = match effort {
    "max" | "high" => "high",
    "medium" => "medium",
    "low" => "low",
    // "auto" or anything else → "medium" as a safe default that
    // guarantees the model actually produces output.
    _ => "medium",
};
body["reasoning"] = serde_json::json!({ "effort": openai_effort });
```
Set reasoning on the streaming Responses request too.
This only fixes call_openai_responses(). stream_openai_responses() builds a separate body and still omits reasoning, so the streaming ChatGPT/Codex path can keep hitting the same empty-output behavior this block is trying to avoid.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/llm/model.rs` around lines 938 - 954, the streaming path omits the
reasoning field: replicate the same reasoning-effort logic used in
call_openai_responses() inside stream_openai_responses() so the streaming
Responses body includes body["reasoning"] = { "effort": openai_effort };
specifically, reuse the routing-based effort resolution (using
self.routing.thinking_effort_for_model(&self.model_name) with the same mapping
of "max"|"high" => "high", "medium" => "medium", "low" => "low", default =>
"medium") and set the resulting openai_effort into the JSON body built by
stream_openai_responses() so streaming ChatGPT/Codex requests get the same
reasoning override.
🧩 Analysis chain
🌐 Web query:
What reasoning.effort values are currently supported for OpenAI gpt-5.4 and gpt-5.4-pro models in the Responses API / models documentation, and how should a config value of "max" be normalized for each?
💡 Result:
Supported reasoning.effort values (Responses API / model docs):
- `gpt-5.4`: `none` (default), `low`, `medium`, `high`, `xhigh`. [1]
- `gpt-5.4-pro`: `medium`, `high`, `xhigh`. [2]
How to normalize config value "max":
- For `gpt-5.4`: normalize `"max"` → `"xhigh"` (highest supported). [1]
- For `gpt-5.4-pro`: normalize `"max"` → `"xhigh"` (highest supported). [2]
Sources
[1] OpenAI API model docs — GPT-5.4: https://developers.openai.com/api/docs/models/gpt-5.4
[2] OpenAI API model docs — GPT-5.4 pro: https://developers.openai.com/api/docs/models/gpt-5.4-pro
Use model-aware reasoning effort normalization instead of generic mapping.
Lines 938-954 use a generic "max" | "high" => "high" mapping that regresses the intentional GPT-5.4 family handling. OpenAI's gpt-5.4 and gpt-5.4-pro models support "xhigh" as a valid reasoning.effort value; "max" should normalize to "xhigh" for both models, not collapse to "high". Delegate to a model-aware helper (similar to the existing openai_reasoning_effort function elsewhere in this file) and cover with a targeted test for the gpt-5.4 family.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/llm/model.rs` around lines 938 - 954, The current hardcoded mapping that
converts effort values (e.g., "max"|"high" => "high") incorrectly collapses
gpt-5.4 family semantics; replace the manual match in the block that sets
body["reasoning"] (the code that computes openai_effort from effort) with the
model-aware helper openai_reasoning_effort(&self.model_name, effort) so
normalization respects model-specific values like "xhigh" for gpt-5.4 and
gpt-5.4-pro, and add a targeted unit test asserting that gpt-5.4 and gpt-5.4-pro
normalize "max" → "xhigh" (using the same test harness/pattern as existing tests
for openai_reasoning_effort).
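Under the model support table from the web query above, a model-aware helper might look like the following. The `gpt-5.4` prefix check and the tier names are assumptions taken from the query result, and `openai_reasoning_effort` mirrors the helper the prompt refers to rather than reproducing it:

```rust
/// Normalize a config-level reasoning effort to what the target
/// model supports: "max" becomes "xhigh" only for models that
/// accept it, and unknown values fall back to a safe default.
fn openai_reasoning_effort(model: &str, effort: &str) -> &'static str {
    let supports_xhigh = model.starts_with("gpt-5.4");
    match effort {
        "max" if supports_xhigh => "xhigh",
        "max" | "high" => "high",
        "medium" => "medium",
        "low" => "low",
        // "auto" or anything else → "medium", which still
        // guarantees the model produces output.
        _ => "medium",
    }
}
```

Keeping the normalization in one helper also gives the suggested unit test a single seam to pin the gpt-5.4 family behavior.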
Summary
The provider card shows `Manage` after configuration instead of always showing `Sign in`.

Verification
- `./scripts/preflight.sh`
- `./scripts/gate-pr.sh`
- `bunx tsc --noEmit` (in `interface/`)
- `bun run build` (in `interface/`)