
fix(download): Fix system proxy and ghproxy being enabled simultaneously #2233

Open

ShadowLemoon wants to merge 1 commit into main from fix/system-proxy

Conversation

ShadowLemoon (Collaborator) commented May 5, 2026

Summary by CodeRabbit

Release Notes

  • Bug Fixes

    • Improved cancellation handling for file downloads, ensuring partial files are cleaned up correctly when a download fails or is interrupted
  • Performance Improvements

    • Improved the accuracy and reliability of download progress reporting

coderabbitai Bot (Contributor) commented May 5, 2026

📝 Walkthrough

Overview

The download_file function migrates from urllib.request.urlretrieve to a manual streaming download: it adopts pathlib.Path for filesystem operations, builds a request opener via ProxyHandler, refactors the progress-reporting callback signature, handles cancellation inside the chunked-read loop, and cleans up the partially downloaded file on exception or cancellation.
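As a reading aid, here is a minimal sketch of the opener-based proxy and path setup the overview describes; the proxy mapping and target path below are hypothetical stand-ins, not values from http_utils.py. Building a dedicated opener, unlike urllib.request.install_opener(), avoids mutating global urllib state.

import urllib.request
from pathlib import Path

# Hypothetical proxy mapping for illustration; the real values come from
# the application's proxy configuration.
proxies = {'http': 'http://127.0.0.1:7890', 'https': 'http://127.0.0.1:7890'}

# A dedicated opener scopes the proxy to this download, whereas
# urllib.request.install_opener() would change global urllib state.
opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies))

# Path-based setup: make sure the parent directory exists before writing.
save_path = Path('downloads/example.zip')  # hypothetical target path
save_path.parent.mkdir(parents=True, exist_ok=True)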

Changes

HTTP download mechanism refactor

  • Imports and initialization: src/one_dragon/utils/http_utils.py (lines 1-7)
    Adds a from pathlib import Path import to support path-object-based filesystem operations.
  • Proxy and path setup: src/one_dragon/utils/http_utils.py (lines 22-31)
    Replaces the global proxy installation with an opener built from ProxyHandler; converts save_file_path to a Path object and creates its parent directory with exist_ok=True.
  • Progress callback refactor: src/one_dragon/utils/http_utils.py (lines 32-50)
    Changes the nested log_download_progress helper to accept (downloaded_bytes, total_size) and compute the progress percentage from byte counts; keeps the throttled logging logic.
  • Streaming download core: src/one_dragon/utils/http_utils.py (lines 58-80)
    Replaces urlretrieve with a manual streaming implementation that reads fixed-size chunks from opener.open(..., timeout=60); checks progress_signal['signal'] == 'cancel' inside the loop to raise a cancellation exception; updates progress after each chunk (see the consolidated sketch after this table).
  • Exception and cleanup handling: src/one_dragon/utils/http_utils.py (lines 81-92)
    Handles DownloadCancelledError and generic exceptions separately; both paths remove the partial file via save_path.unlink(missing_ok=True), report the abort via progress_callback(0, msg), and return False.
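Here is a consolidated sketch of the flow the table describes, assuming the names given in the summary (download_file, log_download_progress, progress_signal, DownloadCancelledError); the chunk size, message strings, and logging body are assumptions, and the real function in http_utils.py will differ in detail.

import urllib.request
from pathlib import Path


class DownloadCancelledError(Exception):
    """Raised when a cancel signal is observed mid-download."""


def download_file(download_url: str, save_file_path: str, proxies: dict = None,
                  progress_signal: dict = None, progress_callback=None) -> bool:
    opener = urllib.request.build_opener(urllib.request.ProxyHandler(proxies or {}))
    save_path = Path(save_file_path)
    save_path.parent.mkdir(parents=True, exist_ok=True)

    def log_download_progress(downloaded_bytes: int, total_size: int) -> None:
        # Stand-in for the throttled logger described above; the percentage
        # is computed directly from byte counts.
        if total_size > 0:
            print(f'downloaded {downloaded_bytes * 100.0 / total_size:.1f}%')

    try:
        request = urllib.request.Request(download_url)
        with opener.open(request, timeout=60) as response, save_path.open('wb') as file:
            total_size = int(response.headers.get('Content-Length', 0))
            downloaded = 0
            while True:
                # Check the shared signal dict between chunks so a cancel
                # request interrupts the transfer promptly.
                if progress_signal and progress_signal.get('signal') == 'cancel':
                    raise DownloadCancelledError()
                chunk = response.read(64 * 1024)  # chunk size is an assumption
                if not chunk:
                    break
                file.write(chunk)
                downloaded += len(chunk)
                log_download_progress(downloaded, total_size)
        return True
    except DownloadCancelledError:
        save_path.unlink(missing_ok=True)  # clean up the partial file
        if progress_callback is not None:
            progress_callback(0, 'download cancelled')  # message text assumed
        return False
    except Exception:
        save_path.unlink(missing_ok=True)
        if progress_callback is not None:
            progress_callback(0, 'download failed')
        return False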

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Rabbit's poem

🐰 **A new dance for downloads**

From urlretrieve to a flowing stream,
Bytes dance onward chunk by chunk,
Paths and proxies fall in line,
A cancel sweeps the slate in a blink,
And progress reports glide along! 🎉

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 50.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the PR title accurately reflects the main change (fixing system proxy and ghproxy being enabled at the same time) and relates directly to the proxy-handling improvements in the code.
  • Linked Issues Check ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check ✅ Passed: check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


coderabbitai Bot (Contributor) left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/one_dragon/utils/http_utils.py`:
- Around line 58-59: Validate the download_url scheme before calling opener.open
to prevent non-HTTP(S) schemes (e.g., file://) from being used: parse
download_url (via urllib.parse.urlparse) and ensure url.scheme is "http" or
"https", otherwise raise an exception or log and abort; update the code path
around the existing download logic (the variables/functions download_url,
opener.open, and the code that calls urllib.request.build_opener/ProxyHandler)
to perform this check early and fail fast if the scheme is invalid.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 59e1b64d-965c-45ef-a2b5-28e8538770d4

📥 Commits

Reviewing files that changed from the base of the PR and between 970a8b2 and ebca0e4.

📒 Files selected for processing (1)
  • src/one_dragon/utils/http_utils.py

Comment on lines +58 to +59
request = urllib.request.Request(download_url)
with opener.open(request, timeout=60) as response, save_path.open('wb') as file:
coderabbitai Bot (Contributor) commented May 5, 2026


⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Recommend validating the scheme of download_url so that unexpected schemes such as file:// cannot be accessed.

urllib.request.build_opener(ProxyHandler(...)) includes a FileHandler by default, so opener.open() will accept file:// and other custom-scheme URLs. If the YAML configuration is tampered with or accidentally set to a non-HTTP(S) address, a local file would be read instead. Ruff S310 already warns about this.

🛡️ Suggestion: validate the URL scheme before issuing the request
+        from urllib.parse import urlparse
+        parsed = urlparse(download_url)
+        if parsed.scheme not in ('http', 'https'):
+            raise ValueError(f"Unsupported download scheme: {parsed.scheme}")
+
         request = urllib.request.Request(download_url)
         with opener.open(request, timeout=60) as response, save_path.open('wb') as file:
🧰 Tools
🪛 Ruff (0.15.12)

[error] 58-58: Audit URL open for permitted schemes. Allowing use of file: or custom schemes is often unexpected.

(S310)


ShadowLemoon (Collaborator, Author) commented


Could we just check with startswith('http') directly?

coderabbitai Bot (Contributor) commented


@ShadowLemoon, startswith works, but use the tuple form for an exact match, so that malformed schemes like httpfoo:// are not matched by mistake:

if not download_url.startswith(('http://', 'https://')):
    raise ValueError(f"Unsupported download scheme: {download_url}")

request = urllib.request.Request(download_url)
with opener.open(request, timeout=60) as response, save_path.open('wb') as file:

This needs no extra import and distinguishes http:// and https:// precisely, which is stricter than startswith('http') (a bare startswith('http') would also let through schemes like httpxxx://).


🧠 Learnings used
Learnt from: kawayiYokami
Repo: OneDragon-Anything/ZenlessZoneZero-OneDragon PR: 1931
File: src/one_dragon/base/controller/pc_controller_base.py:64-71
Timestamp: 2026-01-13T18:59:27.574Z
Learning: When a member uses the property decorator to turn a method into a read-only attribute, access it as an attribute rather than calling it, e.g. self.game_win.is_win_active rather than self.game_win.is_win_active(). During code review, check that members backed by property are accessed correctly, and avoid calling properties with parentheses.
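A minimal sketch of the access pattern this learning describes; the class and the window check below are hypothetical stand-ins, not the actual pc_controller_base.py code.

class GameWindow:
    """Hypothetical stand-in for the real controller class."""

    @property
    def is_win_active(self) -> bool:
        # property turns this method into a read-only attribute,
        # so callers must not add parentheses.
        return self._check_foreground_window()

    def _check_foreground_window(self) -> bool:
        return True  # placeholder for the real foreground-window check


win = GameWindow()
active = win.is_win_active       # correct: attribute access
# active = win.is_win_active()   # wrong: calling a bool raises TypeError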

Learnt from: JoshCai233
Repo: OneDragon-Anything/ZenlessZoneZero-OneDragon PR: 2219
File: src/one_dragon_qt/view/standalone_app_run_interface.py:210-215
Timestamp: 2026-05-03T07:24:35.764Z
Learning: When implementing Chinese (pinyin) string sorting on Windows in this PySide6/Qt codebase, avoid relying on `QCollator(Q...Locale(Language.Chinese, Country.China))` for correct pinyin collation. Instead, use Python’s `locale` collation via `locale.setlocale(locale.LC_COLLATE, "zh_CN.UTF-8")` and `locale.strxfrm(...)` to get correct pinyin ordering. Also, never use a bare `except:` for this logic—narrow it to `except locale.Error:` (or otherwise catch the specific expected exception type).
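A minimal sketch of the locale-based collation this learning recommends, assuming an environment where the 'zh_CN.UTF-8' locale name is accepted (the exact locale string can vary by platform); the name list is illustrative.

import locale

names = ['张三', '李四', '王五']  # illustrative data
try:
    # Use the C runtime's Chinese collation for pinyin ordering instead
    # of QCollator, as the learning recommends.
    locale.setlocale(locale.LC_COLLATE, 'zh_CN.UTF-8')
    names.sort(key=locale.strxfrm)
except locale.Error:
    # Narrow except: only a missing/unsupported locale is expected here;
    # fall back to default code-point ordering.
    names.sort()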
