diff --git a/.github/AGENTS.md b/.github/AGENTS.md index a114120..304d197 100644 --- a/.github/AGENTS.md +++ b/.github/AGENTS.md @@ -19,12 +19,14 @@ Applies to `.github/` and repository pull-request operations. ## CI Alignment -When changing CI-sensitive behavior, keep local validation aligned with `.github/workflows/ci.yml`. +When changing CI-sensitive behavior, keep local validation aligned with [`Makefile`](../Makefile) at the repo root. -Current `test-and-lint` gate includes: +**Local bar before push (authoritative for contributors):** `make pre-commit` — runs Rust `fmt-check`, `clippy`, `test`, plus `console-web` lint and Prettier check (see `Makefile`). +**CI workflow** [`.github/workflows/ci.yml`](workflows/ci.yml) `test-and-lint` job currently runs: + +- `cargo nextest run --all --no-tests pass` and `cargo test --all --doc` - `cargo fmt --all --check` - `cargo clippy --all-features -- -D warnings` -- `cargo test --all` -- `cd console-web && npm run lint` -- `cd console-web && npx prettier --check "**/*.{ts,tsx,js,jsx,json,css,md}"` + +It does **not** run `console-web` checks; still run **`make pre-commit` locally** before opening a PR so frontend changes are validated. 
diff --git a/.github/dependabot.yml b/.github/dependabot.yml index dc90f02..9f6b7bd 100755 --- a/.github/dependabot.yml +++ b/.github/dependabot.yml @@ -24,7 +24,7 @@ updates: interval: "daily" - package-ecosystem: "cargo" schedule: - interval: "daily" # 这里添加了 schedule + interval: "daily" directory: "/" groups: deps: diff --git a/CHANGELOG.md b/CHANGELOG.md index 9e26f22..a0f4ae4 100755 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,6 +7,18 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ## [Unreleased] +### Documentation + +- Aligned [`CLAUDE.md`](CLAUDE.md) and [`ROADMAP.md`](ROADMAP.md) with current code: Tenant status conditions and StatefulSet updates on the successful reconcile path are documented as implemented; remaining work (status on early errors, integration tests, rollout extras) is listed explicitly. +- Clarified the documentation map: [`CONTRIBUTING.md`](CONTRIBUTING.md) (quality gates and CI alignment), [`docs/DEVELOPMENT.md`](docs/DEVELOPMENT.md) (environment setup), [`docs/DEVELOPMENT-NOTES.md`](docs/DEVELOPMENT-NOTES.md) (historical notes, not normative). +- Updated [`examples/README.md`](examples/README.md): Tenant Services are now documented with the S3 API on port **9000** and the RustFS Console on port **9001**; distinguished the Operator HTTP Console (default **9090**, `cargo run -- console`) from the Tenant `{tenant}-console` Service. +- Standardized [`README.md`](README.md), [`scripts/README.md`](scripts/README.md), and shell scripts under `scripts/` to English for consistency with architecture and development docs. +- Translated Rust doc and line comments in [`src/console/`](src/console/) to English (no behavior change). + +### Fixed + +- **`console-web` / `make pre-commit`**: `npm run lint` now runs `eslint .` (bare `eslint` only printed CLI help). Added `format` / `format:check` scripts; [`Makefile`](Makefile) `console-fmt` and `console-fmt-check` call them so Prettier resolves from `node_modules` after `npm install` in `console-web/`. 
+ ### Added #### **StatefulSet Reconciliation Improvements** (2025-12-03, Issue #43) diff --git a/CLAUDE.md b/CLAUDE.md index b28423c..474c3b5 100755 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -7,7 +7,7 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co This is a Kubernetes operator for RustFS, written in Rust using the `kube-rs` library. The operator manages a custom resource `Tenant` (CRD) that provisions and manages RustFS storage clusters in Kubernetes. **Current Status**: v0.1.0 (pre-release) - Early development, not yet production-ready -**Test Coverage**: 25 tests, all passing ✅ +**Test Coverage**: 47 library unit tests, all passing ✅ (run `cargo test --all` for current count) ## Build and Development Commands @@ -31,8 +31,11 @@ cargo run -- crd # Generate CRD YAML to file cargo run -- crd -f tenant-crd.yaml -# Run the controller (requires Kubernetes cluster access) +# Run the Kubernetes controller (requires cluster access) cargo run -- server + +# Run the operator HTTP console API (default port 9090; used by console-web) +cargo run -- console ``` ### Docker @@ -41,7 +44,7 @@ cargo run -- server docker build -t operator . ``` -Note: The Dockerfile uses a multi-stage build with Rust 1.91-alpine. +Note: The Dockerfile uses a multi-stage build (`rust:bookworm`, cargo-chef); the final image defaults to `debian:bookworm-slim`. ### Scripts (deploy / cleanup / check) Shell scripts are under `scripts/` and grouped by purpose. Run from project root (scripts will `cd` to project root automatically): @@ -85,7 +88,7 @@ See `docs/architecture-decisions.md` for detailed ADRs. 
### Reconciliation Loop The operator follows the standard Kubernetes controller pattern: -- **Entry Point**: `src/main.rs` - CLI with two subcommands: `crd` and `server` +- **Entry Point**: `src/main.rs` - CLI subcommands: `crd`, `server` (controller), `console` (management API) - **Controller**: `src/lib.rs:run()` - Sets up the controller that watches `Tenant` resources and owned resources (ConfigMaps, Secrets, ServiceAccounts, Pods, StatefulSets) - **Reconciliation Logic**: `src/reconcile.rs:reconcile_rustfs()` - Main reconciliation function that creates/updates Kubernetes resources for a Tenant - **Error Handling**: `src/reconcile.rs:error_policy()` - Intelligent retry intervals based on error type: @@ -170,8 +173,9 @@ The `Tenant` type in `src/types/v1alpha1/tenant.rs` has factory methods for crea ### Status Management - **Status Types**: `src/types/v1alpha1/status/` - Status structures including state, pool status, and certificate status -- The status is updated via the Kubernetes status subresource -- **TODO at `reconcile.rs:92`**: Implement comprehensive status condition updates on errors (Ready, Progressing, Degraded) +- The status is updated via the Kubernetes status subresource (`Context::update_status`, with a single retry on conflict) +- **Implemented (successful reconcile path)**: Aggregates per-pool StatefulSet status, sets `Ready` / `Progressing` / `Degraded` conditions, overall `current_state`, and pool entries—see `reconcile_rustfs()` in `src/reconcile.rs` +- **Remaining (Issue #42 follow-up)**: When reconcile returns early with `Err` (e.g. 
credential/KMS validation, immutable field violations), status is not always updated to reflect that failure; consider setting conditions or state before returning ### Utilities @@ -188,7 +192,7 @@ The `Tenant` type in `src/types/v1alpha1/tenant.rs` has factory methods for crea ## Code Structure Notes -- Uses `kube-rs` with specific git revisions for `k8s-openapi` and `kube` crates +- Uses `kube` and `k8s-openapi` from crates.io (see `Cargo.toml` for versions) - Kubernetes version target: v1.30 - Error handling uses the `snafu` crate for structured error types - All files include Apache 2.0 license headers @@ -196,21 +200,21 @@ The `Tenant` type in `src/types/v1alpha1/tenant.rs` has factory methods for crea ## Important Dependencies -- **kube** and **k8s-openapi**: Pinned to specific git revisions (not crates.io versions) - - TODO: Evaluate migration to crates.io versions +- **kube** / **k8s-openapi**: Versions pinned in `Cargo.toml` (crates.io) - Uses Rust edition 2024 -- Build script (`build.rs`) generates build metadata using the `built` crate +- Build script (`build.rs`) generates build metadata using the `shadow-rs` crate ## Known Issues and TODOs ### High Priority - [x] ~~**Secret-based credential management**~~ ✅ **COMPLETED** (2025-11-15, Issue #41) -- [ ] **Status condition management** (`src/reconcile.rs:92`, Issue #42) -- [ ] **StatefulSet reconciliation** (`reconcile.rs`) - Creation works, updates need refinement (Issue #43) -- [ ] **Integration tests** - Only unit tests currently exist +- [x] ~~**Tenant status conditions on successful reconcile**~~ — `Ready` / `Progressing` / `Degraded`, pool-level status (see `reconcile.rs`) +- [ ] **Status on reconciliation failures** — Early error returns may not patch Tenant status (Issue #42 follow-up) +- [ ] **StatefulSet advanced rollout** — Safe updates and validation exist; rollback, richer strategies, and Issue #43 polish remain +- [ ] **Integration tests** — Only unit tests in-repo today ### Medium Priority 
-- [ ] Status subresource update retry logic improvements +- [ ] Status subresource update retry beyond single conflict retry (`context.rs`) - [ ] TLS certificate rotation automation - [ ] Configuration validation enhancements (storage class existence, node selector validity) @@ -220,7 +224,7 @@ The `Tenant` type in `src/types/v1alpha1/tenant.rs` has factory methods for crea - **ROADMAP.md** - Development roadmap organized by focus areas (Core Stability, Advanced Features, Enterprise Features, Production Ready) - **docs/architecture-decisions.md** - ADRs documenting key architectural decisions - **docs/multi-pool-use-cases.md** - Comprehensive guide for multi-pool scenarios -- **docs/DEVELOPMENT-NOTES.md** - Development workflow and contribution guidelines +- **docs/DEVELOPMENT-NOTES.md** - Historical analysis and design notes (not the primary dev guide; see `docs/DEVELOPMENT.md` and `CONTRIBUTING.md`) ## Examples @@ -251,10 +255,8 @@ See `examples/README.md` for comprehensive usage guide. ## Development Priorities (from ROADMAP.md) ### Core Stability (Highest Priority) -- Secret-based credential management -- Status condition management (Ready, Progressing, Degraded) -- StatefulSet update and rollout management -- Improved error handling and observability +- Status on failed reconcile paths and stronger status retry +- StatefulSet rollout polish (rollback, strategies) and observability (metrics) - Integration test suite ### Advanced Features diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index be58a46..eac4ff2 100755 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,5 +1,13 @@ # RustFS Development Guide +## Documentation map + +- **This file (`CONTRIBUTING.md`)** — Authoritative for **code quality**, commit workflow, formatting, and alignment with `make pre-commit` and CI. When instructions conflict, prefer this file plus [`Makefile`](Makefile) and [`.github/workflows/ci.yml`](.github/workflows/ci.yml). 
+ +- **[`docs/DEVELOPMENT.md`](docs/DEVELOPMENT.md)** — Local environment setup (Kubernetes, kind, optional tools), IDE hints, and longer workflows. It **does not** redefine the quality gates above; run `make pre-commit` as the single local bar before pushing. + +- **[`docs/DEVELOPMENT-NOTES.md`](docs/DEVELOPMENT-NOTES.md)** — Historical design notes and analysis sessions (not a normative spec). For current behavior, use the source tree, [`CHANGELOG.md`](CHANGELOG.md), and [`CLAUDE.md`](CLAUDE.md). + ## 📋 Code Quality Requirements ### 🔧 Code Formatting Rules @@ -8,9 +16,9 @@ #### Pre-commit Requirements -Before every commit, you **MUST**: +Before every commit, you **MUST** pass the same checks as `make pre-commit` (see below). In practice, the steps are: -1. **Format your code**: +1. **Format your code** (or rely on `make fmt-check` to fail if not formatted): ```bash cargo fmt --all @@ -25,63 +33,56 @@ Before every commit, you **MUST**: 3. **Pass clippy checks**: ```bash - cargo clippy --all-targets --all-features -- -D warnings + cargo clippy --all-features -- -D warnings + ``` + +4. **Run tests**: + + ```bash + cargo test --all ``` -4. **Ensure compilation**: +5. **Console (frontend)** — each command below is self-contained, so run them from the repo root: ```bash - cargo check --all-targets + (cd console-web && npm run lint) + (cd console-web && npx prettier --check "**/*.{ts,tsx,js,jsx,json,css,md}") ``` #### Quick Commands -We provide convenient Makefile targets for common tasks: +Targets are defined in [`Makefile`](Makefile). 
Use these from the **repository root**: ```bash -# Format all code +# Format all Rust code make fmt -# Check if code is properly formatted +# Check Rust formatting (no writes) make fmt-check -# Run clippy checks +# Clippy (Rust) make clippy -# Run compilation check -make check - -# Run tests +# Rust tests make test -# Run all pre-commit checks (format + clippy + check + test) -make pre-commit +# Frontend: ESLint + Prettier check (requires npm install in console-web/) +make console-lint +make console-fmt-check -# Setup git hooks (one-time setup) -make setup-hooks +# Full gate before push (Rust + console-web): same as project / AGENTS.md rules +make pre-commit ``` -### 🔒 Automated Pre-commit Hooks - -This project includes a pre-commit hook that automatically runs before each commit to ensure: - -- ✅ Code is properly formatted (`cargo fmt --all --check`) -- ✅ No clippy warnings (`cargo clippy --all-targets --all-features -- -D warnings`) -- ✅ Code compiles successfully (`cargo check --all-targets`) - -#### Setting Up Pre-commit Hooks - -Run this command once after cloning the repository: +Optional quick compile (not a separate `make` target): ```bash -make setup-hooks +cargo check --all-targets ``` -Or manually: +### 🔒 Git hooks (optional) -```bash -chmod +x .git/hooks/pre-commit -``` +The repository does **not** ship a `make setup-hooks` target. To run checks automatically on `git commit`, add your own `.git/hooks/pre-commit` that invokes `make pre-commit` (or the individual commands above). ### 📝 Formatting Configuration @@ -95,11 +96,7 @@ single_line_let_else_max_width = 100 ### 🚫 Commit Prevention -If your code doesn't meet the formatting requirements, the pre-commit hook will: - -1. **Block the commit** and show clear error messages -2. **Provide exact commands** to fix the issues -3. **Guide you through** the resolution process +If your code doesn't meet the formatting requirements, CI or local checks will fail with clear messages. 
Example output when formatting fails: @@ -148,22 +145,12 @@ Configure your IDE to: ### ❗ Important Notes - **Never bypass formatting checks** - they are there for a reason -- **All CI/CD pipelines** will also enforce these same checks -- **Pull requests** will be automatically rejected if formatting checks fail +- **CI and `make pre-commit`** should stay aligned; see [`Makefile`](Makefile) and [`.github/workflows/ci.yml`](.github/workflows/ci.yml) +- **Pull requests** may be rejected if checks fail - **Consistent formatting** improves code readability and reduces merge conflicts ### 🆘 Troubleshooting -#### Pre-commit hook not running? - -```bash -# Check if hook is executable -ls -la .git/hooks/pre-commit - -# Make it executable if needed -chmod +x .git/hooks/pre-commit -``` - #### Formatting issues? ```bash diff --git a/Makefile b/Makefile index 7a6fb27..70c72d6 100644 --- a/Makefile +++ b/Makefile @@ -56,13 +56,13 @@ test: console-lint: cd console-web && npm run lint -# 前端 Prettier 自动格式化 +# Frontend Prettier auto-format (run npm install in console-web/ first) console-fmt: - cd console-web && npx prettier --write "**/*.{ts,tsx,js,jsx,json,css,md}" + cd console-web && npm run format # 前端 Prettier 格式检查(仅检查不修改) console-fmt-check: - cd console-web && npx prettier --check "**/*.{ts,tsx,js,jsx,json,css,md}" + cd console-web && npm run format:check # 构建 build: diff --git a/README.md b/README.md index 451013a..819699b 100755 --- a/README.md +++ b/README.md @@ -1,16 +1,16 @@ # RustFS Kubernetes Operator -RustFS k8s operator(开发中,尚未可用于生产)。 +RustFS Kubernetes operator (under development; not production-ready). 
-## 项目结构概览 +## Repository layout -- **scripts/** — 部署/清理/检查脚本(见 [scripts/README.md](scripts/README.md)) - - `scripts/deploy/` — 一键部署(Kind + Operator + Tenant) - - `scripts/cleanup/` — 资源清理 - - `scripts/check/` — 集群与 Tenant 状态检查 -- **deploy/** — K8s/Helm 部署清单与 Kind 配置 - - `deploy/rustfs-operator/` — Helm Chart - - `deploy/k8s-dev/` — 开发用 K8s YAML - - `deploy/kind/` — Kind 集群配置(如 4 节点) -- **examples/** — Tenant CR 示例 -- **docs/** — 架构与开发文档 +- **scripts/** — Deploy, cleanup, and check scripts (see [scripts/README.md](scripts/README.md)) + - `scripts/deploy/` — One-shot deploy (Kind + Operator + Tenant) + - `scripts/cleanup/` — Resource cleanup + - `scripts/check/` — Cluster and Tenant status checks +- **deploy/** — Kubernetes / Helm manifests and Kind configs + - `deploy/rustfs-operator/` — Helm chart + - `deploy/k8s-dev/` — Development Kubernetes YAML + - `deploy/kind/` — Kind cluster configs (e.g. 4-node) +- **examples/** — Sample Tenant CRs +- **docs/** — Architecture and development documentation diff --git a/ROADMAP.md b/ROADMAP.md index 51d1f54..d4e6b4f 100755 --- a/ROADMAP.md +++ b/ROADMAP.md @@ -2,7 +2,7 @@ This document outlines the development roadmap for the RustFS Kubernetes Operator. The roadmap is organized by release versions and includes features, improvements, and technical debt items. 
-**Last Updated**: 2025-11-15 +**Last Updated**: 2026-03-28 **Current Version**: 0.1.0 (pre-release) --- @@ -23,14 +23,17 @@ This document outlines the development roadmap for the RustFS Kubernetes Operato - [x] Certificate and TLS utilities (RSA, ECDSA, Ed25519) - [x] Kubernetes events for reconciliation actions - [x] Test infrastructure with helper utilities +- [x] Tenant status: conditions (`Ready`, `Progressing`, `Degraded`), overall state, per-pool status from StatefulSets (successful reconcile path) +- [x] StatefulSet create/update with safe update validation and apply when spec changes +- [x] Operator HTTP console API and `console` CLI subcommand; `console-web` management UI ### 🔧 Known Issues -- [ ] StatefulSet reconciliation incomplete (creation works, updates need refinement) -- [ ] Status subresource updates need retry logic improvements -- [ ] No integration tests yet (only unit tests) -- [ ] Error policy needs status condition updates +- [ ] No integration or E2E tests in-repo (unit tests only) +- [ ] Tenant status not always updated when reconcile returns early with an error (e.g. 
credential/KMS validation) +- [ ] Status subresource patch: only one conflict retry; stronger backoff optional - [ ] TLS certificate rotation not automated +- [ ] Advanced StatefulSet rollout (rollback, extra strategy options) still open --- @@ -54,23 +57,14 @@ This document outlines the development roadmap for the RustFS Kubernetes Operato - ✅ Production-ready examples and documentation - See: `examples/secret-credentials-tenant.yaml`, Issue #41 -- [ ] **Status condition management** (`src/reconcile.rs:92`) - - Update Tenant status on reconciliation errors - - Implement standard Kubernetes condition types (Ready, Progressing, Degraded) - - Pool-level status tracking - - Health check integration - -- [ ] **StatefulSet update and rollout management** - - Safe StatefulSet updates with revision tracking - - Rolling update support with configurable strategies - - Rollback capabilities - - Update status reporting - +- [x] **Status conditions (happy path)** ✅ — `Ready` / `Progressing` / `Degraded`, pool-level status; see `src/reconcile.rs` +- [ ] **Status on reconciliation errors** — Surface failing state when reconcile exits early (credentials, validation, etc.); related to Issue #42 +- [x] **StatefulSet update (core)** ✅ — Validate immutable fields, apply when `statefulset_needs_update`; per-pool status from STS +- [ ] **StatefulSet rollout extras** — Rollback, configurable strategies, richer revision tracking (beyond current behavior) - [ ] **Improved error handling and observability** - - Structured logging with tracing levels - Prometheus metrics (reconciliation duration, error rates, pool health) - - Event recording for all lifecycle events - - Error categorization (transient vs permanent) + - Broader event coverage if gaps remain + - Note: structured logging (`tracing`) and `error_policy` requeue tiers exist today ### Medium Priority @@ -257,7 +251,7 @@ This document outlines the development roadmap for the RustFS Kubernetes Operato ### Medium Priority - [ ] 
Consider using `kube-runtime` finalizers API -- [ ] Evaluate using `k8s-openapi` from crates.io instead of git +- [x] ~~**k8s-openapi / kube from crates.io**~~ — Using crates.io versions (see `Cargo.toml`); keep pinned upgrades deliberate - [ ] Performance profiling and optimization - [ ] Memory usage analysis and optimization - [ ] Reduce binary size (investigate dependencies) @@ -275,11 +269,11 @@ This document outlines the development roadmap for the RustFS Kubernetes Operato ### Community Building -- [ ] Establish contributor guidelines (CONTRIBUTING.md) -- [ ] Set up issue templates and PR templates -- [ ] Create good-first-issue labels and documentation +- [x] **CONTRIBUTING.md** and contributor workflow (`make pre-commit`) +- [x] **Pull request template** (`.github/pull_request_template.md`) +- [ ] GitHub issue templates and `good-first-issue` labels - [ ] Regular community meetings (monthly) -- [ ] Developer documentation for architecture +- [x] **Core developer docs** (`docs/DEVELOPMENT.md`, `CLAUDE.md`, `docs/architecture-decisions.md`) — expand as needed ### Ecosystem Partnerships @@ -298,7 +292,7 @@ This document outlines the development roadmap for the RustFS Kubernetes Operato - **Kubernetes**: v1.27+ (current target: v1.30) - **Rust**: 1.91+ (edition 2024) - **RustFS**: Version compatibility matrix TBD -- **kube-rs**: Git revision (evaluate crates.io migration) +- **kube** / **k8s-openapi**: crates.io versions in `Cargo.toml` ### Optional Dependencies diff --git a/console-web/README.md b/console-web/README.md index 1b4f8ca..b34a46d 100755 --- a/console-web/README.md +++ b/console-web/README.md @@ -13,7 +13,7 @@ Open [http://localhost:3000](http://localhost:3000). The app calls the console A ### Local dev with backend -Run the operator console backend (e.g. `cargo run -- server` or another port). Then either: +Run the operator console HTTP API (e.g. `cargo run -- console`, default port **9090**). Then either: - Use same-origin: e.g. 
put frontend and backend behind one dev server that proxies `/api/v1` to the backend, and run the frontend with `NEXT_PUBLIC_API_BASE_URL=` (empty or `/api/v1`), or - Use different ports: run frontend on 3000, backend on 9090, and set `NEXT_PUBLIC_API_BASE_URL=http://localhost:9090/api/v1`. The backend allows `http://localhost:3000` by default (CORS). diff --git a/console-web/package-lock.json b/console-web/package-lock.json index cbd4043..ce5e634 100644 --- a/console-web/package-lock.json +++ b/console-web/package-lock.json @@ -39,7 +39,7 @@ "typescript": "^5" }, "engines": { - "node": ">=22" + "node": ">=20" } }, "node_modules/@alloc/quick-lru": { diff --git a/console-web/package.json b/console-web/package.json index dd6f585..c86b3f2 100755 --- a/console-web/package.json +++ b/console-web/package.json @@ -11,7 +11,9 @@ "dev": "next dev", "build": "next build", "start": "next start", - "lint": "eslint" + "lint": "eslint .", + "format": "prettier --write \"**/*.{ts,tsx,js,jsx,json,css,md}\"", + "format:check": "prettier --check \"**/*.{ts,tsx,js,jsx,json,css,md}\"" }, "engines": { "node": ">=20" diff --git a/deploy/kind/kind-rustfs-cluster.yaml b/deploy/kind/kind-rustfs-cluster.yaml index f222082..d517890 100644 --- a/deploy/kind/kind-rustfs-cluster.yaml +++ b/deploy/kind/kind-rustfs-cluster.yaml @@ -4,7 +4,7 @@ kind: Cluster apiVersion: kind.x-k8s.io/v1alpha4 name: rustfs-cluster nodes: - # Control plane node (无 extraPortMappings,避免与主机 80/443 冲突,访问通过 kubectl port-forward) + # Control plane: no extraPortMappings (avoids host 80/443 conflicts; use kubectl port-forward) - role: control-plane kubeadmConfigPatches: - | diff --git a/docs/DEVELOPMENT-NOTES.md b/docs/DEVELOPMENT-NOTES.md index 395af6f..77a3e8b 100755 --- a/docs/DEVELOPMENT-NOTES.md +++ b/docs/DEVELOPMENT-NOTES.md @@ -1,13 +1,21 @@ # Development Notes +> **Scope**: This file records **historical analysis sessions and design notes**. It is **not** the canonical development guide. 
For setup and quality gates, use [`DEVELOPMENT.md`](./DEVELOPMENT.md) and [`CONTRIBUTING.md`](../CONTRIBUTING.md). For current ports and operator behavior, see [`CLAUDE.md`](../CLAUDE.md) and the source tree. + +**Port terminology (do not confuse):** + +- **RustFS inside a Tenant** (Services created by the operator): S3 API **9000**, RustFS Console UI **9001** (see `src/types/v1alpha1/tenant/services.rs`). +- **Operator HTTP Console** (`cargo run -- console`, default **9090**): separate management API for the operator itself, not the same as the Tenant’s `{tenant}-console` Service. + ## Analysis Sessions ### Initial Bug Analysis (2025-11-05) See [CHANGELOG.md](../CHANGELOG.md) for complete list of bugs found and fixed. -**Key Discovery**: Through comprehensive analysis of RustFS source code, found 5 critical bugs: -- Wrong ports (console: 9090, IO: 90) +**Key Discovery** (historical—**since fixed** in this repo): Through analysis of RustFS source and early operator output, several mismatches were found versus RustFS defaults, including: + +- Wrong **RustFS** service ports in older operator revisions (e.g. IO **90** instead of **9000**, console **9090** instead of **9001** for the in-cluster RustFS Console Service) - Missing environment variables - Non-standard volume paths @@ -119,7 +127,7 @@ All implementation decisions verified against official RustFS source code, not a - Test resource structure creation - Test field propagation (scheduling, RBAC, etc.) 
- Test edge cases (None values, overrides) -- Currently: 25 tests, all passing +- Currently: 47 library unit tests (run `cargo test --all` for the exact count), all passing ### Integration Tests (Future) @@ -219,6 +227,4 @@ Always set in operator (users don't need to): --- -**Last Updated**: 2025-11-08 - -[[Index|← Back to Index]] +**Last Updated**: 2026-03-28 diff --git a/docs/DEVELOPMENT.md b/docs/DEVELOPMENT.md index 55386aa..c44b874 100755 --- a/docs/DEVELOPMENT.md +++ b/docs/DEVELOPMENT.md @@ -2,6 +2,16 @@ This guide will help you set up a local development environment for the RustFS Kubernetes Operator. +## Documentation map + +- **Code quality and PR gates** (format, clippy, tests, console lint) are defined in [`CONTRIBUTING.md`](../CONTRIBUTING.md) and enforced by [`Makefile`](../Makefile). Run **`make pre-commit`** from the repo root before opening a PR. + +- **`just` vs `make`**: The [`Justfile`](../Justfile) provides optional tasks (`just pre-commit` runs `fmt` + clippy + `cargo check` + `cargo nextest`; it does **not** run `console-web` checks). For parity with [`CONTRIBUTING.md`](../CONTRIBUTING.md) and [`Makefile`](../Makefile), prefer **`make pre-commit`**. + +- **This guide** focuses on toolchain setup, clusters, and day-to-day workflows—not on duplicating the full command matrix from CONTRIBUTING. + +- **[`DEVELOPMENT-NOTES.md`](./DEVELOPMENT-NOTES.md)** records past analysis sessions; it is **not** a substitute for CONTRIBUTING or this file. 
+ --- ## 📋 Prerequisites @@ -146,8 +156,8 @@ You can run it directly: # Format code before building just fmt && just build -# Run all checks before building -just pre-commit && just build MODE=release +# Run all checks before building (use make for full gate including console-web) +make pre-commit && just build MODE=release # Clean and rebuild cargo clean && cargo build --release @@ -412,7 +422,7 @@ kubectl get nodes ```bash # Set log level (optional) export RUST_LOG=debug -export RUST_LOG=rustfs_operator=debug,kube=info +export RUST_LOG=operator=debug,kube=info # Run operator in debug mode cargo run -- server @@ -483,7 +493,7 @@ kubectl get pvc -l rustfs.tenant=dev-minimal ```bash # Set detailed log levels export RUST_LOG=debug -export RUST_LOG=rustfs_operator=debug,kube=info,tracing=debug +export RUST_LOG=operator=debug,kube=info,tracing=debug # Run operator cargo run -- server @@ -596,17 +606,17 @@ The operator uses the `tracing` crate for structured logging. Log levels: export RUST_LOG=debug # Set per-module log levels -export RUST_LOG=rustfs_operator=debug,kube=info,tracing=warn +export RUST_LOG=operator=debug,kube=info,tracing=warn # Common configurations: # Development -export RUST_LOG=rustfs_operator=debug,kube=info +export RUST_LOG=operator=debug,kube=info # Production -export RUST_LOG=rustfs_operator=info,kube=warn +export RUST_LOG=operator=info,kube=warn # Troubleshooting -export RUST_LOG=rustfs_operator=trace,kube=debug +export RUST_LOG=operator=trace,kube=debug ``` #### Log Location @@ -742,12 +752,8 @@ cargo test -- --test-threads=1 4. **Run checks** ```bash - just pre-commit - # This runs: - # - fmt-check - # - clippy - # - check - # - test + make pre-commit + # For optional Just tasks instead, see "Documentation map" — `just pre-commit` differs (no console-web). ``` 5. 
**Run tests** @@ -778,14 +784,14 @@ cargo test -- --test-threads=1 The project enforces strict code quality standards: ```bash -# Run all checks -just pre-commit +# Run all checks (Rust + console-web; matches CONTRIBUTING / Makefile) +make pre-commit -# Run individual checks +# Optional: Justfile tasks (no console-web in `just pre-commit`) just fmt-check # Check formatting just clippy # Code linting just check # Compilation check -just test # Tests +just test # Tests (cargo nextest) ``` **Note**: The project has `deny`-level clippy rules: @@ -840,7 +846,7 @@ cargo clean && cargo build **Solution**: ```bash # Navigate to project directory, rustup will auto-install correct toolchain -cd /Users/hongwei/my/operator +cd /path/to/operator rustup show ``` diff --git a/examples/README.md b/examples/README.md index 84ecdcb..3594e7e 100755 --- a/examples/README.md +++ b/examples/README.md @@ -26,7 +26,8 @@ This directory contains example Tenant configurations for the RustFS Kubernetes **Important Notes:** - RustFS S3 API runs on port **9000** -- RustFS Console UI runs on port **9001** +- RustFS Console UI (per Tenant Service `{tenant}-console`) runs on port **9001** +- **Operator HTTP Console** (`cargo run -- console`, default **9090**) is the operator’s own management API—**not** the same as the RustFS Console UI Service above - **Credentials**: Use Secrets for production (see `secret-credentials-tenant.yaml`) - Default dev credentials: `rustfsadmin` / `rustfsadmin` ⚠️ **Change for production!** - Operator automatically sets: `RUSTFS_VOLUMES`, `RUSTFS_ADDRESS`, `RUSTFS_CONSOLE_ADDRESS`, `RUSTFS_CONSOLE_ENABLE` @@ -440,9 +441,11 @@ When you apply a Tenant, the operator creates: - RoleBinding 2. 
**Services** - - IO Service: `rustfs` (port 90→9000) - - Console Service: `{tenant}-console` (port 9090) - - Headless Service: `{tenant}-hl` (for StatefulSet DNS) + - IO Service: `rustfs` (S3 API, port **9000**) + - Console Service: `{tenant}-console` (RustFS web UI, port **9001**) + - Headless Service: `{tenant}-hl` (for StatefulSet DNS, port **9000**) + + **Note:** The operator process also exposes an optional **Operator HTTP Console** for cluster management (default **9090** when using `cargo run -- console` or the chart’s console Deployment). That is separate from the Tenant’s `{tenant}-console` Service (**9001**). 3. **StatefulSets** (one per pool) - Volume Claim Templates: `vol-0`, `vol-1`, ... diff --git a/examples/tenant-4nodes.yaml b/examples/tenant-4nodes.yaml index 8c4b919..9dd70ba 100644 --- a/examples/tenant-4nodes.yaml +++ b/examples/tenant-4nodes.yaml @@ -61,8 +61,8 @@ spec: value: info --- -# Credentials Secret for RustFS (至少 8 字符) -# 用户名: admin123, 密码: admin12345 +# Credentials Secret for RustFS (keys must be at least 8 characters) +# Example: accesskey admin123, secretkey admin12345 (dev only) apiVersion: v1 kind: Secret metadata: diff --git a/scripts/README.md b/scripts/README.md index f8313b3..91e71d7 100644 --- a/scripts/README.md +++ b/scripts/README.md @@ -1,49 +1,48 @@ -# Scripts 脚本目录 +# Scripts -本目录包含 RustFS Operator 的部署、清理与检查脚本,按用途归类。 +Shell scripts for deploying, cleaning up, and checking RustFS Operator resources, grouped by purpose. 
-## 目录结构 +## Layout ``` scripts/ -├── README.md # 本说明 -├── deploy/ # 部署脚本 -│ ├── deploy-rustfs.sh # Kind 单节点 + 简单 Tenant 一键部署 -│ └── deploy-rustfs-4node.sh # Kind 4 节点 + 4 节点 Tenant 部署 -├── cleanup/ # 清理脚本 -│ ├── cleanup-rustfs.sh # 清理 deploy-rustfs.sh 创建的资源 -│ └── cleanup-rustfs-4node.sh # 清理 deploy-rustfs-4node.sh 创建的资源 -├── check/ # 检查脚本 -│ └── check-rustfs.sh # 集群/Tenant 状态与访问信息 -└── test/ # 脚本自检 - └── script-test.sh # 校验各脚本语法 +├── README.md # This file +├── deploy/ # Deploy scripts +│ ├── deploy-rustfs.sh # Kind single-node + simple Tenant +│ └── deploy-rustfs-4node.sh # Kind 4-node + 4-node Tenant +├── cleanup/ # Cleanup scripts +│ ├── cleanup-rustfs.sh # Undo resources created by deploy-rustfs.sh +│ └── cleanup-rustfs-4node.sh # Undo resources created by deploy-rustfs-4node.sh +├── check/ # Check scripts +│ └── check-rustfs.sh # Cluster / Tenant status and access hints +└── test/ # Script self-check + └── script-test.sh # Shell syntax check for all scripts ``` -## 使用方式 +## Usage -**建议在项目根目录执行**(脚本内部会自动 `cd` 到项目根,因此从任意目录执行也可): +**Run from the repository root** (recommended). Scripts `cd` to the project root internally, so they also work if invoked from another working directory: ```bash -# 从项目根执行 ./scripts/deploy/deploy-rustfs.sh ./scripts/cleanup/cleanup-rustfs.sh ./scripts/check/check-rustfs.sh -# 4 节点部署与清理 +# 4-node deploy and cleanup ./scripts/deploy/deploy-rustfs-4node.sh ./scripts/cleanup/cleanup-rustfs-4node.sh -# 校验所有脚本语法 +# Validate shell syntax for all scripts ./scripts/test/script-test.sh ``` -## 依赖的路径约定 +## Path conventions -- 脚本依赖项目根目录下的 `deploy/`、`examples/`、`console-web/` 等路径。 -- Kind 4 节点配置:`deploy/kind/kind-rustfs-cluster.yaml`。 -- 脚本会先 `cd` 到项目根再执行,因此可从任意当前目录调用。 +- Scripts expect paths under the repo root: `deploy/`, `examples/`, `console-web/`, etc. +- Kind 4-node config: `deploy/kind/kind-rustfs-cluster.yaml`. +- Each script switches to the project root before running. 
-## 相关文档 +## Related docs -- 部署说明:[deploy/README.md](../deploy/README.md) -- Tenant 示例:[examples/README.md](../examples/README.md) +- Deployment: [deploy/README.md](../deploy/README.md) +- Tenant examples: [examples/README.md](../examples/README.md) diff --git a/scripts/check/check-rustfs.sh b/scripts/check/check-rustfs.sh index afd7b21..c6451b0 100755 --- a/scripts/check/check-rustfs.sh +++ b/scripts/check/check-rustfs.sh @@ -18,7 +18,7 @@ set -e -# 保证从项目根目录执行(可从任意位置调用本脚本) +# Always run from project root (script cds here; safe to invoke from any cwd) SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)" cd "$PROJECT_ROOT" diff --git a/scripts/cleanup/cleanup-rustfs-4node.sh b/scripts/cleanup/cleanup-rustfs-4node.sh index 754d57c..fa0a058 100755 --- a/scripts/cleanup/cleanup-rustfs-4node.sh +++ b/scripts/cleanup/cleanup-rustfs-4node.sh @@ -14,16 +14,16 @@ # limitations under the License. ################################################################################ -# RustFS 4-node 部署环境清理脚本 +# Cleanup script for the RustFS 4-node demo environment # -# 清理: Tenants, Namespace, RBAC, CRD, Kind 集群, 本地存储目录 -# 与 deploy-rustfs-4node.sh 配套使用 +# Removes: Tenants, Namespace, RBAC, CRD, Kind cluster, local storage dirs +# Pair with: deploy-rustfs-4node.sh # ################################################################################ set -e -# 保证从项目根目录执行(可从任意位置调用本脚本) +# Always run from project root (script cds here; safe to invoke from any cwd) SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." 
&& pwd)" cd "$PROJECT_ROOT" @@ -31,7 +31,7 @@ cd "$PROJECT_ROOT" CLUSTER_NAME="rustfs-cluster" OPERATOR_NAMESPACE="rustfs-system" -# 颜色 +# Colors RED='\033[0;31m' GREEN='\033[0;32m' YELLOW='\033[1;33m' @@ -46,42 +46,42 @@ log_error() { echo -e "${RED}[ERROR]${NC} $1"; } confirm_cleanup() { if [ "$FORCE" != "true" ]; then echo "" - log_warning "将删除以下资源:" - echo " - 所有 Tenants" - echo " - 命名空间: ${OPERATOR_NAMESPACE}" + log_warning "The following will be deleted:" + echo " - All Tenants" + echo " - Namespace: ${OPERATOR_NAMESPACE}" echo " - ClusterRole / ClusterRoleBinding: rustfs-operator, rustfs-operator-console" echo " - CRD: tenants.rustfs.com" - echo " - Kind 集群: ${CLUSTER_NAME}" + echo " - Kind cluster: ${CLUSTER_NAME}" if [ "$CLEAN_STORAGE" = "true" ]; then - echo " - 主机存储目录: /tmp/rustfs-storage-{1,2,3}" + echo " - Host storage dirs: /tmp/rustfs-storage-{1,2,3}" fi echo "" - read -p "确认删除? (yes/no): " confirm + read -p "Confirm deletion? (yes/no): " confirm if [ "$confirm" != "yes" ]; then - log_info "已取消" + log_info "Cancelled" exit 0 fi fi } delete_all_tenants() { - log_info "删除所有 Tenants..." + log_info "Deleting all Tenants..." if ! kubectl get crd tenants.rustfs.com >/dev/null 2>&1; then - log_info "CRD 不存在,跳过" + log_info "CRD not found, skipping" return 0 fi local tenants tenants=$(kubectl get tenants --all-namespaces -o name 2>/dev/null) || true if [ -z "$tenants" ]; then - log_info "无 Tenant,跳过" + log_info "No Tenants, skipping" return 0 fi echo "$tenants" | while read -r line; do [ -z "$line" ] && continue - log_info "删除 $line..." + log_info "Deleting $line..." 
kubectl delete "$line" --timeout=60s 2>/dev/null || kubectl delete "$line" --force --grace-period=0 2>/dev/null || true done @@ -92,56 +92,56 @@ delete_all_tenants() { count=$(kubectl get tenants --all-namespaces -o name 2>/dev/null | wc -l) count=$((count + 0)) if [ "$count" -eq 0 ]; then - log_success "Tenants 已删除" + log_success "Tenants deleted" return 0 fi sleep 3 elapsed=$((elapsed + 3)) done - log_warning "部分 Tenant 可能仍在终止中" + log_warning "Some Tenants may still be terminating" } delete_namespace() { - log_info "删除命名空间 ${OPERATOR_NAMESPACE}..." + log_info "Deleting namespace ${OPERATOR_NAMESPACE}..." if kubectl get namespace ${OPERATOR_NAMESPACE} >/dev/null 2>&1; then kubectl delete namespace ${OPERATOR_NAMESPACE} --timeout=120s - log_info "等待命名空间完全删除..." + log_info "Waiting for namespace to be fully removed..." local timeout=120 local elapsed=0 while kubectl get namespace ${OPERATOR_NAMESPACE} >/dev/null 2>&1; do if [ $elapsed -ge $timeout ]; then - log_warning "等待超时" + log_warning "Wait timed out" kubectl get namespace ${OPERATOR_NAMESPACE} -o json 2>/dev/null | \ jq '.spec.finalizers = []' 2>/dev/null | \ kubectl replace --raw /api/v1/namespaces/${OPERATOR_NAMESPACE}/finalize -f - 2>/dev/null || true break fi - echo -ne "${BLUE}[INFO]${NC} 等待命名空间删除... ${elapsed}s\r" + echo -ne "${BLUE}[INFO]${NC} Waiting for namespace deletion... ${elapsed}s\r" sleep 5 elapsed=$((elapsed + 5)) done echo "" - log_success "命名空间已删除" + log_success "Namespace removed" else - log_info "命名空间不存在,跳过" + log_info "Namespace does not exist, skipping" fi } delete_cluster_rbac() { - log_info "删除 ClusterRoleBinding 和 ClusterRole..." + log_info "Deleting ClusterRoleBinding and ClusterRole..." 
for name in rustfs-operator rustfs-operator-console; do kubectl delete clusterrolebinding "$name" --timeout=30s 2>/dev/null || true kubectl delete clusterrole "$name" --timeout=30s 2>/dev/null || true done - log_success "RBAC 已清理" + log_success "RBAC cleaned up" } delete_pv_and_storageclass() { - log_info "删除 PersistentVolumes 和 StorageClass..." + log_info "Deleting PersistentVolumes and StorageClass..." for i in $(seq 1 12); do kubectl delete pv rustfs-pv-${i} --timeout=30s 2>/dev/null || true @@ -149,11 +149,11 @@ delete_pv_and_storageclass() { kubectl delete storageclass local-storage --timeout=30s 2>/dev/null || true - log_success "PV 和 StorageClass 已清理" + log_success "PVs and StorageClass removed" } delete_crd() { - log_info "删除 CRD tenants.rustfs.com..." + log_info "Deleting CRD tenants.rustfs.com..." if kubectl get crd tenants.rustfs.com >/dev/null 2>&1; then kubectl delete crd tenants.rustfs.com --timeout=60s @@ -168,60 +168,60 @@ delete_crd() { sleep 2 elapsed=$((elapsed + 2)) done - log_success "CRD 已删除" + log_success "CRD deleted" else - log_info "CRD 不存在,跳过" + log_info "CRD not found, skipping" fi } delete_kind_cluster() { - log_info "删除 Kind 集群 ${CLUSTER_NAME}..." + log_info "Deleting Kind cluster ${CLUSTER_NAME}..." if ! command -v kind >/dev/null 2>&1; then - log_warning "未找到 kind,跳过" + log_warning "kind not found, skipping" return 0 fi if kind get clusters 2>/dev/null | grep -q "^${CLUSTER_NAME}$"; then kind delete cluster --name ${CLUSTER_NAME} - log_success "Kind 集群已删除" + log_success "Kind cluster deleted" else - log_info "Kind 集群不存在,跳过" + log_info "Kind cluster does not exist, skipping" fi } cleanup_storage_dirs() { - log_info "清理主机存储目录..." + log_info "Cleaning host storage directories..." 
for dir in /tmp/rustfs-storage-1 /tmp/rustfs-storage-2 /tmp/rustfs-storage-3; do if [ -d "$dir" ]; then rm -rf "$dir" - log_info "已删除 $dir" + log_info "Removed $dir" fi done - log_success "存储目录已清理" + log_success "Storage directories cleaned" } cleanup_local_files() { - log_info "清理本地生成文件..." + log_info "Cleaning generated local files..." if [ -f "deploy/rustfs-operator/crds/tenant-crd.yaml" ]; then rm -f deploy/rustfs-operator/crds/tenant-crd.yaml - log_info "已删除 tenant-crd.yaml" + log_info "Removed tenant-crd.yaml" fi - log_success "本地文件已清理" + log_success "Local files cleaned" } show_next_steps() { echo "" - log_info "重新部署:" + log_info "Redeploy with:" echo " ./scripts/deploy/deploy-rustfs-4node.sh" echo "" } -# 解析参数 +# Parse arguments FORCE="false" CLEAN_STORAGE="false" while [[ $# -gt 0 ]]; do @@ -238,31 +238,31 @@ while [[ $# -gt 0 ]]; do echo "Usage: $0 [-f|--force] [-s|--clean-storage]" echo "" echo "Options:" - echo " -f, --force 跳过确认" - echo " -s, --clean-storage 同时删除主机目录 /tmp/rustfs-storage-{1,2,3}" - echo " -h, --help 显示帮助" + echo " -f, --force Skip confirmation" + echo " -s, --clean-storage Also remove host dirs /tmp/rustfs-storage-{1,2,3}" + echo " -h, --help Show this help" exit 0 ;; *) - log_error "未知参数: $1" + log_error "Unknown argument: $1" exit 1 ;; esac done -trap 'log_error "清理被中断"; exit 1' INT +trap 'log_error "Cleanup interrupted"; exit 1' INT log_info "==========================================" -log_info " RustFS 4-node 环境清理" +log_info " RustFS 4-node environment cleanup" log_info "==========================================" confirm_cleanup echo "" -log_info "开始清理..." +log_info "Starting cleanup..." 
echo "" -# 若集群存在且可连接,先清理 K8s 资源 +# If the cluster exists and is reachable, clean Kubernetes resources first if kind get clusters 2>/dev/null | grep -q "^${CLUSTER_NAME}$"; then kubectl config use-context kind-${CLUSTER_NAME} 2>/dev/null || true if kubectl cluster-info >/dev/null 2>&1; then @@ -273,7 +273,7 @@ if kind get clusters 2>/dev/null | grep -q "^${CLUSTER_NAME}$"; then delete_crd fi else - log_info "Kind 集群 ${CLUSTER_NAME} 不存在,跳过 K8s 资源清理" + log_info "Kind cluster ${CLUSTER_NAME} not found, skipping Kubernetes cleanup" fi cleanup_local_files @@ -287,5 +287,5 @@ echo "" show_next_steps log_success "==========================================" -log_success " 清理完成" +log_success " Cleanup finished" log_success "==========================================" diff --git a/scripts/cleanup/cleanup-rustfs.sh b/scripts/cleanup/cleanup-rustfs.sh index ff9b4ff..d8eaec3 100755 --- a/scripts/cleanup/cleanup-rustfs.sh +++ b/scripts/cleanup/cleanup-rustfs.sh @@ -18,7 +18,7 @@ set -e -# 保证从项目根目录执行(可从任意位置调用本脚本) +# Always run from project root (script cds here; safe to invoke from any cwd) SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)" cd "$PROJECT_ROOT" @@ -376,5 +376,5 @@ done # Catch Ctrl+C trap 'log_error "Cleanup interrupted"; exit 1' INT -# 执行主流程 +# Main entry main "$@" diff --git a/scripts/deploy/deploy-rustfs-4node.sh b/scripts/deploy/deploy-rustfs-4node.sh index d757bef..eb90e74 100755 --- a/scripts/deploy/deploy-rustfs-4node.sh +++ b/scripts/deploy/deploy-rustfs-4node.sh @@ -14,33 +14,33 @@ # limitations under the License. 
################################################################################ -# RustFS Operator 4-node 一键部署脚本 +# One-shot deploy: RustFS Operator on a 4-node Kind cluster # -# 架构: Kind 多节点 (1 control-plane + 3 workers) + 4 节点 Tenant + 双 Console -# 与 MinIO deploy-minio-v5.sh 架构一致 +# Topology: multi-node Kind (1 control-plane + 3 workers) + 4-node Tenant + dual Console +# (Inspired by a similar MinIO multi-node demo layout.) # -# 功能: -# - 创建 Kind 集群 (kind-rustfs-cluster.yaml) -# - 创建 StorageClass 和 12 个 PersistentVolumes -# - 部署 RustFS Operator + Operator Console (API + Web) -# - 部署 4 节点 RustFS Tenant -# - 获取并显示访问信息 +# Steps: +# - Create Kind cluster (kind-rustfs-cluster.yaml) +# - Create StorageClass and 12 PersistentVolumes +# - Deploy RustFS Operator + Operator Console (API + Web) +# - Deploy 4-node RustFS Tenant +# - Print access information # -# 使用: -# ./deploy-rustfs-4node.sh +# Usage: +# ./scripts/deploy/deploy-rustfs-4node.sh # ################################################################################ set -e set -o pipefail -# 保证从项目根目录执行(可从任意位置调用本脚本) +# Always run from project root (script cds here; safe to invoke from any cwd) SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." 
&& pwd)" cd "$PROJECT_ROOT" ################################################################################ -# 颜色定义 +# Colors ################################################################################ RED='\033[0;31m' GREEN='\033[0;32m' @@ -50,7 +50,7 @@ CYAN='\033[0;36m' NC='\033[0m' ################################################################################ -# 配置变量 +# Configuration ################################################################################ CLUSTER_NAME="rustfs-cluster" OPERATOR_NAMESPACE="rustfs-system" @@ -61,7 +61,7 @@ WORKER_NODES=("${CLUSTER_NAME}-worker" "${CLUSTER_NAME}-worker2" "${CLUSTER_NAME RUSTFS_RUN_AS_UID=10001 ################################################################################ -# 日志函数 +# Logging ################################################################################ log_info() { echo -e "${BLUE}[INFO]${NC} $1" @@ -88,25 +88,25 @@ log_header() { log_step() { echo "" - log_info "步骤 $1: $2" + log_info "Step $1: $2" } ################################################################################ -# 错误处理 +# Error handling ################################################################################ trap 'error_handler $? 
$LINENO' ERR error_handler() { - log_error "脚本在第 $2 行失败,退出码: $1" - log_warning "部署失败,可运行 ./scripts/cleanup/cleanup-rustfs-4node.sh 清理环境" + log_error "Script failed at line $2 with exit code $1" + log_warning "Run ./scripts/cleanup/cleanup-rustfs-4node.sh to tear down" exit 1 } ################################################################################ -# 检查依赖工具 +# Dependencies ################################################################################ check_dependencies() { - log_step "0/12" "检查必要工具" + log_step "0/12" "Checking required tools" local missing_tools=() for cmd in kubectl kind docker cargo; do @@ -116,46 +116,46 @@ check_dependencies() { done if [ ${#missing_tools[@]} -ne 0 ]; then - log_error "缺少必要工具: ${missing_tools[*]}" - log_info "请先安装: kubectl, kind, docker, cargo (Rust)" + log_error "Missing tools: ${missing_tools[*]}" + log_info "Install: kubectl, kind, docker, cargo (Rust)" exit 1 fi - log_success "所有必要工具已安装" + log_success "All required tools are present" } ################################################################################ -# 修复 inotify 限制 (Kind 多节点常见问题) +# Fix inotify limits (common with Kind multi-node) ################################################################################ fix_inotify_limits() { if sudo sysctl -w fs.inotify.max_user_watches=524288 >/dev/null 2>&1 \ && sudo sysctl -w fs.inotify.max_user_instances=512 >/dev/null 2>&1; then - log_info "已应用 inotify 限制" + log_info "Applied inotify limits" else - log_warning "无法设置 inotify 限制 (可能需要 root)。若出现 'too many open files' 错误:" + log_warning "Could not set inotify limits (may need root). 
If you see 'too many open files':" echo " sudo sysctl fs.inotify.max_user_watches=524288" echo " sudo sysctl fs.inotify.max_user_instances=512" fi } ################################################################################ -# 创建 Kind 集群 +# Kind cluster ################################################################################ create_kind_cluster() { - log_step "1/12" "创建 Kind 集群" + log_step "1/12" "Creating Kind cluster" fix_inotify_limits if kind get clusters 2>/dev/null | grep -q "^${CLUSTER_NAME}$"; then - log_warning "集群 ${CLUSTER_NAME} 已存在" - read -p "是否删除并重建? (y/n) " -n 1 -r + log_warning "Cluster ${CLUSTER_NAME} already exists" + read -p "Delete and recreate? (y/n) " -n 1 -r echo if [[ $REPLY =~ ^[Yy]$ ]]; then - log_info "删除现有集群..." + log_info "Deleting existing cluster..." kind delete cluster --name ${CLUSTER_NAME} - log_success "现有集群已删除" + log_success "Existing cluster removed" else - log_info "使用现有集群" + log_info "Using existing cluster" kubectl config use-context kind-${CLUSTER_NAME} >/dev/null return 0 fi @@ -163,48 +163,48 @@ create_kind_cluster() { local kind_config="${PROJECT_ROOT}/deploy/kind/kind-rustfs-cluster.yaml" if [ ! -f "$kind_config" ]; then - log_error "配置文件不存在: $kind_config" + log_error "Config file not found: $kind_config" exit 1 fi - log_info "创建新集群 (1 control-plane + 3 workers,约需几分钟)..." + log_info "Creating cluster (1 control-plane + 3 workers; may take a few minutes)..." kind create cluster --config "$kind_config" kubectl config use-context kind-${CLUSTER_NAME} >/dev/null - log_success "Kind 集群已创建" + log_success "Kind cluster created" } ################################################################################ -# 等待集群就绪 +# Wait for nodes ################################################################################ wait_cluster_ready() { - log_step "2/12" "等待集群节点就绪" + log_step "2/12" "Waiting for nodes to be Ready" - log_info "等待所有节点就绪 (超时 5 分钟)..." + log_info "Waiting for all nodes (timeout 5m)..." 
kubectl wait --for=condition=Ready nodes --all --timeout=300s - # 允许在 control-plane 上调度 (可选,4 个 Pod 分布在 3 个 worker 上即可) + # Optional: allow scheduling on control-plane (4 pods can run on 3 workers) kubectl taint nodes ${CLUSTER_NAME}-control-plane node-role.kubernetes.io/control-plane:NoSchedule- 2>/dev/null || true - log_success "所有节点已就绪" + log_success "All nodes are Ready" kubectl get nodes -o wide } ################################################################################ -# 创建存储目录 +# Storage dirs on host ################################################################################ create_storage_dirs() { - log_step "3/12" "创建本地存储目录" + log_step "3/12" "Creating local storage directories" mkdir -p /tmp/rustfs-storage-{1,2,3} - log_success "本地存储目录已创建" + log_success "Local storage directories created" } ################################################################################ -# 创建 StorageClass +# StorageClass ################################################################################ create_storage_class() { - log_step "4/12" "创建 StorageClass" + log_step "4/12" "Creating StorageClass" cat </dev/null || true docker exec ${node} chown -R ${RUSTFS_RUN_AS_UID}:${RUSTFS_RUN_AS_UID} /mnt/data/vol${i} 2>/dev/null || true done done - log_success "所有卷目录已创建并设置权限" + log_success "Volume directories created with permissions" } ################################################################################ -# 生成并部署 CRD +# CRD ################################################################################ deploy_crd() { - log_step "7/12" "部署 Tenant CRD" + log_step "7/12" "Deploying Tenant CRD" local crd_dir="deploy/rustfs-operator/crds" local crd_file="${crd_dir}/tenant-crd.yaml" mkdir -p "$crd_dir" - log_info "生成 CRD..." + log_info "Generating CRD..." cargo run --release -- crd -f "$crd_file" - log_info "应用 CRD..." + log_info "Applying CRD..." kubectl apply -f "$crd_file" - log_info "等待 CRD 就绪..." + log_info "Waiting for CRD to be established..." 
kubectl wait --for condition=established --timeout=60s crd/tenants.rustfs.com - log_success "CRD 已部署" + log_success "CRD deployed" } ################################################################################ -# 创建命名空间 +# Namespace ################################################################################ create_namespace() { - log_step "8/12" "创建命名空间" + log_step "8/12" "Creating namespace" if kubectl get namespace ${OPERATOR_NAMESPACE} &>/dev/null; then - log_warning "命名空间 ${OPERATOR_NAMESPACE} 已存在" + log_warning "Namespace ${OPERATOR_NAMESPACE} already exists" else kubectl create namespace ${OPERATOR_NAMESPACE} - log_success "命名空间已创建" + log_success "Namespace created" fi } ################################################################################ -# 构建并部署 Operator + Console +# Build and deploy Operator + Console ################################################################################ deploy_operator_and_console() { - log_step "9/12" "构建并部署 Operator + Console" + log_step "9/12" "Building and deploying Operator + Console" local image_name="rustfs/operator:dev" local console_web_image="rustfs/console-web:dev" - log_info "构建 Operator (release)..." + log_info "Building Operator (release)..." cargo build --release - log_info "构建 Operator Docker 镜像..." + log_info "Building Operator container image..." docker build --network=host --no-cache -t "$image_name" . || { - log_error "Operator 镜像构建失败" + log_error "Operator image build failed" exit 1 } - log_info "构建 Console Web 镜像..." + log_info "Building Console Web image..." docker build --network=host --no-cache \ -t "$console_web_image" \ -f console-web/Dockerfile \ console-web/ || { - log_error "Console Web 镜像构建失败" + log_error "Console Web image build failed" exit 1 } - log_info "加载镜像到 Kind 集群..." + log_info "Loading images into Kind..." 
kind load docker-image "$image_name" --name ${CLUSTER_NAME} || { - log_error "加载 Operator 镜像失败" + log_error "Failed to load Operator image into Kind" exit 1 } kind load docker-image "$console_web_image" --name ${CLUSTER_NAME} || { - log_error "加载 Console Web 镜像失败" + log_error "Failed to load Console Web image into Kind" exit 1 } - # 若存在 rustfs 服务端镜像,一并加载 + # Load RustFS server image if present locally if docker images --format '{{.Repository}}:{{.Tag}}' | grep -q '^rustfs/rustfs:latest$'; then - log_info "加载 RustFS 服务端镜像..." - kind load docker-image rustfs/rustfs:latest --name ${CLUSTER_NAME} 2>/dev/null || log_warning "rustfs/rustfs:latest 加载失败,Tenant 可能需从 registry 拉取" + log_info "Loading RustFS server image..." + kind load docker-image rustfs/rustfs:latest --name ${CLUSTER_NAME} 2>/dev/null || log_warning "Failed to load rustfs/rustfs:latest; Tenant may pull from registry" else - log_warning "未找到 rustfs/rustfs:latest 本地镜像,Tenant 将尝试从 registry 拉取" + log_warning "rustfs/rustfs:latest not found locally; Tenant will try to pull from registry" fi - log_info "创建 Console JWT Secret..." + log_info "Creating Console JWT Secret..." local jwt_secret jwt_secret=$(openssl rand -base64 32 2>/dev/null || head -c 32 /dev/urandom | base64) kubectl create secret generic rustfs-operator-console-secret \ @@ -364,7 +364,7 @@ deploy_operator_and_console() { --from-literal=jwt-secret="$jwt_secret" \ --dry-run=client -o yaml | kubectl apply -f - - log_info "部署 Operator、Console API、Console Web..." + log_info "Deploying Operator, Console API, Console Web..." kubectl apply -f deploy/k8s-dev/operator-rbac.yaml kubectl apply -f deploy/k8s-dev/console-rbac.yaml kubectl apply -f deploy/k8s-dev/operator-deployment.yaml @@ -373,38 +373,38 @@ deploy_operator_and_console() { kubectl apply -f deploy/k8s-dev/console-frontend-deployment.yaml kubectl apply -f deploy/k8s-dev/console-frontend-service.yaml - log_info "等待 Operator 就绪 (超时 5 分钟)..." + log_info "Waiting for Operator (timeout 5m)..." 
kubectl wait --for=condition=available --timeout=300s \ deployment/rustfs-operator -n ${OPERATOR_NAMESPACE} - log_info "等待 Operator Console 就绪..." + log_info "Waiting for Operator Console..." kubectl wait --for=condition=available --timeout=300s \ deployment/rustfs-operator-console -n ${OPERATOR_NAMESPACE} - log_info "等待 Console Web 就绪..." + log_info "Waiting for Console Web..." kubectl wait --for=condition=available --timeout=300s \ deployment/rustfs-operator-console-frontend -n ${OPERATOR_NAMESPACE} - log_success "Operator 和 Console 已部署" + log_success "Operator and Console deployed" kubectl get pods -n ${OPERATOR_NAMESPACE} } ################################################################################ -# 部署 Tenant (4 节点) +# Tenant (4 nodes) ################################################################################ deploy_tenant() { - log_step "10/12" "部署 RustFS Tenant (4 节点)" + log_step "10/12" "Deploying RustFS Tenant (4 nodes)" if [ ! -f "examples/tenant-4nodes.yaml" ]; then - log_error "配置文件 examples/tenant-4nodes.yaml 不存在" + log_error "File not found: examples/tenant-4nodes.yaml" exit 1 fi kubectl apply -f examples/tenant-4nodes.yaml - log_success "Tenant 已提交" + log_success "Tenant applied" - log_info "等待 Tenant Pods 启动 (约需几分钟)..." + log_info "Waiting for Tenant pods (may take a few minutes)..." sleep 15 local max_attempts=60 @@ -420,17 +420,17 @@ deploy_tenant() { --no-headers 2>/dev/null | wc -l | tr -d ' ') if [ "$ready_pods" -ge "$expected_pods" ]; then - log_success "Tenant Pods 已启动 ($ready_pods/$expected_pods Running)" + log_success "Tenant pods running ($ready_pods/$expected_pods Running)" break fi - log_info "等待 Pods 启动... ($ready_pods/$expected_pods ready)" + log_info "Waiting for pods... 
($ready_pods/$expected_pods ready)" sleep 5 attempt=$((attempt + 1)) done if [ "$ready_pods" -lt "$expected_pods" ]; then - log_warning "部分 Pods 可能还在启动中 ($ready_pods/$expected_pods)" + log_warning "Some pods may still be starting ($ready_pods/$expected_pods)" fi kubectl get pods -n ${OPERATOR_NAMESPACE} -l rustfs.tenant=${TENANT_NAME} @@ -438,110 +438,108 @@ deploy_tenant() { } ################################################################################ -# 获取访问信息 +# Access info ################################################################################ get_access_info() { - log_step "11/12" "获取访问信息" + log_step "11/12" "Gathering access information" - # Operator Console Token - log_info "获取 Operator Console Token..." + log_info "Fetching Operator Console token..." if kubectl get secret rustfs-operator-console-secret -n ${OPERATOR_NAMESPACE} &>/dev/null; then OPERATOR_TOKEN=$(kubectl create token rustfs-operator -n ${OPERATOR_NAMESPACE} --duration=24h 2>/dev/null || echo "") if [ -n "$OPERATOR_TOKEN" ]; then echo "$OPERATOR_TOKEN" > /tmp/rustfs-operator-console-token.txt - log_success "Token 已保存到 /tmp/rustfs-operator-console-token.txt" + log_success "Token saved to /tmp/rustfs-operator-console-token.txt" fi fi - # Tenant 状态 if kubectl get tenant ${TENANT_NAME} -n ${OPERATOR_NAMESPACE} &>/dev/null; then TENANT_STATE=$(kubectl get tenant ${TENANT_NAME} -n ${OPERATOR_NAMESPACE} \ -o jsonpath='{.status.currentState}' 2>/dev/null || echo "Unknown") - log_info "Tenant 状态: ${TENANT_STATE}" + log_info "Tenant status: ${TENANT_STATE}" fi } ################################################################################ -# 显示部署摘要 +# Summary ################################################################################ show_summary() { - log_step "12/12" "部署摘要" + log_step "12/12" "Deployment summary" - log_header "部署完成" + log_header "Deployment complete" echo "" - echo -e "${BLUE}📊 集群信息${NC}" - echo " 集群名称: ${CLUSTER_NAME}" - echo " 节点数量: 4 (1 control-plane + 3 
workers)" + echo -e "${BLUE}📊 Cluster${NC}" + echo " Name: ${CLUSTER_NAME}" + echo " Nodes: 4 (1 control-plane + 3 workers)" echo "" - echo -e "${BLUE}📦 已部署组件${NC}" + echo -e "${BLUE}📦 Deployed${NC}" echo " Operator + Console API + Console Web" echo " Tenant: ${TENANT_NAME} (4 servers, 2 volumes each)" echo "" echo -e "${GREEN}======================================${NC}" - echo -e "${GREEN}🚀 访问信息${NC}" + echo -e "${GREEN}🚀 Access${NC}" echo -e "${GREEN}======================================${NC}" echo "" - echo -e "${YELLOW}1. Operator Console Web (管理 Tenant)${NC}" - echo " 用途: 创建/删除/管理 Tenant" - echo -e " 访问: ${CYAN}http://localhost:8080${NC}" - echo " 认证: K8s Token (见下方)" + echo -e "${YELLOW}1. Operator Console Web (manage Tenants)${NC}" + echo " Use: create / delete / manage Tenants" + echo -e " URL: ${CYAN}http://localhost:8080${NC}" + echo " Auth: Kubernetes token (see below)" echo "" - echo " 启动端口转发:" + echo " Port-forward:" echo -e " ${BLUE}kubectl port-forward svc/rustfs-operator-console-frontend -n ${OPERATOR_NAMESPACE} 8080:80${NC}" echo "" - echo " 获取 Token:" + echo " Get token:" echo -e " ${BLUE}kubectl create token rustfs-operator -n ${OPERATOR_NAMESPACE} --duration=24h${NC}" echo "" - echo -e "${YELLOW}2. Tenant Console (管理数据)${NC}" - echo " 用途: 上传/下载文件,管理 Buckets" - echo -e " 访问: ${CYAN}http://localhost:9001${NC}" - echo -e " 用户名: ${GREEN}admin123${NC}" - echo -e " 密码: ${GREEN}admin12345${NC}" + echo -e "${YELLOW}2. Tenant Console (RustFS UI)${NC}" + echo " Use: upload/download, buckets" + echo -e " URL: ${CYAN}http://localhost:9001${NC}" + echo -e " Username: ${GREEN}admin123${NC}" + echo -e " Password: ${GREEN}admin12345${NC}" echo "" - echo " 启动端口转发:" + echo " Port-forward:" echo -e " ${BLUE}kubectl port-forward svc/${TENANT_NAME}-console -n ${OPERATOR_NAMESPACE} 9001:9001${NC}" echo "" echo -e "${YELLOW}3. 
RustFS S3 API${NC}" - echo -e " 访问: ${CYAN}http://localhost:9000${NC}" + echo -e " URL: ${CYAN}http://localhost:9000${NC}" echo -e " Access Key: ${GREEN}admin123${NC}" echo -e " Secret Key: ${GREEN}admin12345${NC}" echo "" - echo " 启动端口转发:" + echo " Port-forward:" echo -e " ${BLUE}kubectl port-forward svc/${TENANT_NAME}-io -n ${OPERATOR_NAMESPACE} 9000:9000${NC}" echo "" echo -e "${GREEN}======================================${NC}" - echo -e "${GREEN}📝 常用命令${NC}" + echo -e "${GREEN}📝 Useful commands${NC}" echo -e "${GREEN}======================================${NC}" echo "" - echo "查看资源:" + echo "Resources:" echo -e " ${BLUE}kubectl get all -n ${OPERATOR_NAMESPACE}${NC}" echo -e " ${BLUE}kubectl get tenant -n ${OPERATOR_NAMESPACE}${NC}" echo "" - echo "查看日志:" + echo "Logs:" echo -e " ${BLUE}kubectl logs -f deployment/rustfs-operator -n ${OPERATOR_NAMESPACE}${NC}" echo -e " ${BLUE}kubectl logs -f ${TENANT_NAME}-primary-0 -n ${OPERATOR_NAMESPACE}${NC}" echo "" - echo "销毁环境:" + echo "Tear down:" echo -e " ${RED}./scripts/cleanup/cleanup-rustfs-4node.sh${NC}" echo "" - log_success "部署完成,可访问 Operator Console 和 Tenant Console" + log_success "Done. Use Operator Console and Tenant Console as above." 
echo "" } ################################################################################ -# 主函数 +# Main ################################################################################ main() { - log_header "RustFS Operator 4-node 一键部署" - log_info "架构: Kind 多节点 + 4 节点 Tenant + 双 Console" + log_header "RustFS Operator 4-node deploy" + log_info "Topology: Kind multi-node + 4-node Tenant + dual Console" echo "" check_dependencies @@ -559,16 +557,15 @@ main() { show_summary } -# 解析参数 case "${1:-}" in -h|--help) echo "Usage: $0" echo "" - echo "RustFS Operator 4-node 一键部署 (Kind 多节点 + 4 节点 Tenant + 双 Console)" + echo "RustFS Operator 4-node demo (Kind multi-node + 4-node Tenant + dual Console)" echo "" - echo "依赖: kubectl, kind, docker, cargo (Rust)" + echo "Requires: kubectl, kind, docker, cargo (Rust)" echo "" - echo "清理: ./scripts/cleanup/cleanup-rustfs-4node.sh" + echo "Cleanup: ./scripts/cleanup/cleanup-rustfs-4node.sh" exit 0 ;; esac diff --git a/scripts/deploy/deploy-rustfs.sh b/scripts/deploy/deploy-rustfs.sh index 32020ed..07573d7 100755 --- a/scripts/deploy/deploy-rustfs.sh +++ b/scripts/deploy/deploy-rustfs.sh @@ -19,7 +19,7 @@ set -e -# 保证从项目根目录执行(可从任意位置调用本脚本) +# Always run from project root (script cds here; safe to invoke from any cwd) SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" PROJECT_ROOT="$(cd "$SCRIPT_DIR/../.." 
&& pwd)" cd "$PROJECT_ROOT" @@ -429,5 +429,5 @@ case "${1:-}" in ;; esac -# 执行主流程 +# Main entry main "$@" diff --git a/src/console/error.rs b/src/console/error.rs index 0304532..5a33d47 100755 --- a/src/console/error.rs +++ b/src/console/error.rs @@ -20,7 +20,7 @@ use axum::{ use serde::Serialize; use snafu::Snafu; -/// Console API 错误类型 +/// Console HTTP API error type #[derive(Debug, Snafu)] #[snafu(visibility(pub))] pub enum Error { @@ -52,7 +52,7 @@ pub enum Error { Json { source: serde_json::Error }, } -/// 将 kube::Error 映射为合适的 Console Error(404 -> NotFound, 409 -> Conflict) +/// Map `kube::Error` to a console error (404 -> NotFound, 409 -> Conflict). pub fn map_kube_error(e: kube::Error, not_found_resource: impl Into) -> Error { match &e { kube::Error::Api(ae) if ae.code == 404 => Error::NotFound { @@ -65,7 +65,7 @@ pub fn map_kube_error(e: kube::Error, not_found_resource: impl Into) -> } } -/// API 错误响应格式 +/// JSON body shape for API errors #[derive(Serialize)] struct ErrorResponse { error: String, diff --git a/src/console/handlers/auth.rs b/src/console/handlers/auth.rs index 6097b94..76d3f14 100755 --- a/src/console/handlers/auth.rs +++ b/src/console/handlers/auth.rs @@ -24,7 +24,7 @@ use crate::console::{ }; use crate::types::v1alpha1::tenant::Tenant; -/// 登录处理 +/// Exchange a Kubernetes bearer token for a session cookie. 
// TOKEN=$(kubectl create token rustfs-operator -n rustfs-system --duration=24h) // curl -X POST http://localhost:9090/api/v1/login \ // -H "Content-Type: application/json" \ @@ -35,10 +35,10 @@ pub async fn login( ) -> Result { tracing::info!("Login attempt"); - // 验证 K8s Token (尝试创建客户端并测试权限) + // Validate the bearer token by building a client let client = create_k8s_client(&req.token).await?; - // 测试权限 - 尝试列出 Tenant (limit 1) + // Permission smoke test: list Tenant CRs (limit 1) let api: kube::Api = kube::Api::all(client); api.list(&kube::api::ListParams::default().limit(1)) .await @@ -49,7 +49,7 @@ pub async fn login( } })?; - // 生成 JWT + // Issue signed session JWT let claims = Claims::new(req.token); let token = encode( &Header::default(), @@ -58,7 +58,7 @@ pub async fn login( ) .context(error::JwtSnafu)?; - // 设置 HttpOnly Cookie + // HttpOnly session cookie let cookie = format!( "session={}; Path=/; HttpOnly; SameSite=Strict; Max-Age={}", token, @@ -76,9 +76,9 @@ pub async fn login( )) } -/// 登出处理 +/// Clear the session cookie. pub async fn logout() -> impl IntoResponse { - // 清除 Cookie + // Expire the session cookie let cookie = "session=; Path=/; HttpOnly; Max-Age=0"; let headers = [(header::SET_COOKIE, cookie)]; @@ -91,7 +91,7 @@ pub async fn logout() -> impl IntoResponse { ) } -/// 检查会话 +/// Return session validity and expiry from JWT claims. 
 pub async fn session_check(Extension(claims): Extension<Claims>) -> Json<SessionResponse> {
     let expires_at =
         chrono::DateTime::from_timestamp(claims.exp as i64, 0).map(|dt| dt.to_rfc3339());
@@ -102,16 +102,16 @@ pub async fn session_check(Extension(claims): Extension<Claims>) -> Json
 async fn create_k8s_client(token: &str) -> Result<Client> {
-    // 使用默认配置加载
+    // Default kubeconfig (in-cluster or KUBECONFIG)
     let mut config = kube::Config::infer()
         .await
         .map_err(|e| Error::InternalServer {
             message: format!("Failed to load kubeconfig: {}", e),
         })?;

-    // 覆盖 token
+    // Replace auth with the user's token
     config.auth_info.token = Some(token.to_string().into());

     Client::try_from(config).map_err(|e| Error::InternalServer {
diff --git a/src/console/handlers/cluster.rs b/src/console/handlers/cluster.rs
index 6fb3fa7..7ac4103 100755
--- a/src/console/handlers/cluster.rs
+++ b/src/console/handlers/cluster.rs
@@ -21,7 +21,7 @@ use axum::{Extension, Json};
 use k8s_openapi::api::core::v1 as corev1;
 use kube::{Api, Client, ResourceExt, api::ListParams};

-/// 列出所有节点
+/// List all nodes with capacity/allocatable strings.
 pub async fn list_nodes(Extension(claims): Extension<Claims>) -> Result<Json<NodeListResponse>> {
     let client = create_client(&claims).await?;
     let api: Api<corev1::Node> = Api::all(client);
@@ -114,7 +114,7 @@ pub async fn list_nodes(Extension(claims): Extension<Claims>) -> Result
 pub async fn list_namespaces(
     Extension(claims): Extension<Claims>,
 ) -> Result<Json<NamespaceListResponse>> {
@@ -143,7 +143,7 @@
     Ok(Json(NamespaceListResponse { namespaces: items }))
 }

-/// 创建 Namespace
+/// Create a namespace by name.
 pub async fn create_namespace(
     Extension(claims): Extension<Claims>,
     Json(req): Json<CreateNamespaceRequest>,
@@ -178,7 +178,7 @@
     }))
 }

-/// 获取集群资源摘要
+/// Sum CPU/memory across all nodes (capacity vs allocatable).
 pub async fn get_cluster_resources(
     Extension(claims): Extension<Claims>,
 ) -> Result<Json<ClusterResourcesResponse>> {
@@ -192,7 +192,7 @@
     let total_nodes = nodes.items.len();

-    // 累加所有节点的 capacity 与 allocatable(数值累加后格式化)
+    // Sum each node's capacity/allocatable, then format
     let (total_cpu_millicores, total_memory_bytes, alloc_cpu_millicores, alloc_memory_bytes) =
         nodes.items.iter().fold(
             (0i64, 0i64, 0i64, 0i64),
@@ -248,8 +248,8 @@
     }))
 }

-/// 将 Kubernetes CPU Quantity 解析为毫核 (millicores)。
-/// 支持 "1"(核)、"1000m"、"500m" 等格式。
+/// Parse a Kubernetes CPU quantity to millicores.
+/// Accepts whole cores (`1`), millicores (`500m`, `1000m`), nano (`n`), micro (`u`).
 pub(crate) fn parse_cpu_to_millicores(s: &str) -> i64 {
     let s = s.trim();
     if s.is_empty() {
@@ -276,7 +276,7 @@
     0
 }

-/// 将毫核格式化为 CPU 字符串(如 "8" 或 "500m")。
+/// Format millicores as a Kubernetes-style CPU string (e.g. `8` or `500m`).
 pub(crate) fn format_cpu_from_millicores(m: i64) -> String {
     if m == 0 {
         return "0".to_string();
     }
@@ -288,8 +288,8 @@
     }
 }

-/// 将 Kubernetes Memory Quantity 解析为字节。
-/// 支持 "1Gi"、"1G"、"1024Mi"、"1Ki" 等格式。
+/// Parse a Kubernetes memory quantity to bytes.
+/// Supports binary (Gi, Mi, Ki, …) and decimal (G, M, k, …) suffixes.
 pub(crate) fn parse_memory_to_bytes(s: &str) -> i64 {
     let s = s.trim();
     if s.is_empty() {
@@ -326,7 +326,7 @@
     (n * multiplier as f64) as i64
 }

-/// 将字节格式化为可读内存字符串(优先 Gi)。
+/// Format bytes as a compact memory string (prefer Gi).
 pub(crate) fn format_memory_from_bytes(b: i64) -> String {
     const GIB: i64 = 1024 * 1024 * 1024;
     const MIB: i64 = 1024 * 1024;
@@ -349,7 +349,7 @@
     }
 }

-/// 创建 Kubernetes 客户端
+/// Build a client using the session bearer token.
 async fn create_client(claims: &Claims) -> Result<Client> {
     let mut config = kube::Config::infer()
         .await
diff --git a/src/console/handlers/events.rs b/src/console/handlers/events.rs
index e98c97b..5a88eb7 100755
--- a/src/console/handlers/events.rs
+++ b/src/console/handlers/events.rs
@@ -21,8 +21,10 @@ use axum::{Extension, Json, extract::Path};
 use k8s_openapi::api::core::v1 as corev1;
 use kube::{Api, Client, api::ListParams};

-/// 列出 Tenant 相关的 Events。
-/// 若 K8s API 失败(权限、field selector 等),返回空列表并打日志,避免 500 导致详情页整页失败。
+/// List Kubernetes events for objects named like the tenant.
+///
+/// On list failure (RBAC, field selector, etc.) returns an empty list and logs a warning so the
+/// tenant detail page does not 500.
 pub async fn list_tenant_events(
     Path((namespace, tenant)): Path<(String, String)>,
     Extension(claims): Extension<Claims>,
@@ -71,7 +73,7 @@
     Ok(Json(EventListResponse { events: items }))
 }

-/// 创建 Kubernetes 客户端
+/// Build a client using the session bearer token.
 async fn create_client(claims: &Claims) -> Result<Client> {
     let mut config = kube::Config::infer()
         .await
diff --git a/src/console/handlers/pods.rs b/src/console/handlers/pods.rs
index e23db9c..9ace612 100755
--- a/src/console/handlers/pods.rs
+++ b/src/console/handlers/pods.rs
@@ -30,7 +30,7 @@ use kube::{
     api::{DeleteParams, ListParams, LogParams},
 };

-/// 校验 Pod 是否属于指定 Tenant(通过 rustfs.tenant 标签)
+/// Ensure the pod carries the `rustfs.tenant` label for this tenant.
 fn ensure_pod_belongs_to_tenant(
     pod: &corev1::Pod,
     tenant_name: &str,
@@ -49,7 +49,7 @@
     Ok(())
 }

-/// 列出 Tenant 的所有 Pods
+/// List pods labeled for this tenant.
 pub async fn list_pods(
     Path((namespace, tenant_name)): Path<(String, String)>,
     Extension(claims): Extension<Claims>,
@@ -57,7 +57,7 @@
     let client = create_client(&claims).await?;
     let api: Api<corev1::Pod> = Api::namespaced(client, &namespace);

-    // 查询带有 Tenant 标签的 Pods
+    // List pods with the tenant label
     let pods = api
         .list(&ListParams::default().labels(&format!("rustfs.tenant={}", tenant_name)))
         .await
@@ -70,7 +70,7 @@
         let status = pod.status.as_ref();
         let spec = pod.spec.as_ref();

-        // 提取 Pool 名称(从 Pod 名称中解析)
+        // Derive pool name from pod name prefix
         let pool = pod
             .metadata
             .labels
@@ -79,12 +79,12 @@
             .cloned()
             .unwrap_or_else(|| "unknown".to_string());

-        // Pod 阶段
+        // Phase
         let phase = status
             .and_then(|s| s.phase.clone())
             .unwrap_or_else(|| "Unknown".to_string());

-        // 整体状态
+        // Aggregate status string
         let pod_status = if let Some(status) = status {
             if let Some(conditions) = &status.conditions {
                 if conditions
@@ -102,10 +102,10 @@
             "Unknown"
         };

-        // 节点名称
+        // Node
         let node = spec.and_then(|s| s.node_name.clone());

-        // 容器就绪状态
+        // Ready count x/y
         let (ready_count, total_count) = if let Some(status) = status {
             let total = status
                 .container_statuses
@@ -122,13 +122,13 @@
             (0, 0)
         };

-        // 重启次数
+        // Restart count
         let restarts = status
             .and_then(|s| s.container_statuses.as_ref())
             .map(|containers| containers.iter().map(|c| c.restart_count).sum::<i32>())
             .unwrap_or(0);

-        // 创建时间和 Age
+        // Created timestamp and age
         let created_at = pod
             .metadata
             .creation_timestamp
@@ -161,7 +161,7 @@
     Ok(Json(PodListResponse { pods: pod_list }))
 }

-/// 删除 Pod
+/// Delete a pod.
 pub async fn delete_pod(
     Path((namespace, tenant_name, pod_name)): Path<(String, String, String)>,
     Extension(claims): Extension<Claims>,
@@ -188,7 +188,7 @@
     }))
 }

-/// 重启 Pod(通过删除实现)
+/// Restart by deleting the pod (StatefulSet recreates it).
 pub async fn restart_pod(
     Path((namespace, tenant_name, pod_name)): Path<(String, String, String)>,
     Extension(claims): Extension<Claims>,
@@ -203,7 +203,7 @@
         .map_err(|e| error::map_kube_error(e, format!("Pod '{}'", pod_name)))?;
     ensure_pod_belongs_to_tenant(&pod, &tenant_name, &pod_name)?;

-    // 删除 Pod,StatefulSet 控制器会自动重建
+    // Delete; the StatefulSet controller recreates the pod
     let delete_params = if req.force {
         DeleteParams {
             grace_period_seconds: Some(0),
@@ -226,7 +226,7 @@
     }))
 }

-/// 获取 Pod 详情
+/// Full pod detail for the UI.
 pub async fn get_pod_details(
     Path((namespace, tenant_name, pod_name)): Path<(String, String, String)>,
     Extension(claims): Extension<Claims>,
@@ -240,7 +240,7 @@
         .map_err(|e| error::map_kube_error(e, format!("Pod '{}'", pod_name)))?;
     ensure_pod_belongs_to_tenant(&pod, &tenant_name, &pod_name)?;

-    // 提取详细信息
+    // Map the Kubernetes pod to a DTO
     let pool = pod
         .metadata
         .labels
@@ -252,7 +252,7 @@
     let status_info = pod.status.as_ref();
     let spec = pod.spec.as_ref();

-    // 构建状态
+    // Phase + conditions
     let status = PodStatus {
         phase: status_info
             .and_then(|s| s.phase.clone())
@@ -282,7 +282,7 @@
             .map(|t| t.0.to_rfc3339()),
     };

-    // 容器信息
+    // Container statuses
     let containers =
         if let Some(container_statuses) = status_info.and_then(|s| s.container_statuses.as_ref())
         {
@@ -328,7 +328,7 @@
             Vec::new()
         };

-    // Volume 信息
+    // Volume mounts
     let volumes = spec
         .and_then(|s| s.volumes.as_ref())
         .map(|vols| {
@@ -374,7 +374,7 @@
     }))
 }

-/// 获取 Pod 日志(流式传输)
+/// Stream pod logs (`follow` supported).
 pub async fn get_pod_logs(
     Path((namespace, tenant_name, pod_name)): Path<(String, String, String)>,
     Query(query): Query<LogsQuery>,
@@ -389,7 +389,7 @@
         .map_err(|e| error::map_kube_error(e, format!("Pod '{}'", pod_name)))?;
     ensure_pod_belongs_to_tenant(&pod, &tenant_name, &pod_name)?;

-    // 构建日志参数
+    // Build `LogParams`
     let mut log_params = LogParams {
         container: query.container,
         follow: query.follow,
@@ -398,7 +398,7 @@
         ..Default::default()
     };

-    // since_time 校验:仅当时间不晚于当前时间时使用
+    // Only honor `since_time` when it is not in the future
     if let Some(since_time) = &query.since_time
         && let Ok(dt) = chrono::DateTime::parse_from_rfc3339(since_time)
     {
@@ -408,28 +408,27 @@
         if duration.num_seconds() >= 0 {
             log_params.since_seconds = Some(duration.num_seconds());
         }
-        // 若 since_time 在未来,忽略该参数(不设置 since_seconds)
+        // Future timestamps are ignored (no since_seconds)
     }

-    // 获取日志流
+    // Start the log stream
     let log_stream = api
         .log_stream(&pod_name, &log_params)
         .await
         .map_err(|e| error::map_kube_error(e, format!("Pod '{}'", pod_name)))?;

-    // 将字节流转换为可用的 Body
-    // kube-rs 返回的是 impl AsyncBufRead,我们需要逐行读取并转换为字节流
+    // Turn the kube-rs `AsyncBufRead` log stream into an HTTP body
     use futures::io::AsyncBufReadExt;
     let lines = log_stream.lines();

-    // 转换为字节流
+    // Map each line back to bytes for the body stream
     let byte_stream = lines.map_ok(|line| format!("{}\n", line).into_bytes());

-    // 返回流式响应
+    // Chunked streaming response
    Ok(Body::from_stream(byte_stream).into_response())
 }

-/// 创建 Kubernetes 客户端
+/// Build a client using the session bearer token.
 async fn create_client(claims: &Claims) -> Result<Client> {
     let mut config = kube::Config::infer()
         .await
@@ -444,7 +443,7 @@
     })
 }

-/// 格式化时间间隔
+/// Human-readable age since `created_at`.
 fn format_duration(duration: chrono::Duration) -> String {
     let days = duration.num_days();
     let hours = duration.num_hours() % 24;
diff --git a/src/console/handlers/pools.rs b/src/console/handlers/pools.rs
index 687832c..cba0bd7 100755
--- a/src/console/handlers/pools.rs
+++ b/src/console/handlers/pools.rs
@@ -28,7 +28,7 @@ use crate::types::v1alpha1::{
     tenant::Tenant,
 };

-/// Kubernetes 资源名称校验(RFC 1123 子域名:小写字母数字、连字符,1-63 字符)
+/// Validate a Kubernetes resource name (RFC 1123 subdomain: lowercase alphanumeric + hyphen, 1–63 chars).
 fn is_valid_k8s_name(s: &str) -> bool {
     if s.is_empty() || s.len() > 63 {
         return false;
@@ -48,7 +48,7 @@
     s.chars().last().is_some_and(|c| c != '-')
 }

-/// Kubernetes Quantity 格式校验(如 10Gi、100M、1)
+/// Loose validation for a Kubernetes resource quantity (e.g. `10Gi`, `100M`, `1`).
 fn is_valid_k8s_quantity(s: &str) -> bool {
     if s.is_empty() || s.len() > 32 {
         return false;
@@ -57,7 +57,7 @@
     if s.is_empty() {
         return false;
     }
-    // 允许:纯数字、数字+小数、数字+后缀(E|P|T|G|M|K|Ei|Pi|Ti|Gi|Mi|Ki)
+    // Allow: integer, decimal, or number + suffix (E|P|T|G|M|K|Ei|Pi|Ti|Gi|Mi|Ki)
     let bytes = s.as_bytes();
     let mut i = 0;
     while i < bytes.len() && (bytes[i] == b'.' || (bytes[i] as char).is_ascii_digit()) {
@@ -69,8 +69,8 @@
     if i < bytes.len() {
         let suffix = std::str::from_utf8(&bytes[i..]).unwrap_or("");
         const VALID: &[&str] = &[
-            "Ei", "Pi", "Ti", "Gi", "Mi", "Ki", // 二进制后缀优先
-            "E", "P", "T", "G", "M", "K", // 十进制后缀
+            "Ei", "Pi", "Ti", "Gi", "Mi", "Ki", // binary (IEC) suffixes, checked first
+            "E", "P", "T", "G", "M", "K", // decimal suffixes
         ];
         if !VALID.contains(&suffix) {
             return false;
@@ -79,7 +79,7 @@
     true
 }

-/// 校验 Pool 卷数(与 CRD 一致:2 server 至少 4 卷,3 server 至少 6 卷,其余至少 4 卷)
+/// Validate pool volume count (same rules as CRD: 2 servers => min 4 vols, 3 servers => min 6, else min 4).
 fn validate_pool_volumes(servers: i32, volumes_per_server: i32) -> Result<i32> {
     let total = servers * volumes_per_server;
     if servers <= 0 || volumes_per_server <= 0 {
@@ -108,7 +108,7 @@
     Ok(total)
 }

-/// 列出 Tenant 的所有 Pools
+/// List pools for a tenant (from spec + StatefulSet status).
 pub async fn list_pools(
     Path((namespace, tenant_name)): Path<(String, String)>,
     Extension(claims): Extension<Claims>,
@@ -116,13 +116,13 @@
     let client = create_client(&claims).await?;
     let tenant_api: Api<Tenant> = Api::namespaced(client.clone(), &namespace);

-    // 获取 Tenant
+    // Load the Tenant
     let tenant = tenant_api
         .get(&tenant_name)
         .await
         .map_err(|e| error::map_kube_error(e, format!("Tenant '{}'", tenant_name)))?;

-    // 获取所有 StatefulSets
+    // List StatefulSets in the namespace
     let ss_api: Api = Api::namespaced(client, &namespace);
     let statefulsets = ss_api
         .list(&ListParams::default().labels(&format!("rustfs.tenant={}", tenant_name)))
@@ -136,7 +136,7 @@
     for pool in &tenant.spec.pools {
         let ss_name = format!("{}-{}", tenant_name, pool.name);

-        // 查找对应的 StatefulSet
+        // Match the StatefulSet for this pool name
         let ss = statefulsets
             .items
             .iter()
@@ -173,7 +173,7 @@
             (0, 0, 0, None, None, "NotCreated".to_string())
         };

-        // 获取存储配置
+        // PVC template / storage size
         let storage_class = pool
             .persistence
             .volume_claim_template
@@ -219,7 +219,7 @@
     }))
 }

-/// 添加新的 Pool 到 Tenant(乐观锁重试)
+/// Append a pool to `Tenant.spec.pools` with optimistic-lock retries.
 pub async fn add_pool(
     Path((namespace, tenant_name)): Path<(String, String)>,
     Extension(claims): Extension<Claims>,
@@ -228,7 +228,7 @@
     let client = create_client(&claims).await?;
     let tenant_api: Api<Tenant> = Api::namespaced(client, &namespace);

-    // 输入校验(pool name 需符合 K8s 资源命名)
+    // Validate pool name and quantities
     let pool_name = req.name.trim();
     if !is_valid_k8s_name(pool_name) {
         return Err(Error::BadRequest {
@@ -248,7 +248,7 @@
     }
     let total_volumes = validate_pool_volumes(req.servers, req.volumes_per_server)?;

-    // 构建新的 Pool
+    // Build the new Pool spec
     let new_pool = Pool {
         name: pool_name.to_string(),
         servers: req.servers,
@@ -320,7 +320,7 @@
         },
     };

-    // 乐观锁重试:get -> 校验 -> push -> replace,409 时重试
+    // Optimistic loop: get -> validate -> push -> replace; retry on 409
     const MAX_RETRIES: u32 = 3;
     let mut last_conflict = None;
     for _ in 0..MAX_RETRIES {
@@ -376,7 +376,7 @@
     }))
 }

-/// 删除 Pool(乐观锁重试)
+/// Remove a pool from the tenant with optimistic-lock retries.
 pub async fn delete_pool(
     Path((namespace, tenant_name, pool_name)): Path<(String, String, String)>,
     Extension(claims): Extension<Claims>,
@@ -440,7 +440,7 @@
     }))
 }

-/// 创建 Kubernetes 客户端
+/// Build a client using the session bearer token.
 async fn create_client(claims: &Claims) -> Result<Client> {
     let mut config = kube::Config::infer()
         .await
diff --git a/src/console/handlers/tenants.rs b/src/console/handlers/tenants.rs
index 9fe40d8..846211a 100755
--- a/src/console/handlers/tenants.rs
+++ b/src/console/handlers/tenants.rs
@@ -51,7 +51,7 @@
     Ok(Json(TenantListResponse { tenants: items }))
 }

-/// 按命名空间列出 Tenants
+/// List tenants in one namespace.
 pub async fn list_tenants_by_namespace(
     Path(namespace): Path<String>,
     Query(query): Query<TenantListQuery>,
@@ -70,7 +70,7 @@
     Ok(Json(TenantListResponse { tenants: items }))
 }

-/// 统计所有命名空间中 Tenant 的状态数量
+/// Count tenants by state across all namespaces.
 pub async fn get_all_tenant_state_counts(
     Extension(claims): Extension<Claims>,
 ) -> Result<Json<TenantStateCountsResponse>> {
@@ -85,7 +85,7 @@
     Ok(Json(summarize_tenant_states(&tenants.items)))
 }

-/// 统计指定命名空间中 Tenant 的状态数量
+/// Count tenants by state in one namespace.
 pub async fn get_tenant_state_counts_by_namespace(
     Path(namespace): Path<String>,
     Extension(claims): Extension<Claims>,
@@ -101,7 +101,7 @@
     Ok(Json(summarize_tenant_states(&tenants.items)))
 }

-/// 获取 Tenant 详情
+/// Full tenant detail including Services.
 pub async fn get_tenant_details(
     Path((namespace, name)): Path<(String, String)>,
     Extension(claims): Extension<Claims>,
@@ -114,7 +114,7 @@
         .await
         .map_err(|e| error::map_kube_error(e, format!("Tenant '{}'", name)))?;

-    // 获取 Services
+    // List tenant-scoped Services
     let svc_api: Api<corev1::Service> = Api::namespaced(client, &namespace);
     let services = svc_api
         .list(&ListParams::default().labels(&format!("rustfs.tenant={}", name)))
@@ -188,18 +188,18 @@
     }))
 }

-/// 创建 Tenant
+/// Create a Tenant CR (and namespace if missing).
 pub async fn create_tenant(
     Extension(claims): Extension<Claims>,
     Json(req): Json<CreateTenantRequest>,
 ) -> Result> {
     let client = create_client(&claims).await?;

-    // 检查 Namespace 是否存在
+    // Ensure the namespace exists
     let ns_api: Api<corev1::Namespace> = Api::all(client.clone());
     let ns_exists = ns_api.get(&req.namespace).await.is_ok();

-    // 如果不存在则创建
+    // Create it when absent
     if !ns_exists {
         let ns = corev1::Namespace {
             metadata: k8s_openapi::apimachinery::pkg::apis::meta::v1::ObjectMeta {
@@ -214,7 +214,7 @@
             .map_err(|e| error::map_kube_error(e, format!("Namespace '{}'", req.namespace)))?;
     }

-    // 构造 Tenant CRD
+    // Build the Tenant object
     let pools: Vec<Pool> = req
         .pools
         .into_iter()
@@ -305,7 +305,7 @@
     }))
 }

-/// 删除 Tenant
+/// Delete a Tenant CR.
 pub async fn delete_tenant(
     Path((namespace, name)): Path<(String, String)>,
     Extension(claims): Extension<Claims>,
@@ -323,7 +323,7 @@
     }))
 }

-/// 更新 Tenant
+/// Patch selected spec fields on a Tenant.
 pub async fn update_tenant(
     Path((namespace, name)): Path<(String, String)>,
     Extension(claims): Extension<Claims>,
@@ -332,13 +332,13 @@
     let client = create_client(&claims).await?;
     let api: Api<Tenant> = Api::namespaced(client, &namespace);

-    // 获取当前 Tenant
+    // Load the current object
     let mut tenant = api
         .get(&name)
         .await
         .map_err(|e| error::map_kube_error(e, format!("Tenant '{}'", name)))?;

-    // 应用更新(仅更新提供的字段)
+    // Merge only the provided fields
     let mut updated_fields = Vec::new();

     if let Some(image) = req.image {
@@ -442,7 +442,7 @@
         });
     }

-    // 提交更新
+    // Submit the update
     let updated_tenant = api
         .replace(&name, &Default::default(), &tenant)
         .await
@@ -477,7 +477,7 @@
     }))
 }

-/// 获取 Tenant YAML
+/// Return the serialized Tenant manifest.
 pub async fn get_tenant_yaml(
     Path((namespace, name)): Path<(String, String)>,
     Extension(claims): Extension<Claims>,
@@ -500,7 +500,7 @@
     Ok(Json(TenantYAML { yaml: yaml_str }))
 }

-/// 更新 Tenant YAML
+/// Update a Tenant from submitted YAML.
 pub async fn put_tenant_yaml(
     Path((namespace, name)): Path<(String, String)>,
     Extension(claims): Extension<Claims>,
@@ -584,7 +584,7 @@
     Ok(Json(TenantYAML { yaml: yaml_str }))
 }

-/// 创建 Kubernetes 客户端
+/// Build a client using the session bearer token.
 async fn create_client(claims: &Claims) -> Result<Client> {
     let mut config = kube::Config::infer()
         .await
diff --git a/src/console/handlers/topology.rs b/src/console/handlers/topology.rs
index fa55361..a7c97c3 100644
--- a/src/console/handlers/topology.rs
+++ b/src/console/handlers/topology.rs
@@ -27,13 +27,13 @@ use k8s_openapi::api::core::v1 as corev1;
 use kube::{Api, Client, ResourceExt, api::ListParams};
 use std::collections::BTreeMap;

-/// 获取集群拓扑总览
+/// Aggregated topology for the dashboard (nodes, namespaces, tenants, pods).
 pub async fn get_topology_overview(
     Extension(claims): Extension<Claims>,
 ) -> Result> {
     let client = create_client(&claims).await?;

-    // 并行获取 nodes, tenants, pods
+    // Fetch nodes, tenants, and labeled pods concurrently
     let node_api: Api<corev1::Node> = Api::all(client.clone());
     let tenant_api: Api<Tenant> = Api::all(client.clone());
     let pod_api: Api<corev1::Pod> = Api::all(client.clone());
@@ -52,7 +52,7 @@
     let k8s_tenants = tenants_result.map_err(|e| error::map_kube_error(e, "Tenants"))?;
     let k8s_pods = pods_result.map_err(|e| error::map_kube_error(e, "Pods"))?;

-    // 构建节点列表 + 集群资源汇总
+    // Build the node list and sum cluster capacity
     let mut total_cpu_m: i64 = 0;
     let mut total_mem_b: i64 = 0;
     let mut alloc_cpu_m: i64 = 0;
@@ -123,7 +123,7 @@
             })
             .unwrap_or_default();

-        // 累加集群资源
+        // Sum cluster-wide CPU/memory
         total_cpu_m += parse_cpu_to_millicores(&cpu_cap);
         total_mem_b += parse_memory_to_bytes(&mem_cap);
         alloc_cpu_m += parse_cpu_to_millicores(&cpu_alloc);
@@ -141,7 +141,7 @@
         })
         .collect();

-    // 按 (namespace, tenant_name) 索引 pods
+    // Index pods by (namespace, tenant name)
     let mut pod_index: BTreeMap<(String, String), Vec> = BTreeMap::new();
     for pod in &k8s_pods.items {
         let labels = pod.metadata.labels.as_ref();
@@ -180,7 +180,7 @@
         });
     }

-    // 按 namespace 分组 tenants
+    // Group tenants by namespace
     let mut ns_map: BTreeMap> = BTreeMap::new();
     for t in &k8s_tenants.items {
         let ns = t.namespace().unwrap_or_default();
@@ -215,7 +215,7 @@
             .as_ref()
             .map(|ts| ts.0.to_rfc3339());

-        // Pool 信息
+        // Per-pool rows from spec + status
         let pools: Vec = t
             .spec
             .pools
@@ -251,7 +251,7 @@
             })
             .collect();

-        // Tenant 摘要
+        // Tenant card summary
         let pool_count = pools.len();
         let total_replicas: i32 = pools.iter().map(|p| p.replicas).sum();
         let total_capacity_bytes: i64 = t
@@ -268,7 +268,7 @@ pub async fn get_topology_overview(
         let console_endpoint = Some(format!("http://{}-console.{}.svc:9001", name, namespace));

-        // 匹配 pods
+        // Attach the pods collected earlier
         let key = (namespace.clone(), name.clone());
         let tenant_pods = pod_index.remove(&key);
@@ -302,7 +302,7 @@
         })
         .collect();

-    // 集群信息
+    // Cluster header + rolled-up stats
     let cluster = TopologyCluster {
         id: "rustfs-cluster".to_string(),
         name: std::env::var("CLUSTER_NAME").unwrap_or_else(|_| "RustFS Cluster".to_string()),
@@ -326,12 +326,12 @@
     }))
 }

-/// 判断 Tenant 状态是否健康
+/// Whether the tenant aggregate state counts as healthy for the UI.
 fn is_healthy_state(state: &str) -> bool {
     matches!(state, "Ready" | "Initialized")
 }

-/// 将 PoolState 映射到前端状态字符串
+/// Map operator `PoolState` to a short UI label.
 fn map_pool_state(state: &PoolState) -> String {
     match state {
         PoolState::Created | PoolState::Initialized | PoolState::RolloutComplete => {
@@ -343,7 +343,7 @@
     }
 }

-/// 从 PersistenceConfig 获取每个 volume 的字节数
+/// Bytes per PVC volume from `PersistenceConfig` (default 10Gi).
 fn get_per_volume_bytes(
     persistence: &crate::types::v1alpha1::persistence::PersistenceConfig,
 ) -> i64 {
@@ -359,7 +359,7 @@
     .unwrap_or(DEFAULT_BYTES)
 }

-/// 将字节数格式化为可读存储字符串(优先 TiB, GiB)
+/// Human-readable storage size (prefer TiB/GiB).
 fn format_storage_bytes(b: i64) -> String {
     const TIB: i64 = 1024 * 1024 * 1024 * 1024;
     const GIB: i64 = 1024 * 1024 * 1024;
@@ -383,7 +383,7 @@
     }
 }

-/// 获取 Kubernetes 集群版本
+/// Kubernetes apiserver version (major.minor).
 async fn get_cluster_version(client: &Client) -> String {
     match client.apiserver_version().await {
         Ok(info) => format!("v{}.{}", info.major, info.minor),
@@ -391,7 +391,7 @@
 }

-/// 创建 Kubernetes 客户端
+/// Build a client using the session bearer token.
 async fn create_client(claims: &Claims) -> Result<Client> {
     let mut config = kube::Config::infer()
         .await
diff --git a/src/console/middleware/auth.rs b/src/console/middleware/auth.rs
index ed1024a..ba3a41b 100755
--- a/src/console/middleware/auth.rs
+++ b/src/console/middleware/auth.rs
@@ -22,19 +22,19 @@ use jsonwebtoken::{DecodingKey, Validation, decode};
 use crate::console::state::{AppState, Claims};

-/// JWT 认证中间件
+/// JWT session middleware.
 ///
-/// 从 Cookie 中提取 JWT Token,验证后将 Claims 注入到请求扩展中
+/// Reads the `session` cookie, validates the JWT, and inserts `Claims` into request extensions.
 pub async fn auth_middleware(
     State(state): State<AppState>,
     mut request: Request,
     next: Next,
 ) -> Result<Response, StatusCode> {
-    // 跳过 OPTIONS(CORS 预检),避免 401 导致浏览器报 CORS 错误
+    // Allow CORS preflight without 401 (the browser would report it as a CORS failure)
     if request.method() == Method::OPTIONS {
         return Ok(next.run(request).await);
     }

-    // 跳过公开路径
+    // Unauthenticated paths
     let path = request.uri().path();
     if path == "/healthz"
         || path == "/readyz"
@@ -45,7 +45,7 @@
         return Ok(next.run(request).await);
     }

-    // 从 Cookie 中提取 Token
+    // Parse the session cookie
     let cookies = request
         .headers()
         .get(header::COOKIE)
@@ -54,7 +54,7 @@
     let token = parse_session_cookie(cookies).ok_or(StatusCode::UNAUTHORIZED)?;

-    // 验证 JWT
+    // Verify the JWT signature and claims
     let claims = decode::<Claims>(
         &token,
         &DecodingKey::from_secret(state.jwt_secret.as_bytes()),
@@ -66,20 +66,20 @@
     })?
         .claims;

-    // 检查过期时间
+    // Reject expired tokens
     let now = chrono::Utc::now().timestamp() as usize;
     if claims.exp < now {
         tracing::warn!("Token expired");
         return Err(StatusCode::UNAUTHORIZED);
     }

-    // 将 Claims 注入请求扩展
+    // Stash the claims for handlers
     request.extensions_mut().insert(claims);

     Ok(next.run(request).await)
 }

-/// 从 Cookie 字符串中解析 session token
+/// Extract the session token from a raw `Cookie` header value.
 fn parse_session_cookie(cookies: &str) -> Option<String> {
     cookies.split(';').find_map(|cookie| {
         let parts: Vec<&str> = cookie.trim().splitn(2, '=').collect();
diff --git a/src/console/mod.rs b/src/console/mod.rs
index d223c5e..c46315e 100755
--- a/src/console/mod.rs
+++ b/src/console/mod.rs
@@ -12,9 +12,9 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-//! Console 模块
+//! Console HTTP API module.
 //!
-//! RustFS Operator Console - Web 管理界面
+//! RustFS Operator web management API (Axum).

 pub mod error;
 pub mod handlers;
diff --git a/src/console/models/auth.rs b/src/console/models/auth.rs
index 6b22eee..a8333cc 100755
--- a/src/console/models/auth.rs
+++ b/src/console/models/auth.rs
@@ -15,21 +15,21 @@
 use serde::{Deserialize, Serialize};
 use utoipa::ToSchema;

-/// 登录请求
+/// Login request body
 #[derive(Debug, Deserialize, ToSchema)]
 pub struct LoginRequest {
     /// Kubernetes ServiceAccount Token
     pub token: String,
 }

-/// 登录响应
+/// Login response
 #[derive(Debug, Serialize, ToSchema)]
 pub struct LoginResponse {
     pub success: bool,
     pub message: String,
 }

-/// 会话检查响应
+/// Session check response
 #[derive(Debug, Serialize, ToSchema)]
 pub struct SessionResponse {
     pub valid: bool,
diff --git a/src/console/models/cluster.rs b/src/console/models/cluster.rs
index 82a2a66..4b4e7aa 100755
--- a/src/console/models/cluster.rs
+++ b/src/console/models/cluster.rs
@@ -15,7 +15,7 @@
 use serde::Serialize;
 use utoipa::ToSchema;

-/// 节点信息
+/// Node summary for the cluster API
 #[derive(Debug, Serialize, ToSchema)]
 pub struct NodeInfo {
     pub name: String,
@@ -27,13 +27,13 @@
     pub memory_allocatable: String,
 }

-/// 节点列表响应
+/// Response listing cluster nodes
 #[derive(Debug, Serialize, ToSchema)]
 pub struct NodeListResponse {
     pub nodes: Vec<NodeInfo>,
 }

-/// Namespace 列表项
+/// Single namespace row in a list
 #[derive(Debug, Serialize, ToSchema)]
 pub struct NamespaceItem {
     pub name: String,
@@ -41,19 +41,19 @@
     pub created_at: Option<String>,
 }

-/// Namespace 列表响应
+/// Response listing namespaces
 #[derive(Debug, Serialize, ToSchema)]
 pub struct NamespaceListResponse {
     pub namespaces: Vec<NamespaceItem>,
 }

-/// 创建 Namespace 请求
+/// Request body to create a namespace
 #[derive(Debug, serde::Deserialize, ToSchema)]
 pub struct CreateNamespaceRequest {
     pub name: String,
 }

-/// 集群资源响应
+/// Aggregated cluster capacity / allocatable resources
 #[derive(Debug, Serialize, ToSchema)]
 pub struct ClusterResourcesResponse {
     pub total_nodes: usize,
diff --git a/src/console/models/event.rs b/src/console/models/event.rs
index 58c5804..dafa9f2 100755
--- a/src/console/models/event.rs
+++ b/src/console/models/event.rs
@@ -15,7 +15,7 @@
 use serde::Serialize;
 use utoipa::ToSchema;

-/// Event 列表项
+/// Single Kubernetes event row for the UI
 #[derive(Debug, Serialize, ToSchema)]
 pub struct EventItem {
     pub event_type: String,
@@ -27,7 +27,7 @@
     pub count: i32,
 }

-/// Event 列表响应
+/// Response listing events
 #[derive(Debug, Serialize, ToSchema)]
 pub struct EventListResponse {
     pub events: Vec<EventItem>,
diff --git a/src/console/models/pod.rs b/src/console/models/pod.rs
index a94f759..439baef 100755
--- a/src/console/models/pod.rs
+++ b/src/console/models/pod.rs
@@ -15,7 +15,7 @@
 use serde::{Deserialize, Serialize};
 use utoipa::ToSchema;

-/// Pod 列表项
+/// Single pod row in a tenant pod list
 #[derive(Debug, Serialize, ToSchema)]
 pub struct PodListItem {
     pub name: String,
@@ -29,13 +29,13 @@
     pub created_at: Option<String>,
 }

-/// Pod 列表响应
+/// Response listing pods
 #[derive(Debug, Serialize, ToSchema)]
 pub struct PodListResponse {
     pub pods: Vec<PodListItem>,
 }

-/// Pod 详情
+/// Full pod detail for the UI
 #[derive(Debug, Serialize, ToSchema)]
 pub struct PodDetails {
     pub name: String,
@@ -51,7 +51,7 @@
     pub created_at: Option<String>,
 }

-/// Pod 状态
+/// Phase, conditions, and networking summary
 #[derive(Debug, Serialize, ToSchema)]
 pub struct PodStatus {
     pub phase: String,
@@ -61,7 +61,7 @@
     pub start_time: Option<String>,
 }

-/// Pod 条件
+/// One Kubernetes pod condition
 #[derive(Debug, Serialize, ToSchema)]
 pub struct PodCondition {
     #[serde(rename = "type")]
@@ -72,7 +72,7 @@
     pub last_transition_time: Option<String>,
 }

-/// 容器信息
+/// Container status summary
 #[derive(Debug, Serialize, ToSchema)]
 pub struct ContainerInfo {
     pub name: String,
@@ -82,7 +82,7 @@
     pub state: ContainerState,
 }

-/// 容器状态
+/// Container lifecycle state
 #[derive(Debug, Serialize, ToSchema)]
 #[serde(tag = "status")]
 pub enum ContainerState {
@@ -100,7 +100,7 @@
     },
 }

-/// Volume 信息
+/// Volume mount / PVC reference
 #[derive(Debug, Serialize, ToSchema)]
 pub struct VolumeInfo {
     pub name: String,
@@ -108,35 +108,35 @@
     pub claim_name: Option<String>,
 }

-/// 删除 Pod 响应
+/// Response after deleting a pod
 #[derive(Debug, Serialize, ToSchema)]
 pub struct DeletePodResponse {
     pub success: bool,
     pub message: String,
 }

-/// 重启 Pod 请求
+/// Optional flags when restarting a pod (delete/recreate)
 #[derive(Debug, Deserialize, ToSchema)]
 pub struct RestartPodRequest {
     #[serde(default)]
     pub force: bool,
 }

-/// Pod 日志请求参数
+/// Query parameters for pod log streaming
 #[derive(Debug, Deserialize, ToSchema)]
 pub struct LogsQuery {
-    /// 容器名称
+    /// Container name (if multi-container)
     pub container: Option<String>,
-    /// 尾部行数
+    /// Number of lines from the end of the log
     #[serde(default = "default_tail_lines")]
     pub tail_lines: i64,
-    /// 是否跟随
+    /// Stream new lines (follow)
     #[serde(default)]
     pub follow: bool,
-    /// 显示时间戳
+    /// Prefix each line with a timestamp
     #[serde(default)]
     pub timestamps: bool,
-    /// 从指定时间开始(RFC3339 格式)
+    /// Only log lines after this instant (RFC3339)
     pub since_time: Option<String>,
 }
diff --git a/src/console/models/pool.rs b/src/console/models/pool.rs
index 7305fb7..8f294d9 100755
--- a/src/console/models/pool.rs
+++ b/src/console/models/pool.rs
@@ -15,7 +15,7 @@
 use serde::{Deserialize, Serialize};
 use utoipa::ToSchema;

-/// Pool 信息(扩展版)
+/// Extended pool details for list/detail views
 #[derive(Debug, Serialize, ToSchema)]
 pub struct PoolDetails {
     pub name: String,
@@ -33,13 +33,13 @@
     pub created_at: Option<String>,
 }

-/// Pool 列表响应
+/// Response listing pools for a tenant
 #[derive(Debug, Serialize, ToSchema)]
 pub struct PoolListResponse {
     pub pools: Vec<PoolDetails>,
 }

-/// 添加 Pool 请求
+/// Request body to add a pool to a tenant
 #[derive(Debug, Deserialize, ToSchema)]
 #[serde(rename_all = "camelCase")]
 pub struct AddPoolRequest {
@@ -49,26 +49,26 @@
     pub storage_size: String,
     pub storage_class: Option<String>,

-    // 可选的调度配置
+    // Optional scheduling overrides
     pub node_selector: Option>,
     pub resources: Option<ResourceRequirements>,
 }

-/// 资源需求
+/// CPU/memory requests and limits
 #[derive(Debug, Deserialize, Serialize, ToSchema)]
 pub struct ResourceRequirements {
     pub requests: Option<ResourceList>,
     pub limits: Option<ResourceList>,
 }

-/// 资源列表
+/// Named resource quantities (e.g. cpu, memory)
 #[derive(Debug, Deserialize, Serialize, ToSchema)]
 pub struct ResourceList {
     pub cpu: Option<String>,
     pub memory: Option<String>,
 }

-/// 删除 Pool 响应
+/// Response after deleting a pool
 #[derive(Debug, Serialize, ToSchema)]
 pub struct DeletePoolResponse {
     pub success: bool,
@@ -76,7 +76,7 @@
     pub warning: Option<String>,
 }

-/// Pool 添加响应
+/// Response after adding a pool
 #[derive(Debug, Serialize, ToSchema)]
 pub struct AddPoolResponse {
     pub success: bool,
diff --git a/src/console/models/tenant.rs b/src/console/models/tenant.rs
index 5b194c2..f8b1131 100755
--- a/src/console/models/tenant.rs
+++ b/src/console/models/tenant.rs
@@ -15,7 +15,7 @@
 use serde::{Deserialize, Serialize};
 use utoipa::ToSchema;

-/// Tenant 列表项
+/// Single tenant row in a list view
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TenantListItem {
     pub name: String,
@@ -25,7 +25,7 @@
     pub created_at: Option<String>,
 }

-/// Pool 信息
+/// Pool summary embedded in tenant list/detail
 #[derive(Debug, Serialize, ToSchema)]
 pub struct PoolInfo {
     pub name: String,
@@ -33,29 +33,29 @@
     pub volumes_per_server: i32,
 }

-/// Tenant 列表响应
+/// Response listing tenants
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TenantListResponse {
     pub tenants: Vec<TenantListItem>,
 }

-/// Tenant 列表查询参数
+/// Query parameters for listing tenants
 #[derive(Debug, Deserialize, ToSchema, Default)]
 pub struct TenantListQuery {
-    /// 按状态过滤(大小写不敏感)
+    /// Filter by tenant state (case-insensitive)
     pub state: Option<String>,
 }

-/// Tenant 状态统计响应
+/// Per-state tenant counts
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TenantStateCountsResponse {
-    /// Tenant 总数
+    /// Total number of tenants
     pub total: u32,
-    /// 各状态对应的数量,例如 Ready/Updating/Degraded/NotReady/Unknown
+    /// Counts keyed by state, e.g. Ready/Updating/Degraded/NotReady/Unknown
     pub counts: std::collections::BTreeMap<String, u32>,
 }

-/// Tenant 详情响应
+/// Full tenant detail for the UI
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TenantDetailsResponse {
     pub name: String,
@@ -68,7 +68,7 @@
     pub services: Vec<ServiceInfo>,
 }

-/// Service 信息
+/// Exposed Service summary
 #[derive(Debug, Serialize, ToSchema)]
 pub struct ServiceInfo {
     pub name: String,
@@ -76,7 +76,7 @@
     pub ports: Vec<ServicePort>,
 }

-/// Service 端口信息
+/// Port mapping for a Service
 #[derive(Debug, Serialize, ToSchema)]
 pub struct ServicePort {
     pub name: String,
@@ -94,7 +94,7 @@ pub struct CreateSecurityContextRequest {
     pub run_as_non_root: Option<bool>,
 }

-/// 创建 Tenant 请求
+/// Request body to create a tenant
 #[derive(Debug, Deserialize, ToSchema)]
 pub struct CreateTenantRequest {
     pub name: String,
@@ -107,7 +107,7 @@
     pub security_context: Option<CreateSecurityContextRequest>,
 }

-/// 创建 Pool 请求
+/// Pool spec embedded in a create-tenant request
 #[derive(Debug, Deserialize, ToSchema)]
 pub struct CreatePoolRequest {
     pub name: String,
@@ -117,47 +117,47 @@
     pub storage_class: Option<String>,
 }

-/// 删除 Tenant 响应
+/// Response after deleting a tenant
 #[derive(Debug, Serialize, ToSchema)]
 pub struct DeleteTenantResponse {
     pub success: bool,
     pub message: String,
 }

-/// 更新 Tenant 请求
+/// Partial update payload for a tenant
 #[derive(Debug, Deserialize, ToSchema)]
 #[serde(rename_all = "camelCase")]
 pub struct UpdateTenantRequest {
-    /// 更新镜像版本
+    /// New container image
     pub image: Option<String>,
-    /// 更新挂载路径
+    /// New volume mount path
     pub mount_path: Option<String>,
-    /// 更新环境变量
+    /// Replace environment variables
     pub env: Option>,
-    /// 更新凭证 Secret
+    /// Reference to the credentials Secret
     pub creds_secret: Option<String>,
-    /// 更新 Pod 管理策略
+    /// Pod management policy
     pub pod_management_policy: Option<String>,
-    /// 更新镜像拉取策略
+    /// Image pull policy
     pub image_pull_policy: Option<String>,
-    /// 更新日志配置
+    /// Logging sidecar / volume settings
     pub logging: Option,
 }

-/// 环境变量
+/// Key/value environment variable
 #[derive(Debug, Deserialize, Serialize, ToSchema)]
 pub struct EnvVar {
     pub name: String,
     pub value: Option,
 }

-/// 日志配置
+/// Tenant logging configuration
 #[derive(Debug, Deserialize, Serialize, ToSchema)]
 #[serde(rename_all = "camelCase")]
 pub struct LoggingConfig {
@@ -166,7 +166,7 @@
     pub storage_class: Option,
 }

-/// 更新 Tenant 响应
+/// Response after updating a tenant
 #[derive(Debug, Serialize, ToSchema)]
 pub struct UpdateTenantResponse {
     pub success: bool,
@@ -174,7 +174,7 @@
     pub tenant: TenantListItem,
 }

-/// Tenant YAML 请求/响应
+/// Raw Tenant manifest get/update payload
 #[derive(Debug, Deserialize, Serialize, ToSchema)]
 pub struct TenantYAML {
     pub yaml: String,
diff --git a/src/console/models/topology.rs b/src/console/models/topology.rs
index 4465141..bffa49a 100644
--- a/src/console/models/topology.rs
+++ b/src/console/models/topology.rs
@@ -15,7 +15,7 @@
 use serde::Serialize;
 use utoipa::ToSchema;

-/// 拓扑总览响应
+/// Topology overview: cluster, namespaces, nodes
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TopologyOverviewResponse {
     pub cluster: TopologyCluster,
@@ -23,7 +23,7 @@
     pub nodes: Vec,
 }

-/// 集群信息
+/// Cluster identity and version
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TopologyCluster {
     pub id: String,
@@ -32,7 +32,7 @@
     pub summary: TopologyClusterSummary,
 }

-/// 集群摘要
+/// Rolled-up capacity and tenant health counts
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TopologyClusterSummary {
     pub nodes: usize,
@@ -45,7 +45,7 @@
     pub allocatable_memory: String,
 }

-/// 命名空间拓扑
+/// Namespace with nested tenants
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TopologyNamespace {
     pub name: String,
@@ -54,7 +54,7 @@
     pub tenants: Vec,
 }

-/// Tenant 拓扑
+/// Tenant node in the topology tree
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TopologyTenant {
     pub name: String,
@@ -68,7 +68,7 @@
     pub pods: Option>,
 }

-/// Tenant 摘要
+/// Short tenant stats for topology cards
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TopologyTenantSummary {
     pub pool_count: usize,
@@ -79,7 +79,7 @@
     pub console_endpoint: Option,
 }

-/// Pool 拓扑
+/// Pool row under a tenant
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TopologyPool {
     pub name: String,
@@ -90,7 +90,7 @@
     pub capacity: String,
 }

-/// Pod 拓扑
+/// Pod row under a tenant
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TopologyPod {
     pub name: String,
@@ -100,7 +100,7 @@
     pub node: Option,
 }

-/// 节点信息
+/// Node row for topology sidebar
 #[derive(Debug, Serialize, ToSchema)]
 pub struct TopologyNode {
     pub name: String,
diff --git a/src/console/routes/mod.rs b/src/console/routes/mod.rs
index 57dde57..20f8432 100755
--- a/src/console/routes/mod.rs
+++ b/src/console/routes/mod.rs
@@ -19,7 +19,7 @@ use axum::{

 use crate::console::{handlers, state::AppState};

-/// 认证路由
+/// Login / session routes (partially unauthenticated)
 pub fn auth_routes() -> Router {
     Router::new()
         .route("/login", post(handlers::auth::login))
@@ -27,7 +27,7 @@ pub fn auth_routes() -> Router {
         .route("/session", get(handlers::auth::session_check))
 }

-/// Tenant 管理路由
+/// Tenant CRUD, YAML, encryption, security context
 pub fn tenant_routes() -> Router {
     Router::new()
         .route("/tenants", get(handlers::tenants::list_all_tenants))
@@ -82,7 +82,7 @@ pub fn tenant_routes() -> Router {
     )
 }

-/// Pool 管理路由
+/// Pool list / add / delete under a tenant
 pub fn pool_routes() -> Router {
     Router::new()
         .route(
@@ -99,7 +99,7 @@ pub fn pool_routes() -> Router {
     )
 }

-/// Pod 管理路由
+/// Pod list, detail, delete, restart, logs
 pub fn pod_routes() -> Router {
     Router::new()
         .route(
@@ -124,7 +124,7 @@ pub fn pod_routes() -> Router {
     )
 }

-/// 事件管理路由
+/// Kubernetes events for a tenant
 pub fn event_routes() -> Router {
     Router::new().route(
         "/namespaces/:namespace/tenants/:tenant/events",
@@ -132,7 +132,7 @@ pub fn event_routes() -> Router {
     )
 }

-/// 集群资源路由
+/// Nodes, cluster capacity, namespaces
 pub fn cluster_routes() -> Router {
     Router::new()
         .route("/cluster/nodes", get(handlers::cluster::list_nodes))
@@ -144,7 +144,7 @@ pub fn cluster_routes() -> Router {
         .route("/namespaces", post(handlers::cluster::create_namespace))
 }

-/// 拓扑总览路由
+/// Topology overview for the dashboard
 pub fn topology_routes() -> Router {
     Router::new().route(
         "/topology/overview",
diff --git a/src/console/server.rs b/src/console/server.rs
index a11f962..806c016 100755
--- a/src/console/server.rs
+++ b/src/console/server.rs
@@ -48,11 +48,11 @@ fn cors_allowed_origins() -> Vec {
     if parsed.is_empty() { default } else { parsed }
 }

-/// 启动 Console HTTP Server
+/// Start the Console HTTP server (Axum).
 pub async fn run(port: u16) -> Result<(), Box> {
     tracing::info!("Starting RustFS Operator Console on port {}", port);

-    // 生成 JWT 密钥 (实际生产应从环境变量读取)
+    // JWT signing secret (set JWT_SECRET in production)
     let jwt_secret = std::env::var("JWT_SECRET")
         .unwrap_or_else(|_| "rustfs-console-secret-change-me-in-production".to_string());
@@ -60,18 +60,18 @@

     let cors_origins = cors_allowed_origins();

-    // 构建应用。CorsLayer 放在最外层,使 OPTIONS 预检由 CORS 直接响应,避免被 auth 或路由影响。
+    // CorsLayer is outermost so OPTIONS preflight is answered by CORS before auth/routing.
     let app = Router::new()
-        // 健康检查 (无需认证)
+        // Liveness (unauthenticated)
         .route("/healthz", get(health_check))
         .route("/readyz", get(ready_check))
-        // Swagger UI (无需认证)
+        // OpenAPI / Swagger (unauthenticated)
         .merge(SwaggerUi::new("/swagger-ui").url("/api-docs/openapi.json", ApiDoc::openapi()))
-        // API v1 路由
+        // REST API v1
        .nest("/api/v1", api_routes())
-        // 应用状态
+        // Shared state
         .with_state(state.clone())
-        // 中间件:最后添加的最先执行,故请求顺序为 Trace -> Compression -> Cors -> auth
+        // Middleware runs in reverse order: Trace -> Compression -> Cors -> auth
         .layer(middleware::from_fn_with_state(
             state.clone(),
             crate::console::middleware::auth::auth_middleware,
@@ -92,7 +92,7 @@
         .layer(CompressionLayer::new())
         .layer(TraceLayer::new_for_http());

-    // 启动服务器
+    // Bind and serve
     let addr = std::net::SocketAddr::from(([0, 0, 0, 0], port));
     let listener = tokio::net::TcpListener::bind(addr).await?;
@@ -107,7 +107,7 @@
     Ok(())
 }

-/// API 路由组合
+/// Merge all `/api/v1` route trees.
 fn api_routes() -> Router {
     Router::new()
         .merge(routes::auth_routes())
@@ -119,7 +119,7 @@
         .merge(routes::topology_routes())
 }

-/// 健康检查
+/// Liveness probe: always OK if process runs.
 async fn health_check() -> impl IntoResponse {
     let since_epoch = std::time::SystemTime::now()
         .duration_since(std::time::UNIX_EPOCH)
@@ -127,7 +127,7 @@
     (StatusCode::OK, format!("OK: {}", since_epoch.as_secs()))
 }

-/// 就绪检查:验证 K8s API 可连通
+/// Readiness: Kubernetes API reachable.
 async fn ready_check() -> impl IntoResponse {
     match check_k8s_connectivity().await {
         Ok(()) => (StatusCode::OK, "Ready".to_string()),
@@ -138,7 +138,7 @@
     }
 }

-/// 验证 K8s 连接:加载配置、创建客户端、执行轻量级 API 调用
+/// Load kubeconfig, build client, list namespaces (limit 1).
 async fn check_k8s_connectivity() -> Result<(), String> {
     let config = kube::Config::infer()
         .await
diff --git a/src/console/state.rs b/src/console/state.rs
index b6b1fce..39f2cac 100755
--- a/src/console/state.rs
+++ b/src/console/state.rs
@@ -14,17 +14,17 @@

 use std::sync::Arc;

-/// Console 应用状态
+/// Shared Axum application state.
 ///
-/// 包含 JWT 密钥等全局配置
+/// Holds global config such as the JWT signing secret.
 #[derive(Clone)]
 pub struct AppState {
-    /// JWT 签名密钥
+    /// Symmetric key for signing session JWTs
     pub jwt_secret: Arc,
 }

 impl AppState {
-    /// 创建新的应用状态
+    /// Build state with the given JWT secret
     pub fn new(jwt_secret: String) -> Self {
         Self {
             jwt_secret: Arc::new(jwt_secret),
@@ -37,14 +37,14 @@
 pub struct Claims {
     /// Kubernetes ServiceAccount Token
     pub k8s_token: String,
-    /// Token 过期时间 (Unix timestamp)
+    /// Expiry time (Unix seconds)
     pub exp: usize,
-    /// Token 签发时间
+    /// Issued-at time (Unix seconds)
     pub iat: usize,
 }

 impl Claims {
-    /// 创建新的 Claims (12 小时有效期)
+    /// New claims with a 12-hour lifetime
     pub fn new(k8s_token: String) -> Self {
         let now = chrono::Utc::now().timestamp() as usize;
         Self {
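For context on the `Claims` hunks above: `iat` is the current Unix time and `exp` is `iat` plus 12 hours, both in seconds. A dependency-free sketch of that computation using `std::time` instead of `chrono`; the constant name `SESSION_LIFETIME_SECS` is mine, not from the source:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

const SESSION_LIFETIME_SECS: usize = 12 * 60 * 60; // 12 hours

/// Mirror of the JWT claims in `src/console/state.rs`.
struct Claims {
    k8s_token: String,
    exp: usize, // expiry, Unix seconds
    iat: usize, // issued-at, Unix seconds
}

impl Claims {
    fn new(k8s_token: String) -> Self {
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("clock before Unix epoch")
            .as_secs() as usize;
        Claims { k8s_token, iat: now, exp: now + SESSION_LIFETIME_SECS }
    }
}

fn main() {
    let c = Claims::new("sa-token".into());
    assert_eq!(c.exp - c.iat, 43_200);
    println!("token for {} valid {}s", c.k8s_token, c.exp - c.iat);
}
```

Note the token itself is carried inside the claims (`k8s_token`), so validating the JWT signature is what protects the embedded ServiceAccount token.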