diff --git a/platform-enterprise_docs/enterprise-sidebar.json b/platform-enterprise_docs/enterprise-sidebar.json index 5cd7691ca..5c6597971 100644 --- a/platform-enterprise_docs/enterprise-sidebar.json +++ b/platform-enterprise_docs/enterprise-sidebar.json @@ -222,15 +222,23 @@ }, { "type": "category", - "label": "Seqera AI CLI", + "label": "Co-Scientist", "link": {"type": "doc", "id": "seqera-ai/index"}, "collapsed": true, "items": [ "seqera-ai/get-started", "seqera-ai/installation", "seqera-ai/authentication", - "seqera-ai/command-approval", + "seqera-ai/usage-and-cost", + "seqera-ai/skills", + "seqera-ai/modes", "seqera-ai/use-cases", + "seqera-ai/command-approval", + "seqera-ai/skill-claude-code", + "seqera-ai/skill-codex", + "seqera-ai/skill-github-copilot", + "seqera-ai/skill-other-agents", + "seqera-ai/nextflow-lsp", "seqera-ai/projects" ] }, diff --git a/platform-enterprise_docs/enterprise/install-seqera-ai.md b/platform-enterprise_docs/enterprise/install-seqera-ai.md index cb4a394f2..de8d6012e 100644 --- a/platform-enterprise_docs/enterprise/install-seqera-ai.md +++ b/platform-enterprise_docs/enterprise/install-seqera-ai.md @@ -1,233 +1,271 @@ --- title: "Seqera AI" description: Install and configure Seqera AI for Seqera Platform Enterprise -date: "2026-02-03" +date created: "2026-05-04" +last updated: "2026-05-05" tags: [seqera-ai, installation, deployment, aws, helm] --- :::caution -Seqera AI requires Seqera Platform Enterprise 26.1 or later for the agent backend, MCP server, portal web interface, and CLI integration. +Seqera AI requires Seqera Platform Enterprise 25.3.6 or later. This guide covers the Enterprise 26.1 deployment path for the agent backend, MCP server, portal web interface, and CLI. ::: -Seqera AI is an intelligent command-line assistant that helps you build, run, and manage bioinformatics workflows. This guide describes how to deploy Seqera AI in a Seqera Enterprise deployment. +Seqera AI adds Co-Scientist to Seqera Platform Enterprise. 
Deploy the agent backend, Seqera MCP server, and portal web interface alongside Platform to provide CLI and browser-based AI assistance for workflows, data, projects, and Platform resources.

## Prerequisites

-Before you begin, you need:
+Before you begin, make sure you have:

-- **Seqera Enterprise 26.1+** deployed via [Helm](./platform-helm.md)
-- **MySQL 8.0+ database**
-- **API key** from a supported inference provider (see below)
-- **MCP server** deployed and accessible from your cluster
-- **OIDC-compatible identity provider** for the portal web interface, MCP server, and CLI login flow
-- **Token encryption key** for encrypting sensitive tokens at rest. Generate with:
+- Seqera Platform Enterprise 25.3.6 or later deployed with the [Seqera Platform Helm chart](./platform-helm.md).
+- Helm v3 and `kubectl` installed locally.
+- DNS names and TLS certificates for the Platform, agent backend, MCP server, and portal web interface hosts. By default, the Helm charts derive `mcp.<platformExternalDomain>`, `ai-api.<platformExternalDomain>`, and `ai.<platformExternalDomain>` from `global.platformExternalDomain`. Override `global.mcpDomain`, `global.agentBackendDomain`, and `global.portalWebDomain` if you use different hostnames.
+- Access to pull the images required by the Helm charts from the configured container registry, or mirrored copies in your internal registry. See [Seqera container images](./advanced-topics/seqera-container-images.md) and [Mirroring container images](./configuration/mirroring.md).
+- A MySQL 8.4 LTS-compatible database for the agent backend. You can use the same MySQL instance as Platform with a separate database and user, or a separate instance.
+- A Redis 7.2-compatible or Valkey 7.2-compatible instance for agent backend task coordination.
+- A stable Fernet token encryption key for the agent backend if you use Kustomize or need encrypted values to survive chart upgrades. Helm-only installs can let the chart generate this key, but explicitly setting it avoids accidental regeneration.
+- Access to the required Claude model or inference profile and the Amazon Titan embedding model in AWS Bedrock. See AWS documentation to [add or remove access to Amazon Bedrock foundation models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-access-modify.html). - ```bash - python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())" - ``` -- [Helm v3](https://helm.sh/docs/intro/install) and [kubectl](https://kubernetes.io/docs/tasks/tools/) installed locally +Generate a Fernet token encryption key when you set the key manually: -## Supported inference providers - -Seqera AI uses Claude models from Anthropic. The following inference providers are supported for Enterprise deployments: +```bash +python -c "from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())" +``` -| Inference provider | Description | -|--------------------|-------------| -| **Anthropic API** | Direct access to Claude models via Anthropic's API ([console.anthropic.com](https://console.anthropic.com/)) | -| **AWS Bedrock** | Access Claude models through [AWS Bedrock](https://aws.amazon.com/bedrock/) in your AWS account | +Store database passwords, Redis or Valkey passwords, OIDC token values, image pull credentials, and encryption keys in Kubernetes Secrets. Reference those Secrets from Helm values instead of committing plaintext values. -## Architecture +## Inference provider -Seqera AI connects your local CLI environment to your Platform resources through a secure backend service: +Seqera AI supports AWS Bedrock for Enterprise deployments. The agent backend uses Bedrock for Claude inference and Titan embeddings. -![Seqera AI infrastructure architecture](./_images/seqera-ai-infrastructure.png) +| Inference provider | Description | +| --- | --- | +| AWS Bedrock | Runs inference and embeddings in your AWS account. 
| -**Components:** +## Components | Component | Description | -|-----------|-------------| -| **Agent backend** | FastAPI service that orchestrates AI interactions. Deployed as a Helm subchart alongside Platform. | -| **MCP server** | Model Context Protocol server providing Platform-aware tools (workflows, datasets, compute environments). | -| **Portal web interface** | Browser-based interface for Seqera AI and related Platform features. | -| **MySQL database** | Dedicated database for session state and conversation history. **Separate from Platform database**. | +| --- | --- | +| Agent backend | FastAPI and LangGraph service that orchestrates Co-Scientist sessions, validates Platform tokens, calls AWS Bedrock, connects to MCP, and streams Server-Sent Events (SSE) to clients. | +| Seqera MCP server | Model Context Protocol server that exposes Platform-aware tools for workflows, datasets, compute environments, Wave, Hub, and nf-core. | +| Portal web interface | Browser interface for Co-Scientist chat, projects, thread history, report viewing, and related Platform workflows. | +| MySQL | Agent backend database for sessions, threads, token usage records, and conversation history. | +| Redis or Valkey | Agent backend queue and coordination store. | + +## Deployment topology -**Flow:** +The recommended Enterprise topology is a single Platform Helm release with the `mcp`, `agent-backend`, and `portal-web` subcharts enabled. The Platform Helm chart includes these subcharts, and the parent chart wires the MCP OIDC client registration token from the Platform backend Secret automatically. -1. Users authenticate via `seqera login`, which initiates OIDC authentication with Platform. -1. The CLI creates a session with the agent backend, passing the Platform access token. -1. The agent backend validates tokens against Platform's `/user-info` endpoint. -1. User prompts are processed by the inference provider, which can invoke Platform tools via MCP. -1. 
MCP tools execute Platform operations using the user's credentials. -1. Results stream back to the CLI via Server-Sent Events (SSE). +Use separate Helm releases only when your environment requires separate lifecycle ownership. If you deploy the charts separately, you must manually configure: + +- `global.platformServiceAddress` and `global.platformServicePort` so the agent backend and portal web interface can reach the internal Platform backend service. +- `mcp.oidcToken.existingSecretName` and `mcp.oidcToken.existingSecretKey` with the same OIDC client registration token configured for the Platform backend. +- Matching external DNS and TLS for `global.mcpDomain`, `global.agentBackendDomain`, and `global.portalWebDomain`. ## Configure Helm values -The Seqera AI components can be installed using the [Seqera Helm charts](https://github.com/seqeralabs/helm-charts). Refer to the examples in the repository for sample configurations. -Some values (like database passwords, API keys, sensitive OIDC settings, cryptographic keys) are recommended to be stored as Kubernetes secrets and referenced in the Helm values in production installations, rather than be specified as plain text. +Enable the three Seqera AI subcharts in your Platform values file. This example uses the Platform parent chart, so the same Helm release also deploys or upgrades Platform. Include the required Platform values from your existing installation in addition to these Seqera AI values. -The Seqera AI components can be installed alongside Platform and other subcharts in a single Helm release, or can be installed individually as separate releases. 
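The configuration examples in the following sections reference a single Kubernetes Secret named `seqera-ai-secrets` for sensitive values. As a sketch of how you might create it before installing (the namespace, key names, and placeholder values are illustrative; match them to what your values file actually references):

```bash
# Illustrative only: create one Secret holding the sensitive values that the
# Seqera AI subcharts reference. Replace the placeholder values with your own.
kubectl --namespace seqera create secret generic seqera-ai-secrets \
  --from-literal=AGENT_BACKEND_DB_PASSWORD='<db-password>' \
  --from-literal=AGENT_BACKEND_REDIS_PASSWORD='<redis-password>' \
  --from-literal=TOKEN_ENCRYPTION_KEY="$(python -c 'from cryptography.fernet import Fernet; print(Fernet.generate_key().decode())')"
```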
+```yaml +global: + platformExternalDomain: platform.example.com + mcpDomain: mcp.platform.example.com + agentBackendDomain: ai-api.platform.example.com + portalWebDomain: ai.platform.example.com -Documentation for the individual charts is available at: -- [Agent backend](https://github.com/seqeralabs/helm-charts/tree/master/platform/charts/agent-backend) -- [MCP server](https://github.com/seqeralabs/helm-charts/tree/master/platform/charts/mcp) -- [Portal web interface](https://github.com/seqeralabs/helm-charts/tree/master/platform/charts/portal-web) +mcp: + enabled: true -### Additional configuration +agent-backend: + enabled: true -The following optional environment variables are not covered by the Helm chart values. Set them in the `.extraEnvVars` section of each chart as needed. +portal-web: + enabled: true +``` -#### Agent backend +For a complete example, see the [Seqera AI Helm example](https://github.com/seqeralabs/helm-charts/tree/master/charts/platform/examples/seqera-ai). -| Variable | Description | Default | -|----------|-------------|---------| -| `ANTHROPIC_MODEL` | Primary model for AI interactions | `claude-sonnet-4-6` | -| `FAST_MODEL` | Model for quick tasks (search, summaries) | `claude-haiku-4-5-20251001` | -| `DEEP_MODEL` | Model for complex planning tasks | `claude-opus-4-5-20251101` | -| `SEQERA_PLATFORM_URL` | Platform UI URL for constructing links to runs and pipelines | Automatically derived from platform domain | -| `SESSION_TIMEOUT_SECONDS` | Session timeout | `86400` (24 hours) | -| `MAX_SESSIONS_PER_USER` | Max concurrent sessions per user | `10` | -| `SESSION_RETENTION_DAYS` | Days to retain session data | `14` | -| `CORS_ORIGINS` | Allowed CORS origins (JSON array) | `["*"]` | +## Configure MCP -## Verify the installation +When MCP runs under the Platform parent chart, leave `mcp.oidcToken` unset unless you need to override the default wiring. 
The parent chart sets it to the Platform backend Secret key `OIDC_CLIENT_REGISTRATION_TOKEN`.

```yaml
mcp:
  enabled: true
  oauth:
    audience: platform
    jwtSeedSecretName: seqera-ai-secrets
```

MCP uses the `oauth-platform` Micronaut environment by default. This configures Platform as the OAuth provider and sets the expected audience to `platform`.

-1. Check the health endpoint of the agent backend and mcp to verify connectivity:
-
-    ```bash
-    curl -i https://ai-api.platform.example.com/health
-    curl -i https://mcp.platform.example.com/health
-    curl -i https://mcp.platform.example.com/service-info
-    ```

## Configure the agent backend

The agent backend needs MySQL, Redis or Valkey, Bedrock access, MCP connectivity, and a stable token encryption key.

```yaml
agent-backend:
  enabled: true

  database:
    host: mysql.example.com
    name: agent_backend
    username: agent_backend
    existingSecretName: seqera-ai-secrets
    existingSecretKey: AGENT_BACKEND_DB_PASSWORD

  redis:
    host: redis.example.com
    db: 0
    existingSecretName: seqera-ai-secrets
    existingSecretKey: AGENT_BACKEND_REDIS_PASSWORD

  tokenEncryptionKeyExistingSecretName: seqera-ai-secrets
  extraEnvVars:
    - name: ORG_CREDITS_ENABLED
      value: "false"
```

Use the `redis` values block for Redis-compatible services, including Valkey.

The chart derives these runtime URLs from `global.*` values:

| Environment variable | Derived from |
| --- | --- |
| `SEQERA_PLATFORM_URL` | `https://<global.platformExternalDomain>` |
| `SEQERA_PLATFORM_API_URL` | `http://<global.platformServiceAddress>:<global.platformServicePort>` |
| `SEQERA_MCP_URL` | `https://<global.mcpDomain>/mcp` |
| `ORG_CREDITS_ENABLED` | Set with `agent-backend.extraEnvVars`; use `false` for Enterprise deployments that do not use Seqera Cloud credits. |

### Configure AWS Bedrock

Configure Bedrock so inference and embeddings run in your AWS account. The chart enables Bedrock inference and Bedrock embeddings for Enterprise deployments.
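Before you set these values, you can optionally confirm that the target AWS account has access to the models you plan to use. This check runs the standard AWS CLI from any machine with credentials for that account; it is a convenience check, not part of the chart, and the region is an example to adjust:

```bash
# List the Anthropic model IDs available to this account in the chosen region.
aws bedrock list-foundation-models \
  --region eu-west-2 \
  --by-provider anthropic \
  --query 'modelSummaries[].modelId'
```

If the Claude models or the Titan embedding model are missing, request access as described in the AWS model-access documentation linked in the prerequisites.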
```yaml
agent-backend:
  bedrockAssumeRoleArn: arn:aws:iam::<account-id>:role/<role-name>
  bedrockAnthropicModel: arn:aws:bedrock:<region>:<account-id>:inference-profile/<inference-profile-id>

  embeddings:
    bedrock:
      region: eu-west-2
      modelId: amazon.titan-embed-text-v2:0
      dimensions: "1024"
```

Use `bedrockAssumeRoleArn` when the agent backend pod must assume a role for Bedrock inference, Bedrock embeddings, or AgentCore access. Leave it empty when the pod already has direct AWS credentials for the target account.

### Configure AgentCore sandbox sessions

If your deployment uses AWS Bedrock AgentCore for sandboxed execution, set the AgentCore runtime ARN:

```yaml
agent-backend:
  bedrockAgentCoreArn: arn:aws:bedrock-agentcore:<region>:<account-id>:runtime/<runtime-id>
```

-Set `SEQERA_AI_BACKEND_URL` before running `seqera ai` so the CLI connects to the correct backend.

## Configure the portal web interface

-Install the CLI first by following [Seqera AI CLI installation](../seqera-ai/installation.mdx), or install it directly with:

The portal web chart serves the browser interface and proxies requests to the agent backend. It derives Platform OIDC settings from the Platform domain and uses the fixed Enterprise client values required by the application.

```yaml
portal-web:
  enabled: true
```

Expose MCP, agent backend, and portal web through the chart ingress only if you use Kubernetes Ingress. If you use the Gateway API or another network layer, configure that layer instead.

The chart sets:

| Environment variable | Value |
| --- | --- |
| `SEQERA_PLATFORM_API_URL` | `http://<global.platformServiceAddress>:<global.platformServicePort>` |
| `SEQERA_PLATFORM_APP_URL` | `https://<global.platformExternalDomain>` |
| `SEQERA_AGENT_BACKEND_URL` | `https://<global.agentBackendDomain>` |
| `SEQERA_AUTH_DOMAIN` | `https://<global.platformExternalDomain>/api` |

Set optional observability or feature flag variables with `portal-web.extraEnvVars` only if your Enterprise environment uses them.
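If you do set extra variables, `portal-web.extraEnvVars` takes a standard Kubernetes environment variable list, mirroring the `agent-backend.extraEnvVars` example earlier on this page. The variable name below is a hypothetical placeholder, not a documented flag:

```yaml
portal-web:
  enabled: true
  extraEnvVars:
    # Hypothetical example only; set the variables your deployment actually uses.
    - name: EXAMPLE_FEATURE_FLAG
      value: "true"
```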
+ +## Install or upgrade + +Run Helm with your Platform values and Seqera AI overrides: ```bash -npm install -g seqera +helm upgrade --install seqera oci://public.cr.seqera.io/charts/platform \ + --namespace seqera \ + --values values.yaml ``` -Use your Enterprise deployment: +After installation, verify the pods are ready: ```bash -export SEQERA_AUTH_DOMAIN=https://platform.example.com/api -export SEQERA_AUTH_CLI_CLIENT_ID=seqera_ai_cli -export SEQERA_AI_BACKEND_URL=https://ai.platform.example.com -seqera login -seqera ai +kubectl get pods -n seqera -l app.kubernetes.io/component=mcp +kubectl get pods -n seqera -l app.kubernetes.io/component=agent-backend +kubectl get pods -n seqera -l app.kubernetes.io/component=portal-web ``` -If your Enterprise deployment uses a different OAuth client ID for the CLI, replace `seqera_ai_cli` with the value configured for your installation. +## Verify the installation -If you are testing a development build of the CLI against the hosted production Seqera AI service, use the following settings instead: +Check the public endpoints: + +```bash +curl -i https://ai-api.platform.example.com/health +curl -i https://mcp.platform.example.com/health +curl -i https://mcp.platform.example.com/service-info +curl -I https://ai.platform.example.com +``` -| Variable | Purpose | Example value | -| --- | --- | --- | -| `SEQERA_AI_BACKEND_URL` | Seqera AI backend endpoint used by the CLI | `https://ai-api.seqera.io` | -| `SEQERA_AUTH_DOMAIN` | Platform API base URL used for browser-based login | `https://cloud.seqera.io/api` | -| `SEQERA_AUTH_CLI_CLIENT_ID` | OAuth client ID for the Seqera AI CLI | `seqera_ai_cli` | -| `SEQERA_ACCESS_TOKEN` | Platform personal access token used instead of browser login (`TOWER_ACCESS_TOKEN` also supported) | `` | +The agent backend `/health` endpoint returns `200 OK` when the service starts and required dependencies are reachable. 
The MCP server exposes `/health` for reachability and `/service-info` for server and protocol information. The portal web interface does not expose a matching `/service-info` endpoint; use the HTTP response and browser sign-in test to confirm it is reachable. -Use the OAuth login flow: +Open the portal web interface, for example `https://ai.platform.example.com`, and sign in with your Platform account. A successful login confirms that Platform OIDC, portal web, and the agent backend are connected. + +## Connect the CLI to Seqera AI + +Install the CLI from the official [`seqera` npm package](https://www.npmjs.com/package/seqera): + +```bash +npm install -g seqera +``` + +Point the CLI at your Enterprise deployment: ```bash -export SEQERA_AUTH_DOMAIN=https://cloud.seqera.io/api -export SEQERA_AUTH_CLI_CLIENT_ID=seqera_ai_cli -export SEQERA_AI_BACKEND_URL=https://ai-api.seqera.io +export SEQERA_AUTH_DOMAIN=https://platform.example.com/api +export SEQERA_AI_BACKEND_URL=https://ai-api.platform.example.com seqera ai ``` -Use a Platform personal access token instead of browser login: +Set `SEQERA_AUTH_CLI_CLIENT_ID` only if your deployment uses a CLI OAuth client ID other than the default `seqera_ai_cli`. + +For automated environments, use a Platform access token instead of browser login. Current CLI builds still require `SEQERA_AUTH_DOMAIN` so the CLI can target the correct Enterprise Platform authority. ```bash -export SEQERA_ACCESS_TOKEN= -export SEQERA_AI_BACKEND_URL=https://ai-api.seqera.io +export SEQERA_AUTH_DOMAIN=https://platform.example.com/api +export TOWER_ACCESS_TOKEN= +export SEQERA_AI_BACKEND_URL=https://ai-api.platform.example.com seqera ai ``` -You only need `SEQERA_AUTH_DOMAIN` and `SEQERA_AUTH_CLI_CLIENT_ID` when using the OAuth login flow. `SEQERA_ACCESS_TOKEN` (`TOWER_ACCESS_TOKEN`) is also supported. 
- -## Environment variables reference - -### Required - -| Variable | Description | -|----------|-------------| -| `SEQERA_PLATFORM_API_URL` | Platform API URL (e.g., `https://platform.example.com/api`) | -| `SEQERA_MCP_URL` | MCP server URL (e.g., `https://mcp.example.com/mcp`) | -| `ANTHROPIC_API_KEY` | API key for inference provider | -| `AGENT_BACKEND_DB_HOST` | MySQL database hostname | -| `AGENT_BACKEND_DB_NAME` | MySQL database name | -| `AGENT_BACKEND_DB_USER` | MySQL database username | -| `AGENT_BACKEND_DB_PASSWORD` | MySQL database password | -| `TOKEN_ENCRYPTION_KEY` | Fernet encryption key for encrypting sensitive tokens at rest. Also accepted as `AGENT_BACKEND_TOKEN_ENCRYPTION_KEY`. | - -### Optional - -| Variable | Description | Default | -|----------|-------------|---------| -| `SEQERA_PLATFORM_URL` | Platform UI URL for constructing links to runs and pipelines | Derived from platform domain | -| `AGENT_BACKEND_DB_PORT` | MySQL port | `3306` | -| `SESSION_TIMEOUT_SECONDS` | Session timeout | `86400` (24 hours) | -| `MAX_SESSIONS_PER_USER` | Max concurrent sessions per user | `10` | -| `SESSION_RETENTION_DAYS` | Days to retain session data | `14` | -| `LOG_LEVEL` | Application log level (`CRITICAL`, `ERROR`, `WARNING`, `INFO`, `DEBUG`) | `INFO` | -| `CORS_ORIGINS` | Allowed CORS origins (JSON array) | `["*"]` | - -## Helm values reference - -For the full list of configuration options, see the [agent-backend chart documentation](https://github.com/seqeralabs/helm-charts/tree/master/platform/charts/agent-backend). 
- -### Global - -| Value | Description | Default | -|-------|-------------|---------| -| `global.platformExternalDomain` | Domain where Seqera Platform listens | `example.com` | -| `global.agentBackendDomain` | Domain where the agent backend listens | `""` | -| `global.mcpDomain` | Domain where MCP server listens | `""` | - -### Agent backend - -| Value | Description | Default | -|-------|-------------|---------| -| `agentBackend.replicaCount` | Number of replicas | `1` | -| `agentBackend.image.registry` | Image registry | `cr.seqera.io` | -| `agentBackend.image.repository` | Image repository | `ai/agent-backend/backend` | -| `anthropicApiKeyExistingSecretName` | Existing secret containing `ANTHROPIC_API_KEY` | `""` | -| `tokenEncryptionKeyExistingSecretName` | Existing secret containing `TOKEN_ENCRYPTION_KEY` | `""` | - -### Database - -| Value | Description | Default | -|-------|-------------|---------| -| `database.host` | MySQL hostname | `""` | -| `database.port` | MySQL port | `3306` | -| `database.name` | MySQL database name | `""` | -| `database.username` | MySQL username | `""` | -| `database.existingSecretName` | Existing secret with DB password | `""` | -| `database.existingSecretKey` | Key in the secret | `DB_PASSWORD` | - -### Ingress - -| Value | Description | Default | -|-------|-------------|---------| -| `ingress.enabled` | Enable ingress | `false` | -| `ingress.path` | Ingress path (use `/*` for AWS ALB) | `/` | -| `ingress.ingressClassName` | Ingress class name | `""` | -| `ingress.annotations` | Ingress annotations | `{}` | -| `ingress.tls` | TLS configuration | `[]` | +Set `SEQERA_AUTH_CLI_CLIENT_ID` only for OAuth deployments that use a non-default CLI client ID. `SEQERA_ACCESS_TOKEN` and `TOWER_ACCESS_TOKEN` are supported for token-based authentication. + +## Usage and cost + +Enterprise deployments do not use Seqera Cloud credit balances or the Cloud credit request flow. 
Usage and inference costs are managed by your organization through AWS Bedrock. + +When `ORG_CREDITS_ENABLED=false` is set on the agent backend deployment, the CLI `/credits` command reports that usage is managed by your organization and directs users to their Seqera administrator. ## Security considerations -- **Token validation**: Every request validates the user's Platform token -- **User isolation**: Sessions are isolated by user ID -- **Credential passthrough**: MCP tools use the user's credentials for Platform operations -- **Token encryption**: Sensitive tokens (e.g., GitHub PATs) are encrypted at rest using Fernet symmetric encryption before storage in the database -- **No credential storage**: The agent backend does not store user credentials -- **TLS required**: All communication should use HTTPS +- Use HTTPS for every exposed hostname. +- Store all sensitive values in Kubernetes Secrets. +- Keep the agent backend Fernet token encryption key stable across upgrades. Changing it prevents the backend from decrypting existing encrypted values. +- For user-scoped operations, MCP uses the signed-in user's Platform token to call Platform APIs. Do not configure a shared administrator token for these calls. +- Use a separate MySQL database and user for the agent backend, even if they are hosted on the same MySQL instance as Platform. +- Enable Redis or Valkey TLS and MySQL TLS when your managed services require encrypted connections. -## Next steps +## Learn more -- See [Use cases](../seqera-ai/use-cases.md) for CLI usage. +- [Co-Scientist in the Seqera CLI](../seqera-ai/index.md): Co-Scientist documentation. +- [Seqera AI Helm example](https://github.com/seqeralabs/helm-charts/tree/master/charts/platform/examples/seqera-ai): Example Platform values for the Seqera AI subcharts. +- [Agent backend chart](https://github.com/seqeralabs/helm-charts/tree/master/charts/platform/charts/agent-backend): Full agent backend values reference. 
+- [MCP chart](https://github.com/seqeralabs/helm-charts/tree/master/charts/platform/charts/mcp): Full MCP values reference. +- [Portal web chart](https://github.com/seqeralabs/helm-charts/tree/master/charts/platform/charts/portal-web): Full portal web values reference. diff --git a/platform-enterprise_docs/seqera-ai/authentication.md b/platform-enterprise_docs/seqera-ai/authentication.md index ff11595d8..d118e7a56 100644 --- a/platform-enterprise_docs/seqera-ai/authentication.md +++ b/platform-enterprise_docs/seqera-ai/authentication.md @@ -1,21 +1,14 @@ --- title: "Authentication" -description: "Login, logout, and session management for Seqera AI CLI" -date: "15 Dec 2025" -tags: [seqera-ai, cli, authentication, login] +description: "Login, logout, and session management for Seqera CLI" +date created: "2025-12-15" +last updated: "2026-05-05" +tags: [seqera-ai, co-scientist, cli, authentication, login] --- -:::caution Seqera AI CLI is in beta -Seqera AI CLI is currently in beta. Features and commands may change as we continue to improve the product. -::: +Co-Scientist uses your Seqera Platform account for authentication. This page describes authentication concepts and step-by-step instructions for managing your sessions. -:::note -Seqera Cloud users receive $20 in free credits to get started with Seqera AI. [Contact us](https://seqera.io/platform/seqera-ai/request-credits/) for additional credits. -::: - -Seqera AI uses your Seqera Platform account for authentication. This page describes authentication concepts and step-by-step instructions for managing your sessions. - -## Authenticating Seqera AI +## Authenticating Co-Scientist ### Log in @@ -27,10 +20,10 @@ seqera login This will: -1. Open your default browser to the Seqera login page -1. Prompt you to sign in with your Seqera Platform credentials -1. Automatically capture the authentication token -1. Display a success message in your terminal +1. Open your default browser to the Seqera login page. +1. 
Prompt you to sign in with your Seqera Platform credentials. +1. Automatically capture the authentication token. +1. Display a success message in your terminal. ``` [Login] Starting Seqera CLI authentication... @@ -59,7 +52,36 @@ seqera ai When this environment variable is set, the CLI skips the OAuth login flow and uses the provided token directly. -For Enterprise backend connection settings and development-build examples, see [Install Seqera AI](../enterprise/install-seqera-ai.md#connect-the-cli-to-seqera-ai). +### Connect to an Enterprise backend + +Set the following environment variables before starting `seqera ai`: + +| Variable | Purpose | Example value | +| --- | --- | --- | +| `SEQERA_AI_BACKEND_URL` | Co-Scientist backend endpoint used by the CLI | `https://ai-api.platform.example.com` | +| `SEQERA_AUTH_DOMAIN` | OIDC authority base URL. The CLI fetches OpenID configuration from this URL and opens the discovered authorization endpoint in your browser. | `https://platform.example.com/api` | +| `SEQERA_AUTH_CLI_CLIENT_ID` | OAuth client ID for the Co-Scientist CLI | `seqera_ai_cli` | +| `TOWER_ACCESS_TOKEN` | Platform personal access token used instead of browser login | `` | + +Use the OAuth login flow: + +```bash +export SEQERA_AUTH_DOMAIN=https://platform.example.com/api +export SEQERA_AUTH_CLI_CLIENT_ID=seqera_ai_cli +export SEQERA_AI_BACKEND_URL=https://ai-api.platform.example.com +seqera ai +``` + +Use a Platform personal access token instead of browser login: + +```bash +export SEQERA_AUTH_DOMAIN=https://platform.example.com/api +export TOWER_ACCESS_TOKEN= +export SEQERA_AI_BACKEND_URL=https://ai-api.platform.example.com +seqera ai +``` + +Set `SEQERA_AUTH_CLI_CLIENT_ID` only for OAuth deployments that use a non-default CLI client ID. Current CLI builds still require `SEQERA_AUTH_DOMAIN` for Enterprise token-based authentication so the CLI can target the correct Platform authority. 
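As a quick sanity check before you run `seqera login` against an Enterprise deployment, you can fetch the OpenID discovery document that the CLI relies on. This assumes your authority publishes it at the standard discovery path:

```bash
curl -fsS "$SEQERA_AUTH_DOMAIN/.well-known/openid-configuration"
```

A JSON response that includes an `authorization_endpoint` confirms the CLI can discover your Platform authority; an error here usually means `SEQERA_AUTH_DOMAIN` is set incorrectly or the endpoint is not reachable from your machine.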
### Log out @@ -73,7 +95,7 @@ This command revokes your current authentication token and removes locally store ## Organization management -Seqera AI CLI supports managing your organization selection for billing. Use the `seqera org` command to view and switch organizations. +Co-Scientist CLI supports managing your organization selection for billing. Use the `seqera org` command to view and switch organizations. **View current organization**: @@ -101,7 +123,7 @@ seqera org clear ## Token refresh -Seqera AI CLI automatically refreshes your authentication token when needed. You are not required to log in again unless: +Co-Scientist CLI automatically refreshes your authentication token when needed. You are not required to log in again unless: - You explicitly log out - Your refresh token expires (typically after extended inactivity) @@ -109,8 +131,9 @@ Seqera AI CLI automatically refreshes your authentication token when needed. You ## Learn more -- [Seqera AI CLI](index.md): Seqera AI CLI overview +- [Co-Scientist CLI](index.md): Co-Scientist CLI overview - [Installation](./installation.mdx): Detailed installation instructions - [Command approval](./command-approval.md): Control which commands run automatically -- [Use cases](./use-cases.md): Seqera AI use cases +- [Use cases](./use-cases.md): Co-Scientist use cases +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments - [Troubleshooting](../troubleshooting_and_faqs/seqera-ai.md): Troubleshoot common errors diff --git a/platform-enterprise_docs/seqera-ai/command-approval.md b/platform-enterprise_docs/seqera-ai/command-approval.md index a3b969c28..b304e1b91 100644 --- a/platform-enterprise_docs/seqera-ai/command-approval.md +++ b/platform-enterprise_docs/seqera-ai/command-approval.md @@ -1,20 +1,16 @@ --- title: "Command approval" -description: "Control which local commands require user approval in Seqera AI" -date: "15 Dec 2025" -tags: [seqera-ai, cli, approval, security] +description: 
"Control which local commands require user approval in Co-Scientist" +date created: "2025-12-15" +tags: [seqera-ai, co-scientist, cli, approval, security] --- -:::caution Seqera AI CLI is in beta -Seqera AI CLI is currently in beta. Features and commands may change as we continue to improve the product. -::: +Co-Scientist can execute local commands and edit files in your environment. This page explains approval modes that control which operations run automatically versus which require your permission, including dangerous commands, workspace boundaries, and best practices. -:::note -Seqera Cloud users receive $20 in free credits to get started with Seqera AI. [Contact us](https://seqera.io/platform/seqera-ai/request-credits/) for additional credits. +:::info +Starting a persistent task with `/goal ` switches the session to `full` approval mode automatically so Co-Scientist can continue working without repeated prompts. ::: -Seqera AI can execute local commands and edit files in your environment. This page explains approval modes that control which operations run automatically versus which require your permission, including dangerous commands, workspace boundaries, and best practices. - ## Approval prompts When a command requires approval, you will see a prompt similar to: @@ -38,7 +34,7 @@ You can: ## Approval modes -Approval modes control which local commands Seqera AI can execute automatically and which require your explicit approval. This provides a balance between convenience and safety when working with local files and commands. +Approval modes control which local commands Co-Scientist can execute automatically and which require your explicit approval. This provides a balance between convenience and safety when working with local files and commands. 
There are three approval modes: @@ -227,8 +223,9 @@ seqera ai ## Learn more -- [Seqera AI CLI](index.md): Seqera AI CLI overview +- [Co-Scientist CLI](index.md): Co-Scientist CLI overview - [Installation](./installation): Detailed installation instructions - [Authentication](./authentication): Log in, log out, and session management -- [Use cases](./use-cases.md): Seqera AI use cases +- [Use cases](./use-cases.md): Co-Scientist use cases +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments - [Troubleshooting](../troubleshooting_and_faqs/seqera-ai.md): Troubleshoot common errors diff --git a/platform-enterprise_docs/seqera-ai/get-started.md b/platform-enterprise_docs/seqera-ai/get-started.md index 19b364ce1..c47928cf6 100644 --- a/platform-enterprise_docs/seqera-ai/get-started.md +++ b/platform-enterprise_docs/seqera-ai/get-started.md @@ -1,23 +1,15 @@ --- title: "Get started" description: "AI-powered assistant for bioinformatics workflows and Seqera Platform" -date: "2025-12-16" -tags: [seqera-ai, cli, ai] +date created: "2026-03-11" +tags: [seqera-ai, co-scientist, cli, ai] --- -:::caution Seqera AI CLI is in beta -Seqera AI CLI is currently in beta. Features and commands may change as we continue to improve the product. -::: - -:::note -Seqera Cloud users receive $20 in free credits to get started with Seqera AI. [Contact us](https://seqera.io/platform/seqera-ai/request-credits/) for additional credits. -::: - ## Get started -To get started with Seqera AI: +To get started with Co-Scientist: -1. Install the Seqera AI CLI: +1. Install the Co-Scientist CLI: ```bash npm install -g seqera @@ -33,32 +25,55 @@ To get started with Seqera AI: See [Authentication](./authentication.md) for a comprehensive authentication guide. 
- If you are testing a development build of the CLI against the hosted production Seqera AI service, see [Install Seqera AI](../enterprise/install-seqera-ai.md#connect-the-cli-to-seqera-ai) for the required environment variables. + For Enterprise deployments, set `SEQERA_AI_BACKEND_URL` to your organization's agent backend before you start Co-Scientist. Your administrator should provide this URL; it maps to `global.agentBackendDomain` in the [Seqera AI install guide](../enterprise/install-seqera-ai.md). See [Authentication](./authentication.md#connect-to-an-enterprise-backend) for the full environment variable reference. -1. Start Seqera AI: +1. Start Co-Scientist: ```bash seqera ai ``` - You can also start with an initial query: +1. Open the command palette and review the available built-in commands and skills: - ```bash - seqera ai "list my pipelines" + ``` + /help + ``` + + You can also type `/` to open command autocomplete. + +1. Try build mode and plan mode: + + - Press `Shift+Tab` to switch between **build** and **plan** + - Check the current mode in the composer footer + - Run `/status` if you want a full status readout + + Example plan-mode prompt: + + ``` + Compare whether I should add FastQC or fastp as the first QC step in this RNA-seq pipeline, including the workflow changes each option would require + ``` + +1. Run your first workflow-focused prompt: + + ``` + /debug-last-run-on-seqera ``` -1. Run your first prompt: +1. Try goal mode for a longer task: ``` - /debug + /goal update this pipeline for AWS Batch and add nf-tests ``` - See [Use cases](./use-cases.md) for a comprehensive list of use cases. + See [Use cases](./use-cases.md) for more examples. 
## Learn more - [Installation](./installation.mdx): Detailed installation instructions - [Authentication](./authentication.md): Log in, log out, and session management +- [Skills](./skills.md): Discover, create, and install skills +- [Modes](./modes.md): Work in build mode, plan mode, and goal mode - [Command approval](./command-approval.md): Control which commands run automatically -- [Use cases](./use-cases.md): Seqera AI CLI use cases +- [Use cases](./use-cases.md): Co-Scientist CLI use cases +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments - [Troubleshooting](../troubleshooting_and_faqs/seqera-ai.md): Troubleshoot common errors diff --git a/platform-enterprise_docs/seqera-ai/index.md b/platform-enterprise_docs/seqera-ai/index.md index e85b39727..3bc5a340e 100644 --- a/platform-enterprise_docs/seqera-ai/index.md +++ b/platform-enterprise_docs/seqera-ai/index.md @@ -1,39 +1,48 @@ --- -title: "Seqera AI CLI" +title: "Co-Scientist in the Seqera CLI" description: "AI-powered assistant for bioinformatics workflows and Seqera Platform" -date: "2025-12-15" -tags: [seqera-ai, cli, ai] +date created: "2026-03-11" +last updated: "2026-04-29" +tags: [seqera-ai, co-scientist, cli, ai] --- -:::caution Seqera AI CLI is in beta -Seqera AI CLI is currently in beta. Features and commands may change as we continue to improve the product. -::: +Co-Scientist combines self-service bioinformatics, interactive intelligence, and autonomous execution into one unified experience. -:::note -Seqera Cloud users receive $20 in free credits to get started with Seqera AI. [Contact us](https://seqera.io/platform/seqera-ai/request-credits/) for additional credits. -::: +You can use the Seqera CLI to build, run, and manage bioinformatics workflows through an interactive terminal experience designed for Nextflow pipelines and Seqera Platform. -Seqera AI CLI is an intelligent command-line assistant that helps you build, run, and manage bioinformatics workflows. 
Powered by advanced AI, it provides an interactive terminal experience for working with Nextflow pipelines and Seqera Platform. - -Seqera AI has access to: +Co-Scientist has access to: - **Your Seqera Platform workspace**: View and manage workflows, pipelines, and data through your authenticated account - **Your local environment**: Execute commands and edit files in your working directory (with configurable approval controls) - **AI capabilities**: Natural language understanding, code generation, and intelligent suggestions -## Seqera AI features +Co-Scientist can load local project context, including instruction files such as `AGENTS.md`, `CLAUDE.md`, and `CONTEXT.md`, project skills, and files under `.seqera/docs` and `.seqera/memory`. Review local context files before starting a session in a repository that contains sensitive instructions or private run history. + +## Co-Scientist features + +### Skills + +Co-Scientist supports reusable skills for common workflows. Backend skills are exposed as slash commands in the `/` command palette and `/help`, and project or user `SKILL.md` files are discovered automatically from standard skill directories. ### Natural language interface Interact with Seqera Platform using plain English. Ask questions, launch workflows, and manage pipelines through conversational commands. +### Build and plan modes + +Switch between **build** and **plan** modes during an interactive session with `Shift+Tab`. Build mode is the default for execution and file changes, while plan mode is optimized for analysis, implementation planning, and read-only investigation. + +### Goal mode + +Use `/goal <description>` to set a persistent goal. Co-Scientist will keep working toward that goal across multiple model attempts until it is complete or the goal attempt limit is reached. + +### Workflow management Launch, monitor, and debug Nextflow workflows directly from your terminal. Get real-time status updates, view logs, and analyze run metrics. 
### Pipeline development -Generate Nextflow configurations, create pipeline schemas, and convert scripts from other workflow languages (WDL, Snakemake) to Nextflow. +Generate Nextflow configurations, create pipeline schemas, and convert scripts from other languages (WDL, R) to Nextflow. ### nf-core integration @@ -65,13 +74,21 @@ Full access to Platform capabilities including compute environments, datasets, d ### Projects -Organize a workspace into projects by applying Seqera Platform labels prefixed with `project_`. Each project scopes the pipelines, datasets, workflow runs, and chat context the AI sees, without needing a separate CRUD surface in Seqera AI. +Organize a workspace into projects by applying Seqera Platform labels prefixed with `project_`. Each project scopes the pipelines, datasets, workflow runs, and chat context the AI sees, without needing a separate CRUD surface in Co-Scientist. ## Learn more - [Installation](./installation.mdx): Detailed installation instructions - [Authentication](./authentication.md): Log in, log out, and session management +- [Skills](./skills.md): Discover, create, and install skills +- [Modes](./modes.md): Work in build mode, plan mode, and goal mode - [Command approval](./command-approval.md): Control which commands run automatically -- [Use cases](./use-cases.md): Seqera AI CLI use cases +- [Working with Claude Code](./skill-claude-code.md): Install Co-Scientist as a skill for Claude Code +- [Working with Codex](./skill-codex.md): Install Co-Scientist as a skill for Codex +- [Working with GitHub Copilot](./skill-github-copilot.md): Install Co-Scientist as a skill for GitHub Copilot +- [Working with other coding agents](./skill-other-agents.md): Install Co-Scientist for other coding agents +- [Code intelligence](./nextflow-lsp.md): Language-server support in Co-Scientist +- [Use cases](./use-cases.md): Co-Scientist CLI use cases - [Projects](./projects.md): Organize workspace resources into projects using Platform 
labels +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments - [Troubleshooting](../troubleshooting_and_faqs/seqera-ai.md): Troubleshoot common errors diff --git a/platform-enterprise_docs/seqera-ai/installation.mdx b/platform-enterprise_docs/seqera-ai/installation.mdx index c1c414ace..73c061a33 100644 --- a/platform-enterprise_docs/seqera-ai/installation.mdx +++ b/platform-enterprise_docs/seqera-ai/installation.mdx @@ -1,133 +1,100 @@ --- title: "Installation" -description: "Install and configure Seqera AI CLI" -date: "15 Dec 2025" -tags: [seqera-ai, cli, installation] +description: "Install and configure Co-Scientist CLI" +date created: "2026-03-11" +tags: [seqera-ai, co-scientist, cli, installation] --- -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; +### Requirements -:::caution Seqera AI CLI is in beta -Seqera AI CLI is currently in beta. Features and commands may change as we continue to improve the product. -::: - -:::note -Seqera Cloud users receive $20 in free credits to get started with Seqera AI. [Contact us](https://seqera.io/platform/seqera-ai/request-credits/) for additional credits. 
-::: - -## Requirements +- Node.js 18 or later +- macOS, Linux, or Windows with WSL +- A user account on your Seqera Platform Enterprise deployment +- Network access to your Enterprise agent backend -- macOS (Apple Silicon or Intel) or Linux x64 -- A Seqera Platform account ([sign up for free](https://cloud.seqera.io)) -- An internet connection +### npm install -## Install - - - - -Install globally with npm: +Install the CLI from the official [`seqera` npm package](https://www.npmjs.com/package/seqera): ```bash npm install -g seqera ``` -Verify your installation: +Install the development build: ```bash -seqera --help +npm install -g seqera@dev ``` -:::tip -You can also install with yarn, pnpm, or bun: +Verify your installation: ```bash -yarn global add seqera -# or -pnpm add -g seqera -# or -bun add -g seqera +seqera --version ``` -::: - - +### npm update -Download the binary for your platform: +```bash +npm update -g seqera +``` -| Platform | Binary | -|----------|--------| -| macOS Apple Silicon | `seqera-darwin-arm64.tar.gz` | -| macOS Intel | `seqera-darwin-x64.tar.gz` | -| Linux x64 | `seqera-linux-x64.tar.gz` | +### Enterprise configuration -Extract and add to your PATH: +Set the agent backend URL before starting Co-Scientist: ```bash -# Example: macOS Apple Silicon -tar -xzf seqera-darwin-arm64.tar.gz -chmod +x seqera -sudo mv seqera /usr/local/bin/ +export SEQERA_AI_BACKEND_URL=https://ai-api.platform.example.com ``` -Verify your installation: +If your Enterprise deployment uses Platform OIDC, also set the OIDC authority base URL. The CLI fetches OpenID configuration from this URL and opens the discovered authorization endpoint in your browser: ```bash -seqera --help +export SEQERA_AUTH_DOMAIN=https://platform.example.com/api ``` - - +See [Authentication](./authentication.md#connect-to-an-enterprise-backend) for the complete environment variable reference and OAuth versus token-based examples. 
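Taken together, the two variables above can live in a shell profile so every session targets the right deployment. The following is a sketch only; both hostnames are placeholders that your administrator replaces with your organization's real URLs:

```bash
# Sketch of a shell profile fragment for an Enterprise deployment.
# Both URLs are placeholders; your administrator provides the real values.
export SEQERA_AI_BACKEND_URL=https://ai-api.platform.example.com
export SEQERA_AUTH_DOMAIN=https://platform.example.com/api
```

With both variables exported, running `seqera ai` connects to your organization's agent backend rather than the hosted service.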
-## Upgrade +### Install agent integrations - - +Install Co-Scientist as a skill for your coding agent: ```bash -npm update -g seqera +seqera skill install ``` - - - -Download the latest binary for your platform and replace the existing binary. - - - - -## Developer configuration +Install directly into the current repository: -If you are testing a development build of the CLI against the hosted production Seqera AI service, set `SEQERA_AI_BACKEND_URL=https://ai-api.seqera.io` before running `seqera ai`. +```bash +seqera skill install --local +``` -See [Install Seqera AI](../enterprise/install-seqera-ai.md#connect-the-cli-to-seqera-ai) for the complete environment variable reference and OAuth versus token-based examples. +Check installed skills and update them after upgrading the CLI: -## Uninstall +```bash +seqera skill check --update +``` - - +:::info +If you use Co-Scientist as a skill for a coding agent, run `seqera skill check --update` after updating the CLI to keep your installed skills in sync with the current version. By default, this scans both local and global installations. Use `--global` or `--local` to narrow the scope. See [Working with Claude Code](./skill-claude-code.md), [Working with Codex](./skill-codex.md), [Working with GitHub Copilot](./skill-github-copilot.md), and [Working with other coding agents](./skill-other-agents.md). 
```bash -npm uninstall -g seqera +seqera skill check --update ``` +::: - - - -Remove the `seqera` binary from your PATH: +### npm uninstall ```bash -sudo rm /usr/local/bin/seqera +npm uninstall -g seqera ``` - - - -## Learn more +### Learn more -- [Seqera AI CLI](index.md): Seqera AI CLI overview +- [Co-Scientist CLI](index.md): Co-Scientist CLI overview - [Authentication](./authentication.md): Login, logout, and session management +- [Skills](./skills.md): Discover, create, and install skills +- [Modes](./modes.md): Work in build mode, plan mode, and goal mode - [Command approval](./command-approval.md): Control which commands run automatically -- [Use cases](./use-cases.md): Seqera AI CLI use cases +- [Use cases](./use-cases.md): Co-Scientist CLI use cases +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments - [Troubleshooting](../troubleshooting_and_faqs/seqera-ai.md): Troubleshoot common errors diff --git a/platform-enterprise_docs/seqera-ai/modes.md b/platform-enterprise_docs/seqera-ai/modes.md new file mode 100644 index 000000000..6baf439b3 --- /dev/null +++ b/platform-enterprise_docs/seqera-ai/modes.md @@ -0,0 +1,112 @@ +--- +title: "Modes" +description: "Work in build mode, plan mode, and goal mode in Co-Scientist CLI" +date created: "2026-03-11" +tags: [seqera-ai, co-scientist, cli, modes] +--- + +Co-Scientist includes **build mode**, **plan mode**, and **goal mode** so you can choose the right level of autonomy for each task. + +## Build mode + +Build mode is the default interactive mode. Co-Scientist can: + +- Read and search files +- Execute commands +- Edit or create files +- Carry out workflow changes directly in your workspace + +Use build mode for implementation work, debugging, code generation, and file edits. + +## Plan mode + +Plan mode is optimized for analysis and implementation planning. 
+ +In plan mode, Co-Scientist focuses on: + +- Understanding the problem +- Comparing approaches and trade-offs +- Producing a step-by-step implementation plan +- Reading files and searching code for context + +Plan mode blocks write and execution tools, including: + +- `execute_bash_local` +- `write_file_local` +- `edit_file_local` +- `create_directory_local` + +If the assistant tries to use one of these tools, the request is rejected and the assistant is told to switch back to build mode. + +## Switch between build mode and plan mode + +Toggle modes during a session with `Shift+Tab`. + +You can also: + +- Check the current mode in the composer footer. +- Run `/status` to view the current mode alongside session and LSP status. +- Use `/help` to see mode-aware command guidance. + +## Goal mode + +Goal mode is a persistent workflow for longer tasks. Set a goal with: + +```bash +/goal <description> +``` + +When goal mode is active, Co-Scientist: + +- Keeps working toward the same objective over multiple model attempts. +- Automatically continues if more work is needed. +- Stops when the goal is complete or the goal attempt limit is reached. +- Switches approval mode to `full` so work can continue without repeated prompts. + +Goal mode commands: + +```bash +/goal +/goal off +``` + +Run `/goal` without arguments to inspect the current goal. Run `/goal off` to disable goal mode. + +Co-Scientist currently gives goal mode up to **3 model attempts** before it stops and asks you to start a new goal. + +## Keyboard shortcuts + +| Shortcut | Action | +|----------|--------| +| `Shift+Tab` | Toggle between build mode and plan mode. | +| `Ctrl+Enter` | If your terminal supports it, interrupt the current response and send a queued follow-up immediately. | +| `Esc` | Clear a queued follow-up or interrupt the current response. 
| + +## Examples + +### Plan mode + +```text +Compare whether I should add FastQC or fastp as the first QC step in this RNA-seq pipeline, including the workflow changes each option would require +``` + +```text +Inspect this repository and outline the changes needed for Seqera Platform deployment +``` + +### Goal mode + +```text +/goal migrate this pipeline to DSL2 and add nf-tests +``` + +```text +/goal update this workflow for AWS Batch and verify the config +``` + +## Learn more + +- [Skills](./skills.md): Discover, create, and install skills +- [Command approval](./command-approval.md): Control which commands run automatically +- [Use cases](./use-cases.md): Co-Scientist CLI use cases +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments diff --git a/platform-enterprise_docs/seqera-ai/nextflow-lsp.md b/platform-enterprise_docs/seqera-ai/nextflow-lsp.md new file mode 100644 index 000000000..83b49a26b --- /dev/null +++ b/platform-enterprise_docs/seqera-ai/nextflow-lsp.md @@ -0,0 +1,39 @@ +--- +title: "Language server" +description: "Use language-server code intelligence features in Co-Scientist" +date created: "2026-02-26" +tags: [seqera-ai, co-scientist, cli, lsp, code-intelligence] +--- + +When you ask Co-Scientist to help with code in your workspace, it uses language server (LSP) context to provide: + +- Explanations for errors and warnings in your code. +- Context-aware completions and suggestions. +- Better navigation and understanding across project files. + +For Nextflow projects, this includes diagnostics and code intelligence for scripts and config files. + +### Language support + +| LSP Server | Extensions | Requirements | +|------------|------------|--------------| +| Nextflow | `.nf`, `.config` | Java 17+ installed | +| Python (Pyright) | `.py`, `.pyi` | Auto-installs | +| R | `.r`, `.R`, `.rmd`, `.Rmd` | R runtime installed | + +LSP servers automatically start when you work with files that match these extensions. 
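As an illustration, the extension-to-server routing in the language support table above can be sketched as a simple case match. `pick_server` is a hypothetical helper for this sketch, not a CLI command; the CLI performs this selection internally:

```bash
# Hypothetical sketch of how file extensions from the table map to LSP servers.
# pick_server is illustrative only and not part of the seqera CLI.
pick_server() {
  case "$1" in
    *.nf|*.config)        echo "Nextflow" ;;
    *.py|*.pyi)           echo "Pyright" ;;
    *.r|*.R|*.rmd|*.Rmd)  echo "R" ;;
    *)                    echo "none" ;;
  esac
}

pick_server main.nf       # → Nextflow
pick_server analysis.Rmd  # → R
```

Anything outside these extensions gets no language server, which matches the behavior described under workspace detection below.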
+ +### Workspace detection + +Co-Scientist detects the relevant language context from your active workspace and applies matching intelligence automatically. + +This means you can move between Nextflow, Python, and R files in the same project and get language-aware assistance without manual setup. + +See [Nextflow Language Server](https://github.com/nextflow-io/language-server) for advanced configuration details. + +### Learn more + +- [Co-Scientist CLI](./index.md): Co-Scientist overview +- [Get started](./get-started.md): Start using Co-Scientist +- [Use cases](./use-cases.md): Co-Scientist CLI use cases +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments diff --git a/platform-enterprise_docs/seqera-ai/projects.md b/platform-enterprise_docs/seqera-ai/projects.md index b2455254a..c4a9dabcb 100644 --- a/platform-enterprise_docs/seqera-ai/projects.md +++ b/platform-enterprise_docs/seqera-ai/projects.md @@ -2,28 +2,28 @@ title: "Projects" description: "Organize workspace resources into projects using Seqera Platform labels" date: "2026-04-22" -tags: [seqera-ai, projects, labels] +tags: [seqera-ai, co-scientist, projects, labels] --- -:::caution Seqera AI is in beta -Seqera AI is currently in beta. Features and behavior may change as we continue to improve the product. +:::caution Co-Scientist is in beta +Co-Scientist is currently in beta. Features and behavior may change as we continue to improve the product. ::: -Projects in Seqera AI group the pipelines, datasets, and workflow runs that belong to a single piece of work. You can view and chat about them without the noise of the rest of the workspace. +Projects in Co-Scientist group the pipelines, datasets, and workflow runs that belong to a single piece of work, so you can view and chat about them without the noise of the rest of the workspace. -Projects are not created inside Seqera AI. They are derived from **workspace labels in Seqera Platform** whose names start with `project_`. 
Each matching label surfaces in Seqera AI as a separate project scope, with the Platform label acting as the source of truth for membership. +Projects are not created inside Co-Scientist. They are derived from **workspace labels in Seqera Platform** whose names start with `project_`. Each matching label surfaces in Co-Scientist as a separate project scope, with the Platform label acting as the source of truth for membership. :::note -Projects are part of the Seqera AI **portal web interface**. The portal must be deployed alongside Seqera AI in your Enterprise installation. See [Install Seqera AI](../enterprise/install-seqera-ai.md) for more information. +Projects are part of the Co-Scientist web interface. The portal must be deployed alongside Seqera AI in your Enterprise installation. See [Install Seqera AI](../enterprise/install-seqera-ai.md) for more information. ::: ## How projects are derived -When you open a workspace in the Seqera AI portal web interface: +When you open a workspace in the Co-Scientist web interface: -1. Seqera AI reads the list of workspace labels from the Seqera Platform API. +1. Co-Scientist reads the list of workspace labels from the Seqera Platform API. 2. Any label whose name starts with `project_` becomes a project. -3. An **Entire workspace** view is included alongside your projects. You can see every resource in the workspace. +3. An **Entire workspace** view is always included alongside your projects so you can see every resource in the workspace. 4. Pipelines, datasets, and workflow runs are scoped to a project by matching on its `project_*` label. Because membership lives on the Platform label, adding or removing a resource from a project is the same action as applying or removing the label in Platform. @@ -38,17 +38,17 @@ To create a project: - `project_variant_calling` - `project_chip_seq` 3. Apply the label to the pipelines and datasets that belong to the project. -4. Open Seqera AI. 
The new project appears on the **Projects** page and in the chat project selector on the next page load. +4. Open Co-Scientist. The new project appears on the **Projects** page and in the chat project selector on the next page load. :::tip -Create the label in workspace settings **before** applying it to resources. This ensures the label has a Platform-assigned ID, which Seqera AI needs to auto-attach the label when you upload new datasets into the project. +Create the label in workspace settings **before** applying it to resources. This ensures the label has a Platform-assigned ID, which Co-Scientist needs to auto-attach the label when you upload new datasets into the project. ::: ## Display names -Seqera AI strips the `project_` prefix to produce the display name shown in the portal: +Co-Scientist strips the `project_` prefix to produce the display name shown in the web interface: -| Platform label | Seqera AI display name | +| Platform label | Co-Scientist display name | |-----------------------|-------------------------| | `project_rnaseq` | Project rnaseq | | `project_wgs` | Project wgs | @@ -69,7 +69,7 @@ Once a `project_*` label exists in the workspace and is applied to at least one ### A resource carries a `project_*` label that isn't in the workspace label list -If a pipeline has a `project_*` label but the label has not been created in workspace settings, Seqera AI still surfaces the project, inferred from the pipeline. In this case: +If a pipeline has a `project_*` label but the label has not been created in workspace settings, Co-Scientist still surfaces the project, inferred from the pipeline. In this case: - The project has no Platform-assigned label ID. - Dataset uploads into the project cannot auto-attach the label. 
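The display-name rule shown in the table earlier on this page (strip the `project_` prefix and prepend "Project") can be sketched with shell parameter expansion. `display_name` is an illustration of the rule only, not CLI code:

```bash
# Illustration of the display-name rule: strip the "project_" prefix
# and prepend "Project". Not part of the seqera CLI itself.
display_name() {
  echo "Project ${1#project_}"
}

display_name project_rnaseq  # → Project rnaseq
display_name project_wgs     # → Project wgs
```

Because only the prefix is stripped, the remainder of the label name appears verbatim, underscores included.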
@@ -88,5 +88,7 @@ Ask a workspace admin to create the first `project_*` label to enable projects f ## Learn more -- [Install Seqera AI](../enterprise/install-seqera-ai.md): Deploy Seqera AI and the portal web interface in Enterprise -- [Get started with Seqera AI](./get-started.md): Install and authenticate the Seqera AI CLI +- [Install Seqera AI](../enterprise/install-seqera-ai.md): Deploy the agent backend, MCP server, and web interface in Enterprise +- [Seqera Platform labels](https://docs.seqera.io/platform-cloud/labels/overview): Create and manage workspace labels +- [Get started with Co-Scientist](./get-started.md): Install and authenticate Co-Scientist +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments diff --git a/platform-enterprise_docs/seqera-ai/skill-claude-code.md b/platform-enterprise_docs/seqera-ai/skill-claude-code.md new file mode 100644 index 000000000..d00af7017 --- /dev/null +++ b/platform-enterprise_docs/seqera-ai/skill-claude-code.md @@ -0,0 +1,91 @@ +--- +title: "Working with Claude Code" +description: "Install and maintain the Co-Scientist skill for Claude Code" +date created: "2026-03-11" +tags: [seqera-ai, co-scientist, cli, skills, claude-code] +--- + +The `seqera skill` command installs a skill file that enables [Claude Code](https://claude.ai/code) to use Co-Scientist as a subagent. Once installed, Claude Code can invoke Co-Scientist directly to manage workflows, build containers, query nf-core modules, and more without leaving your environment. 
+ +### `seqera skill install` + +Launch the interactive installer: + +```bash +seqera skill install +``` + +Install to the standard Claude Code location: + +```bash +seqera skill install --path .claude/skills/ +``` + +Install into the current repository root: + +```bash +seqera skill install --local +``` + +Or install to your home directory: + +```bash +seqera skill install --global +``` + +You can also auto-detect and update an existing installation: + +```bash +seqera skill install --detect +``` + +### Usage + +```bash +seqera skill install [OPTIONS] +``` + +### Options + +| Option | Short | Description | +|--------|-------|-------------| +| `--local` | `-l` | Install to repo root | +| `--path ` | `-p` | Install to a custom path (relative or absolute) | +| `--global` | `-g` | Install to home directory | +| `--detect` | `-d` | Auto-detect an existing installation and update it | + +### `seqera skill check` + +Verify that your installed skill matches your current CLI version: + +```bash +seqera skill check +``` + +Update automatically if needed: + +```bash +seqera skill check --update +``` + +### Usage + +```bash +seqera skill check [OPTIONS] +``` + +### Options + +| Option | Short | Description | +|--------|-------|-------------| +| `--update` | `-u` | Automatically update outdated skills | +| `--global` | | Check only global installations | +| `--local` | | Check only local (repository) installations | + +### Learn more + +- [Skills](./skills.md): Discover, create, and install skills +- [Use cases](./use-cases.md): Co-Scientist CLI use cases +- [Code intelligence](./nextflow-lsp.md): Language-aware coding support +- [Installation](./installation.mdx): Detailed installation instructions +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments diff --git a/platform-enterprise_docs/seqera-ai/skill-codex.md b/platform-enterprise_docs/seqera-ai/skill-codex.md new file mode 100644 index 000000000..e6b80d87a --- /dev/null +++ 
b/platform-enterprise_docs/seqera-ai/skill-codex.md @@ -0,0 +1,91 @@ +--- +title: "Working with Codex" +description: "Install and maintain the Co-Scientist skill for Codex" +date created: "2026-03-11" +tags: [seqera-ai, co-scientist, cli, skills, codex] +--- + +The `seqera skill` command installs a skill file that enables [Codex](https://openai.com/codex) to use Co-Scientist as a subagent. Once installed, Codex can invoke Co-Scientist directly to manage workflows, build containers, query nf-core modules, and more without leaving your environment. + +### `seqera skill install` + +Launch the interactive installer: + +```bash +seqera skill install +``` + +Install to your project `AGENTS.md` path: + +```bash +seqera skill install --path AGENTS.md +``` + +Install into the current repository root and let the CLI select the Codex format automatically: + +```bash +seqera skill install --local +``` + +Or install to your home directory: + +```bash +seqera skill install --global +``` + +You can also auto-detect and update an existing installation: + +```bash +seqera skill install --detect +``` + +### Usage + +```bash +seqera skill install [OPTIONS] +``` + +### Options + +| Option | Short | Description | +|--------|-------|-------------| +| `--local` | `-l` | Install to repo root | +| `--path ` | `-p` | Install to a custom path (relative or absolute) | +| `--global` | `-g` | Install to home directory | +| `--detect` | `-d` | Auto-detect an existing installation and update it | + +### `seqera skill check` + +Verify that your installed skill matches your current CLI version: + +```bash +seqera skill check +``` + +Update automatically if needed: + +```bash +seqera skill check --update +``` + +### Usage + +```bash +seqera skill check [OPTIONS] +``` + +### Options + +| Option | Short | Description | +|--------|-------|-------------| +| `--update` | `-u` | Automatically update outdated skills | +| `--global` | | Check only global installations | +| `--local` | | Check only local 
(repository) installations | + +### Learn more + +- [Skills](./skills.md): Discover, create, and install skills +- [Use cases](./use-cases.md): Co-Scientist CLI use cases +- [Code intelligence](./nextflow-lsp.md): Language-aware coding support +- [Installation](./installation.mdx): Detailed installation instructions +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments diff --git a/platform-enterprise_docs/seqera-ai/skill-github-copilot.md b/platform-enterprise_docs/seqera-ai/skill-github-copilot.md new file mode 100644 index 000000000..ef3ef404d --- /dev/null +++ b/platform-enterprise_docs/seqera-ai/skill-github-copilot.md @@ -0,0 +1,91 @@ +--- +title: "Working with GitHub Copilot" +description: "Install and maintain the Co-Scientist skill for GitHub Copilot" +date created: "2026-03-11" +tags: [seqera-ai, co-scientist, cli, skills, github-copilot] +--- + +The `seqera skill` command installs a skill file that enables [GitHub Copilot](https://github.com/features/copilot) to use Co-Scientist as a subagent. Once installed, GitHub Copilot can invoke Co-Scientist directly to manage workflows, build containers, query nf-core modules, and more without leaving your environment. 
+ +### `seqera skill install` + +Launch the interactive installer: + +```bash +seqera skill install +``` + +Install to the standard Copilot instructions file: + +```bash +seqera skill install --path .github/copilot-instructions.md +``` + +Install into the current repository root: + +```bash +seqera skill install --local +``` + +Or install to your home directory: + +```bash +seqera skill install --global +``` + +You can also auto-detect and update an existing installation: + +```bash +seqera skill install --detect +``` + +### Usage + +```bash +seqera skill install [OPTIONS] +``` + +### Options + +| Option | Short | Description | +|--------|-------|-------------| +| `--local` | `-l` | Install to repo root | +| `--path ` | `-p` | Install to a custom path (relative or absolute) | +| `--global` | `-g` | Install to home directory | +| `--detect` | `-d` | Auto-detect an existing installation and update it | + +### `seqera skill check` + +Verify that your installed skill matches your current CLI version: + +```bash +seqera skill check +``` + +Update automatically if needed: + +```bash +seqera skill check --update +``` + +### Usage + +```bash +seqera skill check [OPTIONS] +``` + +### Options + +| Option | Short | Description | +|--------|-------|-------------| +| `--update` | `-u` | Automatically update outdated skills | +| `--global` | | Check only global installations | +| `--local` | | Check only local (repository) installations | + +### Learn more + +- [Skills](./skills.md): Discover, create, and install skills +- [Use cases](./use-cases.md): Co-Scientist CLI use cases +- [Code intelligence](./nextflow-lsp.md): Language-aware coding support +- [Installation](./installation.mdx): Detailed installation instructions +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments diff --git a/platform-enterprise_docs/seqera-ai/skill-other-agents.md b/platform-enterprise_docs/seqera-ai/skill-other-agents.md new file mode 100644 index 
000000000..6fc3f6efe --- /dev/null +++ b/platform-enterprise_docs/seqera-ai/skill-other-agents.md @@ -0,0 +1,100 @@ +--- +title: "Working with other coding agents" +description: "Install and maintain the Co-Scientist skill for other coding agents" +date created: "2026-03-11" +tags: [seqera-ai, co-scientist, cli, skills, coding-agents] +--- + +The `seqera skill` command installs a skill file that enables coding agents such as [Cursor](https://www.cursor.com/), [OpenCode](https://opencode.ai/), [Pi](https://github.com/badlogic/pi-mono), and [Windsurf](https://windsurf.com/) to use Co-Scientist as a subagent. Once installed, these agents can invoke Co-Scientist directly to manage workflows, build containers, query nf-core modules, and more without leaving your environment. + +### Supported agents + +| Agent | Format | +|-------|--------| +| [Cursor](https://www.cursor.com/) | `.cursor/rules/` | +| [OpenCode](https://opencode.ai/) | `.opencode/` | +| [Pi](https://github.com/badlogic/pi-mono) | `.pi/` | +| [Windsurf](https://windsurf.com/) | `.windsurf/rules/` | + +### `seqera skill install` + +Launch the interactive installer: + +```bash +seqera skill install +``` + +Install to a specific agent path: + +```bash +seqera skill install --path +``` + +Install into the current repository root: + +```bash +seqera skill install --local +``` + +Or install to your home directory: + +```bash +seqera skill install --global +``` + +You can also auto-detect and update an existing installation: + +```bash +seqera skill install --detect +``` + +### Usage + +```bash +seqera skill install [OPTIONS] +``` + +### Options + +| Option | Short | Description | +|--------|-------|-------------| +| `--local` | `-l` | Install to repo root | +| `--path ` | `-p` | Install to a custom path (relative or absolute) | +| `--global` | `-g` | Install to home directory | +| `--detect` | `-d` | Auto-detect an existing installation and update it | + +### `seqera skill check` + +Verify that your installed 
skill matches your current CLI version: + +```bash +seqera skill check +``` + +Update automatically if needed: + +```bash +seqera skill check --update +``` + +### Usage + +```bash +seqera skill check [OPTIONS] +``` + +### Options + +| Option | Short | Description | +|--------|-------|-------------| +| `--update` | `-u` | Automatically update outdated skills | +| `--global` | | Check only global installations | +| `--local` | | Check only local (repository) installations | + +### Learn more + +- [Skills](./skills.md): Discover, create, and install skills +- [Use cases](./use-cases.md): Co-Scientist CLI use cases +- [Code intelligence](./nextflow-lsp.md): Language-aware coding support +- [Installation](./installation.mdx): Detailed installation instructions +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments diff --git a/platform-enterprise_docs/seqera-ai/skills.md b/platform-enterprise_docs/seqera-ai/skills.md new file mode 100644 index 000000000..8403dd76a --- /dev/null +++ b/platform-enterprise_docs/seqera-ai/skills.md @@ -0,0 +1,156 @@ +--- +title: "Skills" +description: "Discover, create, and install skills in Co-Scientist CLI" +date created: "2026-03-11" +tags: [seqera-ai, co-scientist, cli, skills] +--- + +Skills are reusable instruction sets that extend Co-Scientist with domain-specific workflows, prompts, and operating guidance. + +Co-Scientist supports two skill workflows: + +- **CLI skills**: `SKILL.md` files discovered from project and user skill directories and sent to the backend as session context +- **Agent integrations**: skill files installed by `seqera skill install` so other coding agents can invoke Co-Scientist as a subagent + +## Use skills in the CLI + +When you start `seqera ai`, the CLI discovers available skills automatically. Backend-provided skills are also exposed as slash commands in the `/` command palette and `/help`. 
+ +You can: + +- Type `/` to browse built-in commands and backend skills +- Run `/help` to see commands and skill descriptions in the terminal +- Add project-specific `SKILL.md` files so Co-Scientist starts each session with the right context + +## Built-in skills + +Your Co-Scientist deployment can expose built-in skills as slash commands. These appear in the `/` command palette and in `/help`. + +The CLI includes the following built-in skills by default: + +| Command | Description | +|---------|-------------| +| `/nextflow-config` | Generate and explain Nextflow configuration files | +| `/nextflow-schema` | Generate `nextflow_schema.json` and sample sheet schema files | +| `/debug-local-run` | Debug a local Nextflow pipeline run using `.nextflow.log`, work directories, and related artifacts | +| `/debug-last-run-on-seqera` | Debug the last pipeline run on Seqera Platform | +| `/convert-jupyter-notebook` | Convert Jupyter notebooks to Nextflow pipelines | +| `/convert-python-script` | Convert Python scripts, including standalone scripts and Snakemake-style logic, to Nextflow | +| `/convert-r-script` | Convert R scripts to Nextflow pipelines | +| `/fix-strict-syntax` | Fix Nextflow strict syntax errors and help migrate pipelines to the v2 parser | +| `/nf-aggregate` | Aggregate metrics from Nextflow runs on Seqera Platform using the `nf-aggregate` pipeline | +| `/nf-data-lineage` | Explore Nextflow data lineage to trace which inputs and processes produced a result | +| `/nf-pipeline-structure` | Analyze a local Nextflow pipeline structure, including processes, workflows, modules, and channel flow | +| `/nf-run-history` | Analyze local Nextflow run history and summarize recent activity, progress, and recurring issues | +| `/nf-schema-migration` | Migrate Nextflow pipelines from `nf-validation` to `nf-schema` v2 | +| `/seqera-mcp` | Access Seqera Platform through MCP tools for structured, validated operations | +| `/seqera-platform-api` | Query and manipulate Seqera 
Platform resources directly through the REST API | +| `/seqerakit` | Write `seqerakit` YAML configuration for automating Seqera Platform setup | +| `/simplify` | Review changed code for reuse, quality, and efficiency, then clean up issues found | + +:::note +The exact built-in skills available in your environment may vary by deployment and release. Use `/help` or type `/` in the CLI to see the current list. +::: + +## Skill format + +Each skill lives in its own directory and includes a `SKILL.md` file with YAML frontmatter: + +```text +my-skill/ + SKILL.md + references/ +``` + +```markdown +--- +name: my-skill +description: Short description of what this skill does +--- + +Detailed instructions, examples, and guidelines. +``` + +`name` and `description` are required. Skills missing either field are skipped. + +## Discovery directories + +Co-Scientist searches these directories in order. The first directory to register a skill name takes precedence, and later skills with the same name are ignored. + +| Priority | Path | Scope | +|----------|------|-------| +| 1 | `/.agents/skills/` | project | +| 2 | `/.seqera/skills/` | project | +| 3 | `~/.agents/skills/` | user | +| 4 | `~/.seqera/skills/` | user | +| 5 | `~/.config/agents/skills/` | user | +| 6 | `~/.config/seqera/skills/` | user | + +Project skills take priority over user skills, so you can override a global skill with a repository-specific version. + +### Cross-agent compatibility + +`.agents/skills/` follows the [Agent Skills](https://agentskills.io) convention, which makes skills portable across coding agents. `.seqera/skills/` is Seqera-specific. 
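As a concrete sketch of the layout and discovery rules above, a project-scoped skill can be created by hand. The skill name, description, and instructions below are illustrative placeholders:

```bash
# Create a hypothetical project skill in the Seqera-specific project directory
mkdir -p .seqera/skills/qc-summary
cat > .seqera/skills/qc-summary/SKILL.md <<'EOF'
---
name: qc-summary
description: Summarize QC metrics for this project
---
When asked about QC, read the MultiQC report under results/ and
summarize the key per-sample metrics.
EOF
```

Because project directories are searched before user directories, a `qc-summary` skill here overrides any user-level skill with the same name. Restart `seqera ai` so the session picks it up.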
+ +## Install skills into Co-Scientist + +You can add skills by creating the directory structure manually or by installing them from the [Agent Skills](https://agentskills.io) ecosystem: + +```bash +npx skills add https://github.com/vercel-labs/agent-skills --skill vercel-react-best-practices +``` + +After adding a skill, restart `seqera ai` so the new skill is loaded into the session. + +## Install Co-Scientist into coding agents + +Use `seqera skill install` to install Co-Scientist as a skill or instruction file for another agent: + +```bash +seqera skill install +``` + +Common installation flows: + +```bash +seqera skill install --local +seqera skill install --global +seqera skill install --detect +``` + +Supported agents include: + +| Agent | Format | +| --- | --- | +| [Claude Code](https://claude.ai/code) | `.claude/skills/` | +| [Codex](https://openai.com/codex) | `AGENTS.md` | +| [Cursor](https://www.cursor.com/) | `.cursor/rules/` | +| [GitHub Copilot](https://github.com/features/copilot) | `.github/copilot-instructions.md` | +| [OpenCode](https://opencode.ai/) | `.opencode/` | +| [Pi](https://github.com/badlogic/pi-mono) | `.pi/` | +| [Windsurf](https://windsurf.com/) | `.windsurf/rules/` | + +Verify installed agent integrations with: + +```bash +seqera skill check +``` + +Update outdated installations automatically: + +```bash +seqera skill check --update +``` + +## Payload limits + +To keep session payloads small, Co-Scientist caps discovered skill context at **5 KB**. The total session payload cap is **20 KB**. 
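How the CLI handles an over-cap skill is not specified here, so it is worth keeping `SKILL.md` files small. Assuming the 5 KB cap applies to the raw file size (5120 bytes), a quick local check might look like the following sketch; the helper name is illustrative:

```bash
# Print any SKILL.md larger than 5 KB in the given skill directories
check_skill_sizes() {
  for dir in "$@"; do
    [ -d "$dir" ] || continue
    find "$dir" -name 'SKILL.md' -size +5k
  done
}

check_skill_sizes .agents/skills .seqera/skills
```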
+ +## Learn more + +- [Modes](./modes.md): Work in build mode, plan mode, and goal mode +- [Working with Claude Code](./skill-claude-code.md): Install Co-Scientist as a skill for Claude Code +- [Working with Codex](./skill-codex.md): Install Co-Scientist as a skill for Codex +- [Working with GitHub Copilot](./skill-github-copilot.md): Install Co-Scientist as a skill for GitHub Copilot +- [Working with other coding agents](./skill-other-agents.md): Install Co-Scientist for other coding agents +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments diff --git a/platform-enterprise_docs/seqera-ai/usage-and-cost.md b/platform-enterprise_docs/seqera-ai/usage-and-cost.md new file mode 100644 index 000000000..8d03d0f40 --- /dev/null +++ b/platform-enterprise_docs/seqera-ai/usage-and-cost.md @@ -0,0 +1,29 @@ +--- +title: "Usage and cost" +description: "Understand Co-Scientist usage and inference costs in Seqera Platform Enterprise" +date created: "2026-05-04" +tags: [seqera-ai, co-scientist, enterprise, ai] +--- + +Co-Scientist in Seqera Platform Enterprise runs in your self-hosted environment and uses AWS Bedrock for inference. + +Enterprise deployments do not use Seqera Cloud credit balances or the Cloud credit request flow. Instead, your organization manages inference access, limits, and costs through AWS Bedrock. + +## What users see + +In Enterprise deployments, Co-Scientist does not enforce Seqera Cloud credit balances. If your session is blocked because of usage limits, contact your Seqera Platform administrator. The administrator can verify the agent backend configuration and AWS Bedrock account. + +## What administrators manage + +Administrators should manage: + +- AWS Bedrock model access, inference profiles, quotas, and IAM roles. +- Any organization-specific policies for Co-Scientist availability. + +For deployment configuration, see [Seqera AI](../enterprise/install-seqera-ai.md). 
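As a starting point for the Bedrock access described above, a minimal IAM policy for inference might look like the following sketch. The statement ID and wildcard resource are illustrative; your deployment may need to scope the resource to specific model or inference-profile ARNs and grant additional actions.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CoScientistBedrockInference",
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/*"
    }
  ]
}
```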
+ +## Learn more + +- [Co-Scientist in the Seqera CLI](./index.md): Co-Scientist overview +- [Authentication](./authentication.md): Log in, log out, and session management +- [Use cases](./use-cases.md): Co-Scientist CLI use cases diff --git a/platform-enterprise_docs/seqera-ai/use-cases.md b/platform-enterprise_docs/seqera-ai/use-cases.md index 625beec0d..a939e6080 100644 --- a/platform-enterprise_docs/seqera-ai/use-cases.md +++ b/platform-enterprise_docs/seqera-ai/use-cases.md @@ -1,26 +1,19 @@ --- title: "Use cases" -description: "Learn how to use Seqera AI CLI for bioinformatics workflows, pipeline development, and data management" -date: "2025-12-15" -tags: [seqera-ai, cli, ai, use cases] +description: "Learn how to use Co-Scientist CLI for bioinformatics workflows, pipeline development, and data management" +date created: "2026-03-11" +tags: [seqera-ai, co-scientist, cli, ai, use cases] --- -:::caution Seqera AI CLI is in beta -Seqera AI CLI is currently in beta. Features and commands may change as we continue to improve the product. -::: - -:::note -Seqera Cloud users receive $20 in free credits to get started with Seqera AI. [Contact us](https://seqera.io/platform/seqera-ai/request-credits/) for additional credits. -::: - -Seqera AI is an intelligent command-line assistant that helps you build, run, and manage bioinformatics workflows. The following sections describe several common use cases. +Co-Scientist is an intelligent command-line assistant that helps you build, run, and manage bioinformatics workflows. The following sections describe several common use cases. ## Work with Nextflow -Seqera AI helps you develop, debug, and understand Nextflow pipelines with AI-powered analysis and code generation. +Co-Scientist helps you develop, debug, and understand Nextflow pipelines with AI-powered analysis and code generation. - -![Use Seqera AI CLI to debug Nextflow pipeline scripts](./_images/pipeline-debug.gif) +
**Working with Nextflow** @@ -35,12 +28,20 @@ Seqera AI helps you develop, debug, and understand Nextflow pipelines with AI-po > What processes are defined in this pipeline? ``` -**Generate a `nextflow.config` file**: +``` +> /nf-pipeline-structure +``` + +**Use `/nextflow-config` to generate and explain Nextflow configuration files**: ``` -> /config +> /nextflow-config ``` +
+ **Debug your pipeline**: ``` @@ -51,12 +52,27 @@ Seqera AI helps you develop, debug, and understand Nextflow pipelines with AI-po > Why is my pipeline failing? ``` -**Generate a schema (`nextflow_schema.json`) file**: +**Review local execution history**: + +``` +> /nf-run-history +``` + +**Trace output provenance with data lineage**: + +``` +> /nf-data-lineage +``` + +**Use `/nextflow-schema` to generate `nextflow_schema.json` and sample sheet schema files**: ``` -> /schema +> /nextflow-schema ``` +
**Convert scripts to Nextflow**: @@ -64,14 +80,66 @@ Seqera AI helps you develop, debug, and understand Nextflow pipelines with AI-po > /convert-python-script ``` +
+ +**Fix strict syntax issues**: + +``` +> /fix-strict-syntax +``` + +**Migrate old schema definitions**: + +``` +> /nf-schema-migration +``` + +
## Build containers with Wave -Seqera AI can create containerized environments using Wave, without requiring you to write Dockerfiles. +Co-Scientist can create containerized environments using Wave, without the need to write Dockerfiles. - -![Use Seqera AI CLI to build containers with Wave](./_images/building-wave-container.gif) +
**Building containers with Wave** @@ -131,11 +199,85 @@ seqera ai -s seqera ai --approval-mode full ``` +**Switch between build mode and plan mode**: + +- Press `Shift+Tab` in the composer +- Check the current mode in the composer footer +- Use `/status` if you want a full status readout + +**Inspect available built-in commands and skills**: + +``` +/help +``` + +
+ +## Plan work before you edit + +Use **plan mode** when you want analysis and a concrete implementation plan before making changes. + +
+**Planning in plan mode** + +**Compare implementation strategies**: + +``` +> Compare whether I should add FastQC or fastp as the first QC step in this RNA-seq pipeline, including the workflow changes each option would require +``` + +**Ask for a step-by-step rollout plan**: + +``` +> Plan the work to add GPU support to this pipeline +``` + +**Review a codebase without modifying it**: + +``` +> Inspect this repository and outline the changes needed for Seqera Platform deployment +``` + +:::note +Plan mode is designed for read-only analysis. To execute commands, edit files, or write code, switch back to build mode with `Shift+Tab`. +::: + +
+ +## Use goal mode for longer tasks + +Use **goal mode** when you want Co-Scientist to keep working toward a task over multiple model attempts. + +
+**Working in goal mode** + +**Start a persistent task**: + +``` +/goal migrate this pipeline to DSL2 and add nf-tests +``` + +**Check the active goal**: + +``` +/goal +``` + +**Disable goal mode**: + +``` +/goal off +``` + +:::note +Goal mode automatically switches command approval to `full` so the assistant can keep making progress. See [Command approval](./command-approval.md) for details. +::: +
## Exit the assistant -End your Seqera AI session when done. +End your Co-Scientist session when done.
**Exit the assistant** @@ -153,7 +295,7 @@ Your conversation history is preserved. You can resume a session later with `seq ## Use slash commands -Seqera AI includes built-in slash commands for common workflows. +Co-Scientist includes built-in slash commands for common workflows.
**TUI commands** @@ -170,7 +312,7 @@ These commands are handled locally by the CLI: | `/org` | Show current organization | | `/lsp` | Show LSP server status | | `/status` | Show system status | -| `/credits` | Show credit balance and usage | +| `/credits` | Show Enterprise usage ownership and administrator contact guidance | | `/approval` | Show or set approval mode | | `/feedback` | Open feedback form | | `/help-community` | Open community help | @@ -185,23 +327,65 @@ These commands are sent to the AI backend for processing: | Command | Description | |---------|-------------| -| `/config` | Generate a nextflow.config file | -| `/schema` | Generate a Nextflow schema | +| `/help` | Show available commands and skills | +| `/status` | Show current mode, LSP, organization, and session status | +| `/sessions` | Browse and switch sessions | +| `/goal` | Set, inspect, or disable a persistent goal | +| `/credits` | Show Enterprise usage ownership and administrator contact guidance | +| `/update` | Check for CLI updates | +| `/nextflow-config` | Generate and explain Nextflow configuration files | +| `/nextflow-schema` | Generate `nextflow_schema.json` and sample sheet schema files | | `/debug` | Run nextflow lint and preview | -| `/debug-last-run` | Debug the last local run | +| `/debug-local-run` | Debug a local Nextflow pipeline run | | `/debug-last-run-on-seqera` | Debug the last Platform run | | `/migrate-from-wdl` | Convert WDL to Nextflow | -| `/migrate-from-snakemake` | Convert Snakemake to Nextflow | | `/convert-python-script` | Convert Python script to Nextflow | | `/convert-r-script` | Convert R script to Nextflow | | `/convert-jupyter-notebook` | Convert Jupyter notebook to Nextflow | | `/write-nf-test` | Write nf-tests for your pipeline | +Skills exposed by your Co-Scientist deployment also appear in the `/` command palette and in `/help`. + +
+ +## Work with skills + +Co-Scientist can use reusable skills from your current project, your user profile, and the backend skill catalog exposed by your deployment. + +
+**Using skills** + +**Open the command palette**: + +- Type `/` to browse built-in commands and backend skills +- Run `/help` to see the same commands in a text list + +**Use a built-in backend skill**: + +Examples include: + +- `/fix-strict-syntax` +- `/nf-pipeline-structure` +- `/nf-run-history` +- `/nf-data-lineage` +- `/seqera-platform-api` +- `/seqerakit` + +**Create a project skill**: + +Create a `SKILL.md` file in `.agents/skills/` or `.seqera/skills/` and restart `seqera ai`. + +**Install Co-Scientist into coding agents**: + +```bash +seqera skill install +``` +
## Work with data -Seqera AI helps you manage data through Platform data links and access reference datasets. +Co-Scientist helps you manage data through Platform data links and access reference datasets.
**Working with data** @@ -240,7 +424,7 @@ Seqera AI helps you manage data through Platform data links and access reference ## Work with local files -Seqera AI can interact with files in your current working directory. +Co-Scientist can interact with files in your current working directory.
**Work with local files** @@ -270,7 +454,7 @@ Local file operations are controlled by [approval modes](./command-approval.md#a ## Work with nf-core modules -Seqera AI provides access to over 1,000 nf-core modules for common bioinformatics tasks. +Co-Scientist provides access to over 1,000 nf-core modules for common bioinformatics tasks.
**Working with nf-core modules** @@ -308,7 +492,7 @@ The assistant can generate the exact Nextflow command with proper parameters for Use Seqera Platform capabilities to run and manage workflows at scale with AI assistance. -![Use Seqera AI CLI to debug Platform run errors](./_images/sp-run-debug.gif) +![Use Co-Scientist CLI to debug Platform run errors](./_images/sp-run-debug.gif)
**Working with Seqera Platform** @@ -339,7 +523,7 @@ Use Seqera Platform capabilities to run and manage workflows at scale with AI as ## Headless mode -Run Seqera AI in headless mode for scripting and automation. Output is sent to stdout instead of the interactive TUI. +Run Co-Scientist in headless mode for scripting and automation. Output is sent to stdout instead of the interactive TUI.
**Headless mode** @@ -370,7 +554,7 @@ Headless mode is also auto-detected when stdout is piped (e.g., `seqera ai "quer ## Session management -Seqera AI preserves your conversation history across sessions. You can resume previous sessions to continue your work. +Co-Scientist preserves your conversation history across sessions. You can resume previous sessions to continue your work.
**Session management** @@ -397,8 +581,11 @@ seqera ai -s

## Learn more

-- [Seqera AI CLI](index.md): Seqera AI CLI overview +- [Co-Scientist CLI](index.md): Co-Scientist CLI overview - [Installation](./installation.mdx): Detailed installation instructions - [Authentication](./authentication.md): Log in, log out, and session management +- [Skills](./skills.md): Discover, create, and install skills +- [Modes](./modes.md): Work in build mode, plan mode, and goal mode - [Command approval](./command-approval.md): Control which commands run automatically +- [Usage and cost](./usage-and-cost.md): Co-Scientist usage in Enterprise deployments - [Troubleshooting](../troubleshooting_and_faqs/seqera-ai.md): Troubleshoot common errors