13 changes: 11 additions & 2 deletions platform-cloud/docs/compute-envs/gke.md
@@ -125,9 +125,18 @@ To use [Fusion v2](https://docs.seqera.io/fusion) in your Seqera GKE compute env
- **Enable GKE Metadata Server** in the node group **Security** settings.
1. Allow the IAM service account access to your Google storage bucket:
```shell
gcloud storage buckets add-iam-policy-binding gs://<YOUR-BUCKET> --role roles/storage.objectAdmin --member serviceAccount:<IAM-SERVICE-ACCOUNT>@<GOOGLE-CLOUD-PROJECT>.iam.gserviceaccount.com
gcloud storage buckets add-iam-policy-binding gs://<YOUR-BUCKET> --role roles/<YOUR-ROLE> --member serviceAccount:<IAM-SERVICE-ACCOUNT>@<GOOGLE-CLOUD-PROJECT>.iam.gserviceaccount.com
```
The role must have at least `storage.objects.create`, `storage.objects.get`, and `storage.objects.list` permissions.
- Grant on the **work-dir bucket**:
- `roles/storage.objectUser` (preferred; legacy: `roles/storage.objectAdmin`)

- Grant on **every other bucket the pipeline reads from**:
- `roles/storage.objectViewer` — read objects
- `roles/storage.bucketViewer` — read bucket metadata (required for mount-time bucket inspection)

- Grant on the **publishDir bucket**, if different from the work-dir bucket:
- `roles/storage.objectUser`
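As a concrete sketch of the grants above (the bucket names `my-workdir-bucket` and `my-input-bucket` are illustrative placeholders; substitute your own):

```shell
SA="<IAM-SERVICE-ACCOUNT>@<GOOGLE-CLOUD-PROJECT>.iam.gserviceaccount.com"

# Work-dir bucket: read/write objects.
gcloud storage buckets add-iam-policy-binding gs://my-workdir-bucket \
  --role roles/storage.objectUser --member "serviceAccount:${SA}"

# Read-only input bucket: objects plus bucket metadata.
for role in roles/storage.objectViewer roles/storage.bucketViewer; do
  gcloud storage buckets add-iam-policy-binding gs://my-input-bucket \
    --role "$role" --member "serviceAccount:${SA}"
done
```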

1. Allow the Kubernetes service account to impersonate the IAM service account:
```shell
gcloud iam service-accounts add-iam-policy-binding <IAM-SERVICE-ACCOUNT>@<GOOGLE-CLOUD-PROJECT>.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:<GOOGLE-CLOUD-PROJECT>.svc.id.goog[<GKE-NAMESPACE>/<GKE-SERVICE-ACCOUNT>]"
78 changes: 74 additions & 4 deletions platform-cloud/docs/compute-envs/google-cloud-batch.md
@@ -55,17 +55,87 @@ Seqera requires a service account with appropriate permissions to interact with
By default, Google Cloud Batch uses the default Compute Engine service account to submit jobs. This service account is granted the Editor (`roles/editor`) role. While it has the permissions Seqera needs, this role is not recommended for production environments. Instead, control job access with a custom service account that grants only the permissions Seqera requires to execute Batch jobs.
:::

#### Service account permissions
#### Seqera Platform service account permissions (mandatory)

Create a GCP service account for Seqera Platform to use when executing operations against your GCP project.

[Create a custom service account][create-sa] with at least the following permissions:
##### Core permissions

- Batch Agent Reporter (`roles/batch.agentReporter`) on the project
- Batch Job Editor (`roles/batch.jobsEditor`) on the project
- Logs Writer (`roles/logging.logWriter`) on the project (to let jobs generate logs in Cloud Logging)
- Logs Viewer (`roles/logging.logViewer`) on the project (to view and retrieve logs from Cloud Logging)
- Service Account User (`roles/iam.serviceAccountUser`)

##### Storage permissions

Grant storage roles on each bucket that Seqera Platform must access, covering both pipeline operations and Studios. As a starting point:

- `roles/storage.objectUser` for read/write access.
- `roles/storage.objectViewer` for read-only access.

**Shortcut: project-level Storage Admin**

Granting `roles/storage.admin` at the **project** level covers everything above and significantly simplifies setup. The tradeoff is a looser security posture: the service account can then access any bucket in the project, including buckets unrelated to the pipeline. Confirm that this is acceptable under your organization's security policies before using it.


#### Custom Nextflow service account permissions (optional)

Create a GCP service account for the Nextflow pipeline to use rather than the default Compute Engine service account.

##### Core permissions

- Batch Agent Reporter (`roles/batch.agentReporter`) on the project
- Batch Job Editor (`roles/batch.jobsEditor`) on the project
- Logs Writer (`roles/logging.logWriter`) on the project (to let jobs generate logs in Cloud Logging)
- Logs Viewer (`roles/logging.logViewer`) on the project (to view and retrieve logs from Cloud Logging)
- Service Account User (`roles/iam.serviceAccountUser`)

If your Google Cloud project does not require access restrictions on any of its Cloud Storage buckets, you can grant project Storage Admin (`roles/storage.admin`) permissions to your service account to simplify setup. To grant access only to specific buckets, add the service account as a principal on each bucket individually. See [Cloud Storage bucket](#cloud-storage-bucket) below.
##### Storage permissions

The service account used by your Nextflow pipeline requires some combination of the following permissions, depending on how it interacts with object storage:

| Permission | Allows | GCSFuse | Fusion |
|------------|--------| ------- | ------ |
| `storage.buckets.get` | Resolving bucket metadata at mount | N | Y |
| `storage.objects.list` | Listing work directory contents | Y | Y |
| `storage.objects.get` | Reading inputs and intermediates | Y | Y |
| `storage.objects.create` | Writing outputs (work-dir only) | Y | Y |
| `storage.objects.delete` | Cleanup of work-dir intermediate files & publishDir overwrites | Y | Y |
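If your organization prefers a single custom role over the predefined roles, the permissions in the table above can be bundled into one. A sketch follows; the role ID `nextflowStorage` and title are arbitrary example names:

```shell
# Create a custom role bundling the storage permissions listed above.
gcloud iam roles create nextflowStorage \
  --project=<GOOGLE-CLOUD-PROJECT> \
  --title="Nextflow pipeline storage" \
  --permissions=storage.buckets.get,storage.objects.list,storage.objects.get,storage.objects.create,storage.objects.delete
```

The custom role can then be granted per bucket in place of the predefined `roles/storage.*` roles described below.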


**GCSFuse-based pipelines**

- Grant on the **work-dir bucket**:
- `roles/storage.objectUser` (preferred; legacy: `roles/storage.objectAdmin`)

- Grant on **every other bucket the pipeline reads from**:
- `roles/storage.objectViewer` — read objects

- Grant on the **publishDir bucket**, if different from the work-dir bucket:
- `roles/storage.objectUser`


**Fusion-based pipelines**

- Grant on the **work-dir bucket**:
- `roles/storage.objectUser` (preferred; legacy: `roles/storage.objectAdmin`)

- Grant on **every other bucket the pipeline reads from**:
- `roles/storage.objectViewer` — read objects
- `roles/storage.bucketViewer` — read bucket metadata (required for mount-time bucket inspection)

- Grant on the **publishDir bucket**, if different from the work-dir bucket:
- `roles/storage.objectUser`
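To check which of these roles a service account already holds on a bucket, you can inspect the bucket's IAM policy (the bucket name is an illustrative placeholder):

```shell
# Print all role bindings on the bucket; look for the pipeline's
# service account under the relevant roles/storage.* entries.
gcloud storage buckets get-iam-policy gs://my-workdir-bucket --format json
```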


**Shortcut: project-level Storage Admin**

Granting `roles/storage.admin` at the **project** level covers everything above and significantly simplifies setup. The tradeoff is a looser security posture: the service account can then access any bucket in the project, including buckets unrelated to the pipeline. Confirm that this is acceptable under your organization's security policies before using it.

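If you adopt the project-level shortcut, a single binding replaces the per-bucket grants. A sketch, where `<NEXTFLOW-SERVICE-ACCOUNT>` is a placeholder for your service account name:

```shell
# Project-wide grant: the service account can then manage objects
# and buckets across the entire project.
gcloud projects add-iam-policy-binding <GOOGLE-CLOUD-PROJECT> \
  --role roles/storage.admin \
  --member "serviceAccount:<NEXTFLOW-SERVICE-ACCOUNT>@<GOOGLE-CLOUD-PROJECT>.iam.gserviceaccount.com"
```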

#### User permissions

@@ -118,7 +188,7 @@ Google Cloud Storage is a type of **object storage**. To access files and store
1. After the bucket is created, you are redirected to the **Bucket details** page.
2. Select **Permissions**, then **Grant access** under **View by principals**.
3. Copy the email address of your service account into **New principals**.
4. Select the **Storage Admin** role, then select **Save**.
4. Select the [required role](#storage-permissions), then select **Save**.

:::tip
You've created a project, enabled the necessary Google APIs, created a bucket, and created a service account JSON key file with the required credentials. You now have what you need to set up a new compute environment in Seqera.
26 changes: 25 additions & 1 deletion platform-cloud/docs/compute-envs/google-cloud.md
@@ -89,7 +89,31 @@ To create and launch pipelines or Studio sessions with this compute environment
- Service Account User (`roles/iam.serviceAccountUser`)
- Service Usage Consumer (`roles/serviceusage.serviceUsageConsumer`)

If your Google Cloud project does not require access restrictions on any of its Cloud Storage buckets, you can grant project Storage Admin (`roles/storage.admin`) permissions to your service account to simplify setup. To grant access only to specific buckets, add the service account as a principal [on each bucket individually](https://docs.seqera.io/platform-cloud/compute-envs/google-cloud-batch#cloud-storage-bucket). For each Google Cloud compute environment created in the Seqera platform, a separate service account is created with the necessary permissions to launch pipelines/studios.
#### Storage permissions

The service account created by Seqera Platform for the Google Cloud compute environment is provisioned with the following roles:

- `roles/storage.objectAdmin` (_on work-dir bucket_)
- `roles/storage.bucketViewer`
- `roles/storage.objectViewer`

If your workflow uses additional GCS buckets beyond the work-dir, grant additional roles as follows:

- Grant on **every other bucket the pipeline reads from**:
- `roles/storage.objectViewer` — read objects
- `roles/storage.bucketViewer` — read bucket metadata (required for mount-time bucket inspection)

- Grant on the **publishDir bucket**, if different from the work-dir bucket:
- `roles/storage.objectUser`
- `roles/storage.bucketViewer`
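For example, granting the read roles on an additional input bucket might look like the following, where the bucket name and service-account address are placeholders for your own values:

```shell
SA="<SEQERA-CREATED-SERVICE-ACCOUNT>@<GOOGLE-CLOUD-PROJECT>.iam.gserviceaccount.com"

# Additional read-only bucket: objects plus bucket metadata.
for role in roles/storage.objectViewer roles/storage.bucketViewer; do
  gcloud storage buckets add-iam-policy-binding gs://my-extra-input-bucket \
    --role "$role" --member "serviceAccount:${SA}"
done
```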

**Shortcut: project-level Storage Admin**

Granting `roles/storage.admin` at the **project** level covers everything above and significantly simplifies setup. The tradeoff is a looser security posture: the service account can then access any bucket in the project, including buckets unrelated to the pipeline. Confirm that this is acceptable under your organization's security policies before using it.


## Advanced options

13 changes: 11 additions & 2 deletions platform-enterprise_docs/compute-envs/gke.md
@@ -125,9 +125,18 @@ To use [Fusion v2](https://docs.seqera.io/fusion) in your Seqera GKE compute env
- **Enable GKE Metadata Server** in the node group **Security** settings.
1. Allow the IAM service account access to your Google storage bucket:
```shell
gcloud storage buckets add-iam-policy-binding gs://<YOUR-BUCKET> --role roles/storage.objectAdmin --member serviceAccount:<IAM-SERVICE-ACCOUNT>@<GOOGLE-CLOUD-PROJECT>.iam.gserviceaccount.com
gcloud storage buckets add-iam-policy-binding gs://<YOUR-BUCKET> --role roles/<YOUR-ROLE> --member serviceAccount:<IAM-SERVICE-ACCOUNT>@<GOOGLE-CLOUD-PROJECT>.iam.gserviceaccount.com
```
The role must have at least `storage.objects.create`, `storage.objects.get`, and `storage.objects.list` permissions.
- Grant on the **work-dir bucket**:
- `roles/storage.objectUser` (preferred; legacy: `roles/storage.objectAdmin`)

- Grant on **every other bucket the pipeline reads from**:
- `roles/storage.objectViewer` — read objects
- `roles/storage.bucketViewer` — read bucket metadata (required for mount-time bucket inspection)

- Grant on the **publishDir bucket**, if different from the work-dir bucket:
- `roles/storage.objectUser`
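The grants above can be sketched with `gcloud` (the bucket names `my-workdir-bucket` and `my-input-bucket` are illustrative placeholders; substitute your own):

```shell
SA="<IAM-SERVICE-ACCOUNT>@<GOOGLE-CLOUD-PROJECT>.iam.gserviceaccount.com"

# Work-dir bucket: read/write objects.
gcloud storage buckets add-iam-policy-binding gs://my-workdir-bucket \
  --role roles/storage.objectUser --member "serviceAccount:${SA}"

# Read-only input bucket: objects plus bucket metadata.
for role in roles/storage.objectViewer roles/storage.bucketViewer; do
  gcloud storage buckets add-iam-policy-binding gs://my-input-bucket \
    --role "$role" --member "serviceAccount:${SA}"
done
```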

1. Allow the Kubernetes service account to impersonate the IAM service account:
```shell
gcloud iam service-accounts add-iam-policy-binding <IAM-SERVICE-ACCOUNT>@<GOOGLE-CLOUD-PROJECT>.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:<GOOGLE-CLOUD-PROJECT>.svc.id.goog[<GKE-NAMESPACE>/<GKE-SERVICE-ACCOUNT>]"
79 changes: 75 additions & 4 deletions platform-enterprise_docs/compute-envs/google-cloud-batch.md
@@ -54,16 +54,87 @@ Seqera requires a service account with appropriate permissions to interact with
By default, Google Cloud Batch uses the default Compute Engine service account to submit jobs. This service account is granted the Editor (`roles/editor`) role. While it has the permissions Seqera needs, this role is not recommended for production environments. Instead, control job access with a custom service account that grants only the permissions Seqera requires to execute Batch jobs.
:::

#### Service account permissions
#### Seqera Platform service account permissions (mandatory)

Create a GCP service account for Seqera Platform to use when executing operations against your GCP project.

[Create a custom service account][create-sa] with at least the following permissions:
##### Core permissions

- Batch Agent Reporter (`roles/batch.agentReporter`) on the project
- Batch Job Editor (`roles/batch.jobsEditor`) on the project
- Logs Writer (`roles/logging.logWriter`) on the project (to let jobs generate logs in Cloud Logging)
- Logs Viewer (`roles/logging.logViewer`) on the project (to view and retrieve logs from Cloud Logging)
- Service Account User (`roles/iam.serviceAccountUser`)

##### Storage permissions

Grant storage roles on each bucket that Seqera Platform must access, covering both pipeline operations and Studios. As a starting point:

- `roles/storage.objectUser` for read/write access.
- `roles/storage.objectViewer` for read-only access.

**Shortcut: project-level Storage Admin**

Granting `roles/storage.admin` at the **project** level covers everything above and significantly simplifies setup. The tradeoff is a looser security posture: the service account can then access any bucket in the project, including buckets unrelated to the pipeline. Confirm that this is acceptable under your organization's security policies before using it.


#### Custom Nextflow service account permissions (optional)

Create a GCP service account for the Nextflow pipeline to use rather than the default Compute Engine service account.

##### Core permissions

- Batch Agent Reporter (`roles/batch.agentReporter`) on the project
- Batch Job Editor (`roles/batch.jobsEditor`) on the project
- Logs Writer (`roles/logging.logWriter`) on the project (to let jobs generate logs in Cloud Logging)
- Logs Viewer (`roles/logging.logViewer`) on the project (to view and retrieve logs from Cloud Logging)
- Service Account User (`roles/iam.serviceAccountUser`)

If your Google Cloud project does not require access restrictions on any of its Cloud Storage buckets, you can grant project Storage Admin (`roles/storage.admin`) permissions to your service account to simplify setup. To grant access only to specific buckets, add the service account as a principal on each bucket individually. See [Cloud Storage bucket](#cloud-storage-bucket) below.
##### Storage permissions

The service account used by your Nextflow pipeline requires some combination of the following permissions, depending on how it interacts with object storage:

| Permission | Allows | GCSFuse | Fusion |
|------------|--------| ------- | ------ |
| `storage.buckets.get` | Resolving bucket metadata at mount | N | Y |
| `storage.objects.list` | Listing work directory contents | Y | Y |
| `storage.objects.get` | Reading inputs and intermediates | Y | Y |
| `storage.objects.create` | Writing outputs (work-dir only) | Y | Y |
| `storage.objects.delete` | Cleanup of work-dir intermediate files & publishDir overwrites | Y | Y |
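If your organization prefers a single custom role over the predefined roles, the permissions in the table above can be bundled into one. A sketch follows; the role ID `nextflowStorage` and title are arbitrary example names:

```shell
# Create a custom role bundling the storage permissions listed above.
gcloud iam roles create nextflowStorage \
  --project=<GOOGLE-CLOUD-PROJECT> \
  --title="Nextflow pipeline storage" \
  --permissions=storage.buckets.get,storage.objects.list,storage.objects.get,storage.objects.create,storage.objects.delete
```

The custom role can then be granted per bucket in place of the predefined `roles/storage.*` roles described below.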


**GCSFuse-based pipelines**

- Grant on the **work-dir bucket**:
- `roles/storage.objectUser` (preferred; legacy: `roles/storage.objectAdmin`)

- Grant on **every other bucket the pipeline reads from**:
- `roles/storage.objectViewer` — read objects

- Grant on the **publishDir bucket**, if different from the work-dir bucket:
- `roles/storage.objectUser`


**Fusion-based pipelines**

- Grant on the **work-dir bucket**:
- `roles/storage.objectUser` (preferred; legacy: `roles/storage.objectAdmin`)

- Grant on **every other bucket the pipeline reads from**:
- `roles/storage.objectViewer` — read objects
- `roles/storage.bucketViewer` — read bucket metadata (required for mount-time bucket inspection)

- Grant on the **publishDir bucket**, if different from the work-dir bucket:
- `roles/storage.objectUser`
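The Fusion grants above can be sketched with `gcloud` (bucket names are illustrative placeholders; substitute your own):

```shell
SA="<NEXTFLOW-SERVICE-ACCOUNT>@<GOOGLE-CLOUD-PROJECT>.iam.gserviceaccount.com"

# Work-dir bucket: read/write objects.
gcloud storage buckets add-iam-policy-binding gs://my-workdir-bucket \
  --role roles/storage.objectUser --member "serviceAccount:${SA}"

# Read-only input bucket: objects plus the bucket metadata
# Fusion needs for its mount-time inspection.
for role in roles/storage.objectViewer roles/storage.bucketViewer; do
  gcloud storage buckets add-iam-policy-binding gs://my-input-bucket \
    --role "$role" --member "serviceAccount:${SA}"
done
```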


**Shortcut: project-level Storage Admin**

Granting `roles/storage.admin` at the **project** level covers everything above and significantly simplifies setup. The tradeoff is a looser security posture: the service account can then access any bucket in the project, including buckets unrelated to the pipeline. Confirm that this is acceptable under your organization's security policies before using it.


#### User permissions

@@ -116,7 +187,7 @@ Google Cloud Storage is a type of **object storage**. To access files and store
1. After the bucket is created, you are redirected to the **Bucket details** page.
2. Select **Permissions**, then **Grant access** under **View by principals**.
3. Copy the email address of your service account into **New principals**.
4. Select the **Storage Admin** role, then select **Save**.
4. Select the [required role](#storage-permissions), then select **Save**.

:::tip
You've created a project, enabled the necessary Google APIs, created a bucket, and created a service account JSON key file with the required credentials. You now have what you need to set up a new compute environment in Seqera.
25 changes: 24 additions & 1 deletion platform-enterprise_docs/compute-envs/google-cloud.md
@@ -90,7 +90,30 @@ To create and launch pipelines or Studio sessions with this compute environment
- Service Account User (`roles/iam.serviceAccountUser`)
- Service Usage Consumer (`roles/serviceusage.serviceUsageConsumer`)

If your Google Cloud project does not require access restrictions on any of its Cloud Storage buckets, you can grant project Storage Admin (`roles/storage.admin`) permissions to your service account to simplify setup. To grant access only to specific buckets, add the service account as a principal [on each bucket individually](https://docs.seqera.io/platform-cloud/compute-envs/google-cloud-batch#cloud-storage-bucket). For each Google Cloud compute environment created in the Seqera platform, a separate service account is created with the necessary permissions to launch pipelines/studios.
#### Storage permissions

The service account created by Seqera Platform for the Google Cloud compute environment is provisioned with the following roles:

- `roles/storage.objectAdmin` (_on work-dir bucket_)
- `roles/storage.bucketViewer`
- `roles/storage.objectViewer`

If your workflow uses additional GCS buckets beyond the work-dir, grant additional roles as follows:

- Grant on **every other bucket the pipeline reads from**:
- `roles/storage.objectViewer` — read objects
- `roles/storage.bucketViewer` — read bucket metadata (required for mount-time bucket inspection)

- Grant on the **publishDir bucket**, if different from the work-dir bucket:
- `roles/storage.objectUser`
- `roles/storage.bucketViewer`
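For example, granting the read roles on an additional input bucket might look like the following, where the bucket name and service-account address are placeholders for your own values:

```shell
SA="<SEQERA-CREATED-SERVICE-ACCOUNT>@<GOOGLE-CLOUD-PROJECT>.iam.gserviceaccount.com"

# Additional read-only bucket: objects plus bucket metadata.
for role in roles/storage.objectViewer roles/storage.bucketViewer; do
  gcloud storage buckets add-iam-policy-binding gs://my-extra-input-bucket \
    --role "$role" --member "serviceAccount:${SA}"
done
```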

**Shortcut: project-level Storage Admin**

Granting `roles/storage.admin` at the **project** level covers everything above and significantly simplifies setup. The tradeoff is a looser security posture: the service account can then access any bucket in the project, including buckets unrelated to the pipeline. Confirm that this is acceptable under your organization's security policies before using it.

## Advanced options
