From 0b789ed53c348599cb7ce2d2f7b6ab7147d59ef7 Mon Sep 17 00:00:00 2001 From: Graham Wright Date: Tue, 21 Apr 2026 16:12:13 -0400 Subject: [PATCH 1/5] Updated GCP Batch (Cloud) permissions, working with Esha's notes. --- .../docs/compute-envs/google-cloud-batch.md | 49 ++++++++++++++++++- 1 file changed, 48 insertions(+), 1 deletion(-) diff --git a/platform-cloud/docs/compute-envs/google-cloud-batch.md b/platform-cloud/docs/compute-envs/google-cloud-batch.md index cebd39709..bec97b93d 100644 --- a/platform-cloud/docs/compute-envs/google-cloud-batch.md +++ b/platform-cloud/docs/compute-envs/google-cloud-batch.md @@ -58,13 +58,60 @@ By default, Google Cloud Batch uses the default Compute Engine service account t [Create a custom service account][create-sa] with at least the following permissions: +##### Core permissions - Batch Agent Reporter (`roles/batch.agentReporter`) on the project - Batch Job Editor (`roles/batch.jobsEditor`) on the project - Logs Writer (`roles/logging.logWriter`) on the project (to let jobs generate logs in Cloud Logging) - Logs Viewer (`roles/logging.viewer`) on the project (to view and retrieve logs from Cloud Logging) - Service Account User (`roles/iam.serviceAccountUser`) -If your Google Cloud project does not require access restrictions on any of its Cloud Storage buckets, you can grant project Storage Admin (`roles/storage.admin`) permissions to your service account to simplify setup. To grant access only to specific buckets, add the service account as a principal on each bucket individually. See [Cloud Storage bucket](#cloud-storage-bucket) below.
+##### Storage permissions +The service account used by your Nextflow pipeline requires some combination of the following permissions, depending on the method used to interact with object storage: + +| Permission | Allows | GCSFuse | Fusion | +|------------|--------| ------- | ------ | +| `storage.buckets.get` | Resolving bucket metadata at mount | N | Y | +| `storage.objects.list` | Listing work directory contents | Y | Y | +| `storage.objects.get` | Reading inputs and intermediates | Y | Y | +| `storage.objects.create` | Writing outputs (work-dir only) | Y | Y | +| `storage.objects.delete` | Cleanup of work-dir intermediate files & publishDir overwrites | Y | Y | + + +**GCSFuse-based pipelines** + +- Grant on the **work-dir bucket**: + - `roles/storage.objectUser` (preferred; legacy: `roles/storage.objectAdmin`) + +- Grant on **every other bucket the pipeline reads from**: + - `roles/storage.objectViewer` — read objects + +- Grant on the **publishDir bucket, if different from the work-dir bucket**: + - `roles/storage.objectUser` + - `roles/storage.bucketViewer` + + +**Fusion-based pipelines** + +- Grant on the **work-dir bucket**: + - `roles/storage.objectUser` (preferred; legacy: `roles/storage.objectAdmin`) + +- Grant on **every other bucket the pipeline reads from**: + - `roles/storage.objectViewer` — read objects + - `roles/storage.bucketViewer` — read bucket metadata (required for mount-time bucket inspection) + +- Grant on the **publishDir bucket, if different from the work-dir bucket**: + - `roles/storage.objectUser` + - `roles/storage.bucketViewer` + + +**Shortcut: project-level Storage Admin** + +Granting `roles/storage.admin` at the **project** level covers everything +above and significantly simplifies setup. The tradeoff is a looser security +posture — the service account can then touch any bucket in the project, +including buckets unrelated to the pipeline. Confirm this is acceptable +under your organization's security directives before using it.
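The per-bucket grants listed above can be sketched as `gcloud` invocations. Every name below (project, bucket, service account) is a hypothetical placeholder, not a value from this patch; the sketch composes and echoes the commands rather than running them.

```shell
# Sketch only: compose the bucket-level grants described above as gcloud
# commands. All bucket and service-account names are hypothetical.
SA="serviceAccount:nextflow-runner@example-project.iam.gserviceaccount.com"

grant() {
  # $1 = bucket name, $2 = role to bind on that bucket
  echo "gcloud storage buckets add-iam-policy-binding gs://$1 --member=$SA --role=$2"
}

# Work-dir bucket: read/write/delete on objects (GCSFuse and Fusion alike)
grant example-workdir roles/storage.objectUser
# Read-only input bucket; Fusion additionally needs bucket metadata access
grant example-inputs roles/storage.objectViewer
grant example-inputs roles/storage.bucketViewer
```

Echoing keeps the sketch side-effect free; drop the `echo` once the placeholders are replaced with real values.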
+ #### User permissions From 5c0d92ee8ad7d5905bbafac6aee7e1cac7a81251 Mon Sep 17 00:00:00 2001 From: Graham Wright Date: Wed, 22 Apr 2026 08:02:55 -0400 Subject: [PATCH 2/5] fix: Made GCP permissions more granular - Fixes https://github.com/seqeralabs/docs/issues/1338 - Made permission requirements more granular for GCS Fuse & Fusion runs, and deprioritized the suggestion to use the overly powerful 'roles/storage.admin' role. --- platform-cloud/docs/compute-envs/gke.md | 14 ++++- .../docs/compute-envs/google-cloud-batch.md | 2 +- .../docs/compute-envs/google-cloud.md | 26 +++++++++- platform-enterprise_docs/compute-envs/gke.md | 14 ++++- .../compute-envs/google-cloud-batch.md | 52 ++++++++++++++++++- .../compute-envs/google-cloud.md | 25 ++++++++- 6 files changed, 124 insertions(+), 9 deletions(-) diff --git a/platform-cloud/docs/compute-envs/gke.md b/platform-cloud/docs/compute-envs/gke.md index c4c2b805e..3b38ce0f9 100644 --- a/platform-cloud/docs/compute-envs/gke.md +++ b/platform-cloud/docs/compute-envs/gke.md @@ -125,9 +125,19 @@ To use [Fusion v2](https://docs.seqera.io/fusion) in your Seqera GKE compute env - **Enable GKE Metadata Server** in the node group **Security** settings. 1. Allow the IAM service account access to your Google storage bucket: ```shell - gcloud storage buckets add-iam-policy-binding gs://<bucket-name> --role roles/storage.objectAdmin --member serviceAccount:<sa-name>@<project-id>.iam.gserviceaccount.com + gcloud storage buckets add-iam-policy-binding gs://<bucket-name> --role roles/<role> --member serviceAccount:<sa-name>@<project-id>.iam.gserviceaccount.com ``` - The role must have at least `storage.objects.create`, `storage.objects.get`, and `storage.objects.list` permissions.
+ - Grant on the **work-dir bucket**: + - `roles/storage.objectUser` (preferred; legacy: `roles/storage.objectAdmin`) + + - Grant on **every other bucket the pipeline reads from**: + - `roles/storage.objectViewer` — read objects + - `roles/storage.bucketViewer` — read bucket metadata (required for mount-time bucket inspection) + + - Grant on the **publishDir bucket, if different from the work-dir bucket**: + - `roles/storage.objectUser` + - `roles/storage.bucketViewer` + 1. Allow the Kubernetes service account to impersonate the IAM service account: ```shell gcloud iam service-accounts add-iam-policy-binding <sa-name>@<project-id>.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:<project-id>.svc.id.goog[<k8s-namespace>/<k8s-sa-name>]" ``` diff --git a/platform-cloud/docs/compute-envs/google-cloud-batch.md b/platform-cloud/docs/compute-envs/google-cloud-batch.md index bec97b93d..09b696d04 100644 --- a/platform-cloud/docs/compute-envs/google-cloud-batch.md +++ b/platform-cloud/docs/compute-envs/google-cloud-batch.md @@ -164,7 +164,7 @@ Google Cloud Storage is a type of **object storage**. To access files and store 1. After the bucket is created, you are redirected to the **Bucket details** page. 2. Select **Permissions**, then **Grant access** under **View by principals**. 3. Copy the email address of your service account into **New principals**. -4. Select the **Storage Admin** role, then select **Save**. +4. Select the [required role](#storage-permissions), then select **Save**. :::tip You've created a project, enabled the necessary Google APIs, created a bucket, and created a service account JSON key file with the required credentials. You now have what you need to set up a new compute environment in Seqera.
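The two GKE bindings above (a bucket-level storage role for the IAM service account, then the Workload Identity impersonation grant) can be sketched end to end. Every value below is a hypothetical placeholder; the commands are echoed, not executed.

```shell
# Sketch only: the two bindings behind the GKE steps above, with hypothetical
# placeholder values substituted so the flag layout is concrete.
PROJECT="example-project"
IAM_SA="nextflow-runner@${PROJECT}.iam.gserviceaccount.com"
NAMESPACE="pipelines"
K8S_SA="nextflow"

# 1) Bucket-level storage role for the IAM service account (work-dir bucket)
echo "gcloud storage buckets add-iam-policy-binding gs://example-workdir --member=serviceAccount:${IAM_SA} --role=roles/storage.objectUser"

# 2) Workload Identity: let the Kubernetes service account impersonate it
echo "gcloud iam service-accounts add-iam-policy-binding ${IAM_SA} --role=roles/iam.workloadIdentityUser --member=serviceAccount:${PROJECT}.svc.id.goog[${NAMESPACE}/${K8S_SA}]"
```

The `[namespace/k8s-service-account]` suffix in the second member string is the Workload Identity pool subject; both commands print the fully substituted form for review before running.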
diff --git a/platform-cloud/docs/compute-envs/google-cloud.md b/platform-cloud/docs/compute-envs/google-cloud.md index e9de8cc10..c47ba8250 100644 --- a/platform-cloud/docs/compute-envs/google-cloud.md +++ b/platform-cloud/docs/compute-envs/google-cloud.md @@ -89,7 +89,31 @@ To create and launch pipelines or Studio sessions with this compute environment - Service Account User (`roles/iam.serviceAccountUser`) - Service Usage Consumer (`roles/serviceusage.serviceUsageConsumer`) -If your Google Cloud project does not require access restrictions on any of its Cloud Storage buckets, you can grant project Storage Admin (`roles/storage.admin`) permissions to your service account to simplify setup. To grant access only to specific buckets, add the service account as a principal [on each bucket individually](https://docs.seqera.io/platform-cloud/compute-envs/google-cloud-batch#cloud-storage-bucket). For each Google Cloud compute environment created in the Seqera platform, a separate service account is created with the necessary permissions to launch pipelines/studios. +#### Storage permissions +The service account created by Seqera Platform for use by the Google Cloud compute environment is provisioned with the following roles: + +- `roles/storage.objectAdmin` (_on work-dir bucket_) +- `roles/storage.bucketViewer` +- `roles/storage.objectViewer` + +If your workflow uses additional GCS buckets beyond the work-dir, grant additional permissions as follows: + +- Grant on **every other bucket the pipeline reads from**: + - `roles/storage.objectViewer` — read objects + - `roles/storage.bucketViewer` — read bucket metadata (required for mount-time bucket inspection) + +- Grant on the **publishDir bucket, if different from the work-dir bucket**: + - `roles/storage.objectUser` + - `roles/storage.bucketViewer` + +**Shortcut: project-level Storage Admin** + +Granting `roles/storage.admin` at the **project** level covers everything +above and significantly simplifies setup.
The tradeoff is a looser security +posture — the service account can then touch any bucket in the project, +including buckets unrelated to the pipeline. Confirm this is acceptable +under your organization's security directives before using it. + ## Advanced options diff --git a/platform-enterprise_docs/compute-envs/gke.md b/platform-enterprise_docs/compute-envs/gke.md index f52591333..8caaae747 100644 --- a/platform-enterprise_docs/compute-envs/gke.md +++ b/platform-enterprise_docs/compute-envs/gke.md @@ -125,9 +125,19 @@ To use [Fusion v2](https://docs.seqera.io/fusion) in your Seqera GKE compute env - **Enable GKE Metadata Server** in the node group **Security** settings. 1. Allow the IAM service account access to your Google storage bucket: ```shell - gcloud storage buckets add-iam-policy-binding gs://<bucket-name> --role roles/storage.objectAdmin --member serviceAccount:<sa-name>@<project-id>.iam.gserviceaccount.com + gcloud storage buckets add-iam-policy-binding gs://<bucket-name> --role roles/<role> --member serviceAccount:<sa-name>@<project-id>.iam.gserviceaccount.com ``` - The role must have at least `storage.objects.create`, `storage.objects.get`, and `storage.objects.list` permissions. + - Grant on the **work-dir bucket**: + - `roles/storage.objectUser` (preferred; legacy: `roles/storage.objectAdmin`) + + - Grant on **every other bucket the pipeline reads from**: + - `roles/storage.objectViewer` — read objects + - `roles/storage.bucketViewer` — read bucket metadata (required for mount-time bucket inspection) + + - Grant on the **publishDir bucket, if different from the work-dir bucket**: + - `roles/storage.objectUser` + - `roles/storage.bucketViewer` + 1.
Allow the Kubernetes service account to impersonate the IAM service account: ```shell gcloud iam service-accounts add-iam-policy-binding <sa-name>@<project-id>.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:<project-id>.svc.id.goog[<k8s-namespace>/<k8s-sa-name>]" ``` diff --git a/platform-enterprise_docs/compute-envs/google-cloud-batch.md b/platform-enterprise_docs/compute-envs/google-cloud-batch.md index 5fd0676e1..fe5214fd5 100644 --- a/platform-enterprise_docs/compute-envs/google-cloud-batch.md +++ b/platform-enterprise_docs/compute-envs/google-cloud-batch.md @@ -58,12 +58,60 @@ By default, Google Cloud Batch uses the default Compute Engine service account t [Create a custom service account][create-sa] with at least the following permissions: +##### Core permissions - Batch Agent Reporter (`roles/batch.agentReporter`) on the project - Batch Job Editor (`roles/batch.jobsEditor`) on the project - Logs Writer (`roles/logging.logWriter`) on the project (to let jobs generate logs in Cloud Logging) +- Logs Viewer (`roles/logging.viewer`) on the project (to view and retrieve logs from Cloud Logging) - Service Account User (`roles/iam.serviceAccountUser`) -If your Google Cloud project does not require access restrictions on any of its Cloud Storage buckets, you can grant project Storage Admin (`roles/storage.admin`) permissions to your service account to simplify setup. To grant access only to specific buckets, add the service account as a principal on each bucket individually. See [Cloud Storage bucket](#cloud-storage-bucket) below.
+##### Storage permissions +The service account used by your Nextflow pipeline requires some combination of the following permissions, depending on the method used to interact with object storage: + +| Permission | Allows | GCSFuse | Fusion | +|------------|--------| ------- | ------ | +| `storage.buckets.get` | Resolving bucket metadata at mount | N | Y | +| `storage.objects.list` | Listing work directory contents | Y | Y | +| `storage.objects.get` | Reading inputs and intermediates | Y | Y | +| `storage.objects.create` | Writing outputs (work-dir only) | Y | Y | +| `storage.objects.delete` | Cleanup of work-dir intermediate files & publishDir overwrites | Y | Y | + + +**GCSFuse-based pipelines** + +- Grant on the **work-dir bucket**: + - `roles/storage.objectUser` (preferred; legacy: `roles/storage.objectAdmin`) + +- Grant on **every other bucket the pipeline reads from**: + - `roles/storage.objectViewer` — read objects + +- Grant on the **publishDir bucket, if different from the work-dir bucket**: + - `roles/storage.objectUser` + - `roles/storage.bucketViewer` + + +**Fusion-based pipelines** + +- Grant on the **work-dir bucket**: + - `roles/storage.objectUser` (preferred; legacy: `roles/storage.objectAdmin`) + +- Grant on **every other bucket the pipeline reads from**: + - `roles/storage.objectViewer` — read objects + - `roles/storage.bucketViewer` — read bucket metadata (required for mount-time bucket inspection) + +- Grant on the **publishDir bucket, if different from the work-dir bucket**: + - `roles/storage.objectUser` + - `roles/storage.bucketViewer` + + +**Shortcut: project-level Storage Admin** + +Granting `roles/storage.admin` at the **project** level covers everything +above and significantly simplifies setup. The tradeoff is a looser security +posture — the service account can then touch any bucket in the project, +including buckets unrelated to the pipeline. Confirm this is acceptable +under your organization's security directives before using it.
+ #### User permissions @@ -116,7 +164,7 @@ Google Cloud Storage is a type of **object storage**. To access files and store 1. After the bucket is created, you are redirected to the **Bucket details** page. 2. Select **Permissions**, then **Grant access** under **View by principals**. 3. Copy the email address of your service account into **New principals**. -4. Select the **Storage Admin** role, then select **Save**. +4. Select the [required role](#storage-permissions), then select **Save**. :::tip You've created a project, enabled the necessary Google APIs, created a bucket, and created a service account JSON key file with the required credentials. You now have what you need to set up a new compute environment in Seqera. diff --git a/platform-enterprise_docs/compute-envs/google-cloud.md b/platform-enterprise_docs/compute-envs/google-cloud.md index 09fde95d5..c6ca6f02b 100644 --- a/platform-enterprise_docs/compute-envs/google-cloud.md +++ b/platform-enterprise_docs/compute-envs/google-cloud.md @@ -90,7 +90,30 @@ To create and launch pipelines or Studio sessions with this compute environment - Service Account User (`roles/iam.serviceAccountUser`) - Service Usage Consumer (`roles/serviceusage.serviceUsageConsumer`) -If your Google Cloud project does not require access restrictions on any of its Cloud Storage buckets, you can grant project Storage Admin (`roles/storage.admin`) permissions to your service account to simplify setup. To grant access only to specific buckets, add the service account as a principal [on each bucket individually](https://docs.seqera.io/platform-cloud/compute-envs/google-cloud-batch#cloud-storage-bucket). For each Google Cloud compute environment created in the Seqera platform, a separate service account is created with the necessary permissions to launch pipelines/studios. 
+#### Storage permissions +The service account created by Seqera Platform for use by the Google Cloud compute environment is provisioned with the following roles: + +- `roles/storage.objectAdmin` (_on work-dir bucket_) +- `roles/storage.bucketViewer` +- `roles/storage.objectViewer` + +If your workflow uses additional GCS buckets beyond the work-dir, grant additional permissions as follows: + +- Grant on **every other bucket the pipeline reads from**: + - `roles/storage.objectViewer` — read objects + - `roles/storage.bucketViewer` — read bucket metadata (required for mount-time bucket inspection) + +- Grant on the **publishDir bucket, if different from the work-dir bucket**: + - `roles/storage.objectUser` + - `roles/storage.bucketViewer` + +**Shortcut: project-level Storage Admin** + +Granting `roles/storage.admin` at the **project** level covers everything +above and significantly simplifies setup. The tradeoff is a looser security +posture — the service account can then touch any bucket in the project, +including buckets unrelated to the pipeline. Confirm this is acceptable +under your organization's security directives before using it. ## Advanced options From dd0df79033cc775b29f8caf53b2d2ce940ade995 Mon Sep 17 00:00:00 2001 From: Graham Wright Date: Thu, 23 Apr 2026 21:56:15 -0400 Subject: [PATCH 3/5] fix: Removed 'roles/storage.bucketViewer' from list of required permissions for publication bucket.
--- platform-cloud/docs/compute-envs/google-cloud-batch.md | 2 -- platform-enterprise_docs/compute-envs/google-cloud-batch.md | 2 -- 2 files changed, 4 deletions(-) diff --git a/platform-cloud/docs/compute-envs/google-cloud-batch.md b/platform-cloud/docs/compute-envs/google-cloud-batch.md index 09b696d04..123356c50 100644 --- a/platform-cloud/docs/compute-envs/google-cloud-batch.md +++ b/platform-cloud/docs/compute-envs/google-cloud-batch.md @@ -87,7 +87,6 @@ The service account used by your Nextflow pipeline requires some combination of - Grant on the **publishDir bucket, if different from the work-dir bucket**: - `roles/storage.objectUser` - - `roles/storage.bucketViewer` **Fusion-based pipelines** @@ -101,7 +100,6 @@ The service account used by your Nextflow pipeline requires some combination of - Grant on the **publishDir bucket, if different from the work-dir bucket**: - `roles/storage.objectUser` - - `roles/storage.bucketViewer` **Shortcut: project-level Storage Admin** diff --git a/platform-enterprise_docs/compute-envs/google-cloud-batch.md b/platform-enterprise_docs/compute-envs/google-cloud-batch.md index fe5214fd5..898c59466 100644 --- a/platform-enterprise_docs/compute-envs/google-cloud-batch.md +++ b/platform-enterprise_docs/compute-envs/google-cloud-batch.md @@ -87,7 +87,6 @@ The service account used by your Nextflow pipeline requires some combination of - Grant on the **publishDir bucket, if different from the work-dir bucket**: - `roles/storage.objectUser` - - `roles/storage.bucketViewer` **Fusion-based pipelines** @@ -101,7 +100,6 @@ The service account used by your Nextflow pipeline requires some combination of - Grant on the **publishDir bucket, if different from the work-dir bucket**: - `roles/storage.objectUser` - - `roles/storage.bucketViewer` **Shortcut: project-level Storage Admin** From 2b0db03f67eb1368fd8ac299702700991e6eff1d Mon Sep 17 00:00:00 2001 From: Graham Wright Date: Thu, 23 Apr 2026 22:00:53 -0400 Subject: [PATCH 4/5] fix: Removed 'roles/storage.bucketViewer' from
list of required permissions for publication bucket (GKE). --- platform-cloud/docs/compute-envs/gke.md | 1 - platform-enterprise_docs/compute-envs/gke.md | 1 - 2 files changed, 2 deletions(-) diff --git a/platform-cloud/docs/compute-envs/gke.md b/platform-cloud/docs/compute-envs/gke.md index 3b38ce0f9..738af88d6 100644 --- a/platform-cloud/docs/compute-envs/gke.md +++ b/platform-cloud/docs/compute-envs/gke.md @@ -136,7 +136,6 @@ To use [Fusion v2](https://docs.seqera.io/fusion) in your Seqera GKE compute env - Grant on the **publishDir bucket, if different from the work-dir bucket**: - `roles/storage.objectUser` - - `roles/storage.bucketViewer` 1. Allow the Kubernetes service account to impersonate the IAM service account: ```shell gcloud iam service-accounts add-iam-policy-binding <sa-name>@<project-id>.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:<project-id>.svc.id.goog[<k8s-namespace>/<k8s-sa-name>]" ``` diff --git a/platform-enterprise_docs/compute-envs/gke.md b/platform-enterprise_docs/compute-envs/gke.md index 8caaae747..a5ce9ccb9 100644 --- a/platform-enterprise_docs/compute-envs/gke.md +++ b/platform-enterprise_docs/compute-envs/gke.md @@ -136,7 +136,6 @@ To use [Fusion v2](https://docs.seqera.io/fusion) in your Seqera GKE compute env - Grant on the **publishDir bucket, if different from the work-dir bucket**: - `roles/storage.objectUser` - - `roles/storage.bucketViewer` 1. Allow the Kubernetes service account to impersonate the IAM service account: ```shell gcloud iam service-accounts add-iam-policy-binding <sa-name>@<project-id>.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:<project-id>.svc.id.goog[<k8s-namespace>/<k8s-sa-name>]" ``` From 30d920cc393af4aec1640791eca88ced54603975 Mon Sep 17 00:00:00 2001 From: Graham Wright Date: Thu, 23 Apr 2026 22:17:39 -0400 Subject: [PATCH 5/5] fix: Split out Platform SA from Nextflow SA for GCP Batch CE (cloud & enterprise).
--- .../docs/compute-envs/google-cloud-batch.md | 29 +++++++++++++++++-- .../compute-envs/google-cloud-batch.md | 29 +++++++++++++++++-- 2 files changed, 54 insertions(+), 4 deletions(-) diff --git a/platform-cloud/docs/compute-envs/google-cloud-batch.md b/platform-cloud/docs/compute-envs/google-cloud-batch.md index 123356c50..ffbf4ce4c 100644 --- a/platform-cloud/docs/compute-envs/google-cloud-batch.md +++ b/platform-cloud/docs/compute-envs/google-cloud-batch.md @@ -54,9 +54,34 @@ Seqera requires a service account with appropriate permissions to interact with By default, Google Cloud Batch uses the default Compute Engine service account to submit jobs. This service account is granted the Editor (`roles/editor`) role. While this service account has the permissions Seqera needs, this role is not recommended for production environments. Control job access using a custom service account with only the permissions necessary for Seqera to execute Batch jobs instead. ::: -#### Service account permissions +#### Seqera Platform service account permissions (mandatory) +Create a GCP service account for Seqera Platform to use when executing operations against your GCP project. -[Create a custom service account][create-sa] with at least the following permissions: +##### Core permissions +- Batch Agent Reporter (`roles/batch.agentReporter`) on the project +- Batch Job Editor (`roles/batch.jobsEditor`) on the project +- Logs Writer (`roles/logging.logWriter`) on the project (to let jobs generate logs in Cloud Logging) +- Logs Viewer (`roles/logging.viewer`) on the project (to view and retrieve logs from Cloud Logging) +- Service Account User (`roles/iam.serviceAccountUser`) + +##### Storage permissions +TODO: Figure out with @schaluva. Considerations: +1. Minimum permissions for Platform to handle pipeline operations. +2. Studios. Preliminary suggestion: + - `roles/storage.objectUser` for read/write. + - `roles/storage.objectViewer` for read-only.
+ +**Shortcut: project-level Storage Admin** + +Granting `roles/storage.admin` at the **project** level covers everything +above and significantly simplifies setup. The tradeoff is a looser security +posture — the service account can then touch any bucket in the project, +including buckets unrelated to the pipeline. Confirm this is acceptable +under your organization's security directives before using it. + + +#### Custom Nextflow service account permissions (optional) +Create a GCP service account for the Nextflow pipeline to use rather than the default Compute Engine service account. ##### Core permissions - Batch Agent Reporter (`roles/batch.agentReporter`) on the project diff --git a/platform-enterprise_docs/compute-envs/google-cloud-batch.md b/platform-enterprise_docs/compute-envs/google-cloud-batch.md index 898c59466..23678d3eb 100644 --- a/platform-enterprise_docs/compute-envs/google-cloud-batch.md +++ b/platform-enterprise_docs/compute-envs/google-cloud-batch.md @@ -54,9 +54,34 @@ Seqera requires a service account with appropriate permissions to interact with By default, Google Cloud Batch uses the default Compute Engine service account to submit jobs. This service account is granted the Editor (`roles/editor`) role. While this service account has the permissions Seqera needs, this role is not recommended for production environments. Control job access using a custom service account with only the permissions necessary for Seqera to execute Batch jobs instead. ::: -#### Service account permissions +#### Seqera Platform service account permissions (mandatory) +Create a GCP service account for Seqera Platform to use when executing operations against your GCP project.
-[Create a custom service account][create-sa] with at least the following permissions: +##### Core permissions +- Batch Agent Reporter (`roles/batch.agentReporter`) on the project +- Batch Job Editor (`roles/batch.jobsEditor`) on the project +- Logs Writer (`roles/logging.logWriter`) on the project (to let jobs generate logs in Cloud Logging) +- Logs Viewer (`roles/logging.viewer`) on the project (to view and retrieve logs from Cloud Logging) +- Service Account User (`roles/iam.serviceAccountUser`) + +##### Storage permissions +TODO: Figure out with @schaluva. Considerations: +1. Minimum permissions for Platform to handle pipeline operations. +2. Studios. Preliminary suggestion: + - `roles/storage.objectUser` for read/write. + - `roles/storage.objectViewer` for read-only. + +**Shortcut: project-level Storage Admin** + +Granting `roles/storage.admin` at the **project** level covers everything +above and significantly simplifies setup. The tradeoff is a looser security +posture — the service account can then touch any bucket in the project, +including buckets unrelated to the pipeline. Confirm this is acceptable +under your organization's security directives before using it. + + +#### Custom Nextflow service account permissions (optional) +Create a GCP service account for the Nextflow pipeline to use rather than the default Compute Engine service account. ##### Core permissions - Batch Agent Reporter (`roles/batch.agentReporter`) on the project