feat: add sqs autoscaling docs #218
---
title: "SQS Queue-Depth Autoscaling"
description: "Scale your workers based on Amazon SQS queue depth using the SQS Exporter addon"
---

# SQS Queue-Depth Autoscaling

If your application processes messages from Amazon SQS, Porter can automatically scale your worker services based on queue depth. When messages pile up, Porter scales up your workers. When the queue drains, it scales back down.

This guide uses the [SQS Exporter](https://github.com/porter-dev/sqs-exporter) Porter addon — a lightweight Prometheus exporter that polls SQS queue depth and feeds it to Porter's built-in [metric-based autoscaler](/applications/observability/custom-metrics-and-autoscaling) via KEDA.

This guide assumes you're running on EKS and uses [IRSA](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) (IAM Roles for Service Accounts), so the exporter and worker pick up short-lived, auto-rotated credentials from a pod-scoped IAM role — no long-lived keys to store or rotate.
## Prerequisites

- An AWS account with an SQS queue
- A worker service on Porter that processes messages from the queue
- Permissions to create IAM roles in your AWS account

## Step 1: Create an IAM Role for the Exporter

Porter deploys Helm chart addons with the release name `helmchart`, so the exporter's Kubernetes service account will always be named `helmchart-sqs-exporter`. You'll reference this in the trust policy below.

**1. Find your OIDC provider URL.**

In the AWS Console, go to **EKS → your cluster → Overview tab → OpenID Connect provider URL**. Copy the URL and strip the `https://` prefix — this is your `OIDC_PROVIDER`.

**2. Create an IAM role with a custom trust policy.**

Go to **IAM → Roles → Create role → Custom trust policy** and paste the following, substituting your values:
```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::{ACCOUNT_ID}:oidc-provider/{OIDC_PROVIDER}"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "{OIDC_PROVIDER}:sub": "system:serviceaccount:{NAMESPACE}:helmchart-sqs-exporter",
        "{OIDC_PROVIDER}:aud": "sts.amazonaws.com"
      }
    }
  }]
}
```

- `ACCOUNT_ID`: your AWS account ID
- `OIDC_PROVIDER`: the value from step 1 (without `https://`)
- `NAMESPACE`: the namespace of your Porter deployment target (typically `default`)
**3. Attach a permissions policy** to the role. Create an inline policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "sqs:GetQueueAttributes",
    "Resource": "arn:aws:sqs:{REGION}:{ACCOUNT_ID}:{QUEUE_NAME}"
  }]
}
```

Keep the role ARN handy — you'll use it in the next step.
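Both documents above are plain JSON with a handful of placeholders, so if you script your infrastructure you can generate them rather than hand-editing. A minimal sketch — the function name is illustrative and not part of any Porter or AWS tooling:

```python
import json


def exporter_trust_policy(account_id, oidc_provider, namespace="default",
                          service_account="helmchart-sqs-exporter"):
    """Build the IRSA trust policy for the exporter's service account."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {
                "Federated": f"arn:aws:iam::{account_id}:oidc-provider/{oidc_provider}"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    f"{oidc_provider}:sub": f"system:serviceaccount:{namespace}:{service_account}",
                    f"{oidc_provider}:aud": "sts.amazonaws.com",
                }
            },
        }],
    }


# Example values only; substitute your own account ID and OIDC provider.
policy = exporter_trust_policy("123456789012",
                               "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE")
print(json.dumps(policy, indent=2))
```

You can then pass the rendered JSON to the console, Terraform, or the AWS CLI when creating the role.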
## Step 2: Deploy the SQS Exporter

1. Navigate to your application in the Porter dashboard
2. Go to the **Add-ons** tab
3. Click **Add New** and select **Helm Chart**
4. Configure the Helm chart:
   - **Repository URL**: `oci://ghcr.io/porter-dev`
   - **Chart Name**: `sqs-exporter`
   - **Chart Version**: `0.1.1`
5. In the **Helm Values** section, paste:

```yaml
sqsQueueUrls: "https://sqs.us-east-1.amazonaws.com/123456789/your-queue-name"

serviceAccount:
  roleArn: "arn:aws:iam::{ACCOUNT_ID}:role/{ROLE_NAME}"

aws:
  region: "us-east-1"

pollIntervalSeconds: 10
```
<Warning>
Replace `sqsQueueUrls` with your full SQS queue URL. To monitor multiple queues, provide a comma-separated list of URLs.
</Warning>

6. Click **Deploy**

Verify the exporter is running by checking the add-on logs — you should see `using ambient AWS credential chain`, followed by the exporter polling your queue with no `AccessDenied` errors.
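As a Prometheus exporter, it serves plain-text metrics that Porter's autoscaler scrapes. For intuition, a sample line for your queue should look roughly like the one parsed below — the exact label set is an assumption here; the metric and label names come from the query used in Step 4:

```python
import re

# A Prometheus text-format sample as the exporter might expose it
# (metric name and queue_name label as used in Step 4; exact output may differ).
sample = 'sqs_messages_visible{queue_name="order-processing"} 42'

match = re.match(r'(\w+)\{queue_name="([^"]+)"\}\s+([\d.]+)', sample)
metric, queue, value = match.group(1), match.group(2), float(match.group(3))
print(metric, queue, value)  # sqs_messages_visible order-processing 42.0
```

The value is the number of visible messages at the last poll; the autoscaler compares it against the threshold you set in Step 4.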
## Step 3: Grant the Worker AWS Access

The worker pod needs AWS credentials to consume from SQS. First, create an IAM role with these permissions on your queue:

- `sqs:ReceiveMessage`
- `sqs:DeleteMessage`
- `sqs:GetQueueAttributes`

The role's trust policy must name the worker's service account so IRSA can assume it:
```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::{ACCOUNT_ID}:oidc-provider/{OIDC_PROVIDER}"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "{OIDC_PROVIDER}:sub": "system:serviceaccount:{NAMESPACE}:{APP_NAME}-{SERVICE_NAME}",
        "{OIDC_PROVIDER}:aud": "sts.amazonaws.com"
      }
    }
  }]
}
```

Porter names the worker's service account `{APP_NAME}-{SERVICE_NAME}` (e.g. `sqs-poller-poller` for app `sqs-poller` with service `poller`).
Then attach the role to the worker using one of the following options.

### Option A: `porter.yaml`

Declare the [`awsRole` connection](/applications/configuration-as-code/services/connections#aws-role-connection) under your worker service. If your app doesn't have a `porter.yaml`, you can create one at the root of your repo with just the worker service declared — Porter merges it with your dashboard config.

```yaml
services:
  - name: worker
    # ...
    connections:
      - type: awsRole
        role: my-worker-sqs-access
```
### Option B: Annotate the service account manually

If you'd rather not introduce a `porter.yaml`, annotate the worker's service account directly:

```bash
kubectl annotate sa {APP_NAME}-{SERVICE_NAME} -n {NAMESPACE} \
  eks.amazonaws.com/role-arn=arn:aws:iam::{ACCOUNT_ID}:role/{ROLE_NAME}
```

Either option results in the service account carrying the `eks.amazonaws.com/role-arn` annotation, which EKS uses to inject IRSA credentials into the pod.
### Set the worker's environment variables

Under the worker service's **Environment** tab, set:

```
SQS_QUEUE_URL={QUEUE_URL}
AWS_REGION={AWS_REGION}
```
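For reference, a worker's consume loop typically long-polls `ReceiveMessage` and deletes each message only after processing it successfully, so failed messages are redelivered. A minimal sketch using the boto3-style SQS API, with the client passed in so it can be stubbed in tests (`handle` is your own processing function):

```python
def poll_once(sqs, queue_url, handle, max_messages=10):
    """Fetch up to max_messages with long polling; delete each message
    only after handle() succeeds, so failures are redelivered."""
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=max_messages,
        WaitTimeSeconds=20,  # long polling reduces empty receives
    )
    processed = 0
    for msg in resp.get("Messages", []):
        handle(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
        processed += 1
    return processed


# In the real worker, the client comes from boto3 and IRSA supplies credentials:
#   import boto3, os
#   sqs = boto3.client("sqs", region_name=os.environ["AWS_REGION"])
#   while True:
#       poll_once(sqs, os.environ["SQS_QUEUE_URL"], handle)
```

No access keys appear anywhere: with the role attached, the default credential chain picks up the IRSA-injected token automatically.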
## Step 4: Configure Metric-Based Autoscaling

1. Navigate to your application in the Porter dashboard
2. Go to the **Services** tab and click on your worker service
3. Under **Autoscaling**, select the **Metric-based** tab
4. Fill in the following fields:
   - **Min replicas**: `0` (scale to zero when the queue is empty) or `1` (always keep one worker warm)
   - **Max replicas**: `10` (adjust based on your workload)
   - **Metric name**: `sqs_messages_visible`
   - **Query**: `avg(sqs_messages_visible{queue_name="your-queue-name"})`
   - **Threshold**: `10` (target number of messages per worker instance)

<Warning>
Replace `your-queue-name` with the name portion of your SQS queue URL (e.g., for `https://sqs.us-east-1.amazonaws.com/123456789/order-processing`, use `order-processing`). The `queue_name` label must match exactly or autoscaling won't trigger.
</Warning>
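The `queue_name` label is just the last path segment of the queue URL, and with an average-value target the autoscaler aims for roughly `ceil(visible_messages / threshold)` replicas, capped at your max. A quick sketch of both (the helper names are illustrative, and the replica formula is the standard average-value approximation, not an exact trace of the autoscaler):

```python
import math


def queue_name(queue_url):
    """Last path segment of an SQS queue URL, e.g. 'order-processing'."""
    return queue_url.rstrip("/").rsplit("/", 1)[-1]


def desired_replicas(visible_messages, threshold, max_replicas=10):
    """Approximate target for an average-value metric:
    ceil(total / threshold), capped at max replicas."""
    return min(math.ceil(visible_messages / threshold), max_replicas)


print(queue_name("https://sqs.us-east-1.amazonaws.com/123456789/order-processing"))
print(desired_replicas(45, 10))  # 5 workers for 45 visible messages
```

So with a threshold of `10`, a backlog of 45 visible messages steers the worker toward 5 replicas, and anything past 100 messages pins it at the max of 10.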
## Scaling Latency

Scale-up isn't instant — expect on the order of a minute or two between messages arriving in your queue and new worker pods becoming ready. This latency comes from a combination of SQS polling, metrics scraping, and the stabilization period before the autoscaler commits to a scale-up.

For queue-based workloads where faster scale-up matters, you can shorten the stabilization window via your worker service's **Settings** tab → **Helm overrides**:

```yaml
keda:
  hpa:
    scaleUp:
      stabilizationWindowSeconds: 0
      policy:
        type: Percent
        value: 100
        periodSeconds: 15
```
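With `stabilizationWindowSeconds: 0` the autoscaler acts on the latest metric immediately, while the `Percent`/`100`/`15` policy lets the replica count at most double every 15 seconds. A rough simulation of how fast that ramps — it assumes the metric keeps demanding more replicas every period, which real scraping delays will stretch out:

```python
import math


def ramp_time(start, target, percent=100, period_seconds=15):
    """Seconds for the scale-up policy to go from start to target replicas,
    growing at most `percent` per period (100% = doubling)."""
    replicas, elapsed = start, 0
    while replicas < target:
        replicas = min(target, math.ceil(replicas * (1 + percent / 100)))
        elapsed += period_seconds
    return elapsed


print(ramp_time(1, 10))  # 1 -> 2 -> 4 -> 8 -> 10: four periods, 60 seconds
```

So even from a single warm replica, this policy reaches the max of 10 in about a minute of sustained backlog.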
Node provisioning is often the remaining bottleneck once the autoscaling pipeline is tuned.