
Conversation

@acmeguy (Contributor) commented Jun 10, 2025

Introduces Kustomize patches to tailor HorizontalPodAutoscaler (HPA) and Deployment resource settings (CPU/memory requests and limits) for development and staging environments, overriding the base defaults.

Development Overlay (`apps/contextapi/overlays/development`):

- HPA: Patched to `minReplicas: 1`, `maxReplicas: 1`.
- Deployment (`node` container):
  - Requests: `cpu: 100m`, `memory: 100Mi`
  - Limits: `cpu: 200m`, `memory: 200Mi`

Staging Overlay (`apps/contextapi/overlays/staging`):

- HPA: Patched to `minReplicas: 1`, `maxReplicas: 2`.
- Deployment (`node` container):
  - Requests: `cpu: 150m`, `memory: 150Mi`
  - Limits: `cpu: 300m`, `memory: 300Mi`

These changes give the development and staging environments more appropriate resource allocation and scaling behavior, optimizing resource usage and cost, while the production overlay keeps its production-like configuration (it continues to use the base HPA and Deployment settings unless patched separately).
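For reference, a minimal sketch of what the development overlay's `kustomization.yaml` could look like; the resource name `contextapi` and the exact patch layout are assumptions, not taken from this PR:

```yaml
# apps/contextapi/overlays/development/kustomization.yaml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  # Pin the HPA to a single replica in development.
  - target:
      kind: HorizontalPodAutoscaler
      name: contextapi   # assumed resource name
    patch: |-
      - op: replace
        path: /spec/minReplicas
        value: 1
      - op: replace
        path: /spec/maxReplicas
        value: 1
  # Shrink the node container's requests/limits
  # (assumes the node container is first in the pod spec).
  - target:
      kind: Deployment
      name: contextapi   # assumed resource name
    patch: |-
      - op: replace
        path: /spec/template/spec/containers/0/resources
        value:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 200m
            memory: 200Mi
```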

@gzur self-requested a review June 10, 2025 10:39
@gzur (Contributor) left a comment


Environments

We should be trying this out on dev/staging. Seeing as we don't have those yet, I think we should make a point of getting this working on the staging cluster, since that is in the works anyway; a target like "get the contextapi running on staging" sounds like a super nice, concrete milestone.

Template consolidation

The current PR duplicates a lot of YAML across the dev/staging environments, with identical ConfigMaps and near-identical Ingress files that differ only in hostname/secret names.
Using Kustomize configMapGenerator / patches / replacements with environment-specific variables would reduce this overhead and create a single source of truth for shared configuration, lessening the chance of configuration drift between environments. For example, the 49-line contextapi-config.yaml could become a 5-line configMapGenerator in each overlay's kustomization.yaml (see the sketch below).

Ideally, you do not want to be declaring new resources wholesale in overlays.
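For illustration, such a generator entry could look like this; the ConfigMap name and the key/value pairs are hypothetical, not taken from the PR:

```yaml
# In apps/contextapi/overlays/staging/kustomization.yaml (sketch)
configMapGenerator:
  - name: contextapi-config                          # assumed ConfigMap name
    literals:
      - INGRESS_HOST=contextapi.staging.example.com  # hypothetical key/value
      - LOG_LEVEL=info                               # hypothetical key/value
```

Note that generated ConfigMaps get a content-hash name suffix by default, and Kustomize rewrites references in workloads accordingly, so a config change automatically triggers a rollout.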

Network policy

- The ingress selector is wrong.
- Is the contextapi the only in-cluster caller?

I would be hesitant to roll it out to prod as-is without extensive testing.

```yaml
- from:
    - podSelector:
        matchLabels:
          app: nginx # Placeholder: Label for NGINX ingress pods
```

This selector is wrong; it should be `app.kubernetes.io/name: rke2-ingress-nginx`.
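Applied to the snippet above, the rule would read (a sketch; depending on the namespace the RKE2 ingress controller runs in, a namespaceSelector may also be needed):

```yaml
- from:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: rke2-ingress-nginx
```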
