This guide covers security best practices for using Pipe, including how to handle secrets and environment variables and how to secure your deployments.
Pipe is designed with security in mind:
- No registry required - Images transfer directly via SSH and never touch public infrastructure
- Input validation - All configuration values are validated against strict patterns
- Shell injection prevention - Dangerous characters are blocked in user inputs
Pass secrets through environment variables to avoid storing them in files or version control:
```bash
# Set secrets in your CI/CD pipeline
export DB_PASSWORD="your-secret"
export API_KEY="your-api-key"

# Use inline --env flags
pipe --env DB_PASSWORD="$DB_PASSWORD" --env API_KEY="$API_KEY"
```

In GitHub Actions:
```yaml
- name: Deploy
  env:
    DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
    API_KEY: ${{ secrets.API_KEY }}
  run: |
    pipe --env DB_PASSWORD="$DB_PASSWORD" --env API_KEY="$API_KEY"
```

For multiple environment variables, use an env file:
```
# .env.production (DO NOT commit to git!)
DB_HOST=db.example.com
DB_PASSWORD=supersecret
API_KEY=sk-1234567890
REDIS_URL=redis://localhost:6379
```

Deploy with the env file:

```bash
pipe --env-file .env.production
```

Or in pipe.yaml:

```yaml
envFile: .env.production
```

How it works:
- Pipe copies the env file to `~/<filename>` on the remote host via SCP
- Docker runs the container with `--env-file ~/<filename>`
- The env file remains on the server for subsequent deployments
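Conceptually, those steps are equivalent to the following manual commands (a sketch only; the exact commands Pipe issues may differ, and the host, image, and container name here are placeholders):

```bash
# Copy the env file to the remote host (filename is preserved)
scp .env.production user@server.com:~/.env.production

# Start the container with that env file on the remote host
ssh user@server.com \
  'docker run -d --name my-app --env-file ~/.env.production my-image:latest'
```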
You can use both approaches together - inline variables override file values:
```yaml
# pipe.yaml
envFile: .env.production
env:
  LOG_LEVEL: debug # Override for this deployment
```

```bash
# CLI takes highest priority
pipe --env LOG_LEVEL=trace
```

Always use SSH keys instead of passwords:
```bash
# Generate a dedicated deployment key
ssh-keygen -t ed25519 -f ~/.ssh/deploy_key -C "deploy@myapp"

# Add to remote server
ssh-copy-id -i ~/.ssh/deploy_key.pub user@server.com

# Use in deployment
pipe --ssh-key ~/.ssh/deploy_key
```

In pipe.yaml:

```yaml
sshKey: ~/.ssh/deploy_key
```

If you use a non-standard SSH port:

```yaml
sshPort: "2222"
```

Never run containers as root in production:
```yaml
containerUser: "1000:1000" # UID:GID
```

Or create a user in your Dockerfile:

```dockerfile
RUN adduser --disabled-password --gecos '' appuser
USER appuser
```
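After deploying, you can confirm the process is not running as root; the host and container name below are placeholders for your own values:

```bash
# Should report a non-root UID/GID, e.g. uid=1000 gid=1000
ssh user@server.com 'docker exec my-app id'
```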
Use the principle of least privilege:

```yaml
capDrop:
  - ALL # Drop all capabilities first
capAdd:
  - NET_BIND_SERVICE # Add only what you need
```

Prevent container modifications:
```yaml
readOnly: true
tmpfs:
  - /tmp
  - /var/run
```

Handle zombie processes and signals properly:

```yaml
init: true
```
Putting it all together:

```yaml
# pipe.yaml - Production security hardened
containerUser: "1000:1000"
readOnly: true
init: true
capDrop:
  - ALL
capAdd:
  - NET_BIND_SERVICE # Only if binding to ports < 1024
tmpfs:
  - /tmp
  - /var/run
healthCmd: "curl -f http://localhost:3000/health || exit 1"
healthInterval: "30s"
healthRetries: 3
```
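For intuition, these settings correspond roughly to the following `docker run` flags (a sketch, not the exact command Pipe generates; the image and container names are placeholders):

```bash
docker run -d \
  --name my-app \
  --user 1000:1000 \
  --read-only \
  --init \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --tmpfs /tmp --tmpfs /var/run \
  --health-cmd "curl -f http://localhost:3000/health || exit 1" \
  --health-interval 30s \
  --health-retries 3 \
  my-image:latest
```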
Pipe validates all inputs to prevent shell injection attacks:

| Field | Allowed Pattern |
|---|---|
| `host` | Valid hostname or IP address |
| `user` | Unix username (lowercase, alphanumeric) |
| `image` | Lowercase alphanumeric with `.`, `_`, `/`, `-` |
| `tag` | Alphanumeric with `.`, `_`, `-` |
| `containerName` | Alphanumeric with `_`, `.`, `-` |
| `platform` | Whitelisted values only |
The following characters are blocked in paths and values to prevent shell injection:
```
; & | $ ` \ \n \r " ' < > ( ) { }
```

- Path traversal (`..`) is blocked in file paths
- `envFile` and `dockerfile` must be relative paths
- Absolute paths are rejected for security-sensitive fields
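To illustrate the kind of pattern matching involved, a wrapper script can pre-check values before they ever reach the deployment step; the regex below is illustrative and is not Pipe's exact internal pattern:

```bash
#!/usr/bin/env bash
# Illustrative pre-flight check only; Pipe applies its own validation internally.
set -euo pipefail

tag="${1:?usage: $0 <image-tag>}"

# Mirrors the "tag" row in the table above: alphanumerics plus . _ -
if [[ ! "$tag" =~ ^[A-Za-z0-9._-]+$ ]]; then
  echo "Refusing suspicious tag: $tag" >&2
  exit 1
fi

echo "Tag looks safe: $tag"
```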
A complete GitHub Actions deployment workflow:

```yaml
name: Deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Setup SSH Key
        run: |
          mkdir -p ~/.ssh
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/deploy_key
          chmod 600 ~/.ssh/deploy_key

      - name: Deploy
        env:
          HOST: ${{ vars.DEPLOY_HOST }}
          HOST_USER: ${{ vars.DEPLOY_USER }}
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
        run: |
          pipe --ssh-key ~/.ssh/deploy_key --env DB_PASSWORD="$DB_PASSWORD"
```

Add to .gitignore:
```
.env
.env.*
*.pem
*.key
deploy_key
```
Reference environment variables in pipe.yaml instead of hard-coding values:

```yaml
# pipe.yaml - Safe to commit
host: ${DEPLOY_HOST}
user: ${DEPLOY_USER}
sshKey: ${SSH_KEY_PATH}
```

Isolate containers on internal networks:
```yaml
network: internal-network
```
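The named network has to exist on the remote host before the container can join it (whether Pipe creates it automatically is not covered here); it can be created once with standard Docker commands, using the placeholder host below:

```bash
# One-time setup on the deployment target
ssh user@server.com 'docker network create internal-network'

# Add --internal if containers on this network should have no outside connectivity:
# docker network create --internal internal-network
```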
Only expose necessary ports:

```yaml
containerPort: "3000"
hostPort: "3000" # Consider using a reverse proxy
```
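After a deployment it is worth confirming that only the intended ports are reachable; the host and container name below are placeholders:

```bash
# Ports Docker has published for the container
ssh user@server.com 'docker ps --filter "name=my-app" --format "{{.Names}}: {{.Ports}}"'

# Everything listening on the host
ssh user@server.com 'ss -tlnp'
```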
Ensure your application is healthy:

```yaml
healthCmd: "curl -f http://localhost:3000/health || exit 1"
healthInterval: "30s"
healthTimeout: "10s"
healthRetries: 3
healthStartPeriod: "60s"
```
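These options map onto Docker's built-in health check, so the current status can be read back on the server with `docker inspect` (host and container name are placeholders):

```bash
# Prints "healthy", "unhealthy", or "starting"
ssh user@server.com \
  "docker inspect --format '{{.State.Health.Status}}' my-app"
```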
Prevent resource exhaustion attacks:

```yaml
cpus: "1"      # Limit to 1 CPU
memory: "512m" # Limit to 512MB RAM
```

Limit log file size to prevent disk exhaustion:
```yaml
logDriver: json-file
logOpts:
  max-size: "10m"
  max-file: "3"
```

Or disable container logging entirely:

```yaml
logDriver: none
```

Remote commands are validated for dangerous patterns. The following are blocked:
- `rm -rf /`
- `mkfs`
- `dd if=`
- `> /dev/`
Use remote commands only for safe post-deployment tasks:
```yaml
remoteCommands:
  - "docker system prune -f --filter 'until=24h'"
  - "echo 'Deployed at $(date)' >> /var/log/deploys.log"
```
Before deploying to production:

- SSH key authentication configured (no passwords)
- Secrets passed via environment variables, not config files
- `.env` files excluded from git
- Container runs as non-root user
- Capabilities dropped (at minimum `--cap-drop ALL`)
- Read-only filesystem enabled (with tmpfs for writeable paths)
- Resource limits set (CPU and memory)
- Health checks configured
- Log rotation configured
- Docker network isolation configured
- Dry-run tested before production deployment
```bash
# Always preview first
pipe --dry-run
```