crusher2

A production-grade AWS infrastructure automation and deployment tool for managing EC2 instances, Auto Scaling Groups, and application deployments.

Currently powering multiple high-traffic media websites in the USA and France, handling thousands of deployments and managing infrastructure at scale.

Overview

crusher2 is a battle-tested tool designed to streamline infrastructure management and application deployments on AWS. It provides a unified interface for:

  • Application Deployments - Deploy code changes to running EC2 instances via AWS SSM
  • AMI Creation - Build and configure EC2 instances to create custom AMIs for Auto Scaling Groups
  • Server Initialization - Automate initial server setup and configuration during ASG launches
  • Secrets Management - Securely manage environment variables and application secrets via AWS Secrets Manager
  • CloudFront Cache Invalidation - Flush CDN cache for deployed applications
  • Job-Based Configuration - Modular, reusable configuration jobs for system setup

Key Features

Deployment Management

  • Real-time monitoring via WebSocket-based web interface with live command output
  • Parallel deployment to multiple instances matching a deploy target tag
  • Zero-downtime deployments with automated health checks
  • Deployment history and auditing through AWS SSM integration
  • Run deployments, setup tasks, and updates across your infrastructure

Infrastructure Automation

  • Infrastructure as Code approach with JSON-based job definitions
  • Reproducible builds - create consistent AMIs for Auto Scaling Groups
  • Modular job system - compose complex configurations from reusable components
  • Automated package installation across multiple Linux distributions (apt, yum, dnf)
  • Templated configuration files with secure variable interpolation
  • Support for both local and remote execution contexts

AWS Integration

  • Native AWS SSM Session Manager - secure remote execution without SSH keys or bastion hosts
  • AWS Secrets Manager integration - centralized secrets management
  • CloudFront cache invalidation - automated CDN cache management
  • Tag-based instance discovery - dynamic targeting of deployment groups
  • Multi-region support - manage infrastructure across AWS regions
  • IAM role support - works with both IAM credentials and instance profiles

Web Interface (localhost:3000)

  • Secrets management UI - safely edit and sync environment variables
  • Multi-instance deployment - deploy to entire fleets with one click
  • Real-time progress tracking - WebSocket-powered live updates
  • CloudFront management - invalidate CDN cache for specific paths
  • Environment switching - manage multiple environments from one interface

Technical Highlights

  • Built with Go - Single binary deployment, no runtime dependencies
  • Concurrent execution - Goroutine-based parallel operations for speed
  • WebSocket streaming - Real-time bidirectional communication
  • AWS SDK v2 - Modern AWS API integration
  • Production tested - Handling high-traffic deployments in production environments
  • Cross-platform - Supports Linux (amd64), macOS (Intel & Apple Silicon)
  • Secure by default - ANSI stripping, mutex-protected WebSocket writes, proper error handling

Installing

Build from source (see Building from Source section below)


Quick Start

Running the Web Interface

# Create or edit your environment config
vim ~/crusherenv.json

# Start the web interface
crusher2 edit

# Visit in browser
# http://localhost:3000

Running Jobs Locally

# List available jobs
crusher2 list

# Show job details
crusher2 show [job_name]

# Run a job locally
crusher2 run [job_name] [env_name]

Environment Setup

crusher2 looks for the environment configuration first in the user's home directory, under the name crusherenv.json. This location is easier for local setups without a jobs folder, or with multiple jobs folders.

If no file is found there, it looks in the current working directory for a folder named jobs containing a file called env.json. This location is better suited to server setups, where the environment file and jobs folder are set up programmatically.

Create a crusherenv.json / env.json in the appropriate location and fill it with the environment settings and secrets, e.g.:

{
    "environments": {
        "init_server": {
            "awsKey": "....",
            "awsSecret": "....",
            "awsRegion": "us-east-1",
            "appSecrets": "secrets_name",
            "deployTarget": "env-name",
            "updateCmd": "sudo sh -c 'cd /home/ec2-user/ && crusher2 run update_jobs env-name'",
            "setupCmd": "sudo sh -c 'cd /home/ec2-user/ && crusher2 run setup_job env-name'",
            "deployCmd": "sudo sh -c 'cd /home/ec2-user/ && crusher2 run deploy_job env-name'",
            "distributions": [
                {
                    "name": "distribution-name",
                    "id": "distribution-id"
                }
            ]
        }
    }
}

Description of environment options:

  • awsKey, awsSecret - access key and secret with permissions for AWS Secrets Manager, EC2, SSM Run Command, and any other services used (e.g. CloudFront)
  • appSecrets - the name of the secrets in AWS Secrets Manager for this environment.
  • deployTarget - crusher2 looks for instances with a matching deployTarget tag when running a setup/deploy from localhost:3000.
  • updateCmd - the command to update jobs; runs at the start of every deploy from localhost:3000
  • setupCmd - the setup command; usually only run at server launch, but it also runs during a deploy from localhost:3000 when "setup" is checked
  • deployCmd - the deploy command; the main step of a deploy from localhost:3000
  • distributions - list of CloudFront distributions, used when flushing cache from localhost:3000

crusher2 Commands

crusher2 has two requirements to list jobs / show a job / run a job:

  • It must be run from the parent directory of the jobs folder.
  • There must be a crusher2 environment json file either in the jobs folder or in the user's home directory.

If you are just running a deploy, the local jobs folder is not required.

Output of crusher2 --help:

Configuration and automation tool

Usage:
  crusher2 [command]

Available Commands:
  edit        Edit and deploy changes to an environment
  help        Help about any command
  list        List all jobs
  run         Run a job with the specified environment variables
  show        Show the contents of a job

Flags:
  -h, --help        help for crusher2
  -p, --test-mode   test mode

Use "crusher2 [command] --help" for more information about a command.

crusher2 has four "main" commands:

  • crusher2 list - this lists all of the jobs configured in the jobs folder
  • crusher2 show [job_name] - this shows all of the combined packages, scripts, configs, and commands that a job will run
  • crusher2 run [job_name] [env_name] - this runs the specified job on the local machine using the settings from the specified env_name (from crusherenv.json or env.json)
  • crusher2 edit - this starts the local web interface (localhost:3000) for editing secrets, running server setup, deploying a release, or invalidating cache.

Use Cases

1. Application Deployments

Deploy application updates to running EC2 instances tagged with a specific deployTarget:

  • Real-time progress monitoring via web interface
  • Parallel execution across multiple instances
  • Automatic job updates before deployment
  • Optional setup command execution
  • Live command output streaming

2. AMI Creation for Auto Scaling Groups

Build custom AMIs with pre-configured software and settings:

  • Launch a base EC2 instance
  • Run crusher2 run init_server [env_name] to configure the instance
  • Create an AMI from the configured instance
  • Use the AMI in your Auto Scaling Group launch template
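Sketched with the AWS CLI, the workflow above might look like this (all IDs below are placeholders; only the crusher2 step is the tool's own command):

```shell
# 1. launch a base instance (placeholder AMI and subnet IDs)
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type t3.small --subnet-id subnet-0123456789abcdef0

# 2. on that instance: configure it with crusher2
crusher2 run init_server env-name

# 3. bake an AMI from the configured instance (placeholder instance ID)
aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name "app-base-$(date +%Y%m%d)"

# 4. wait for the image, then reference it in the ASG launch template
aws ec2 wait image-available --image-ids ami-0fedcba9876543210
```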

3. Auto Scaling Group Instance Initialization

Automate initial setup when new ASG instances launch:

  • Configure user data to install crusher2
  • Pull jobs configuration from your repository
  • Run initialization jobs to configure the instance
  • Instance becomes ready to serve traffic
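A minimal user-data sketch for that sequence (the download URL is an assumption for illustration; substitute wherever your team distributes the binary):

```shell
#!/bin/sh
# hypothetical ASG user-data: install crusher2, pull jobs, run init
cd /home/ec2-user
# placeholder URL for the built linux/amd64 binary
curl -fsSL -o /usr/local/bin/crusher2 https://example.com/builds/linux/crusher2
chmod +x /usr/local/bin/crusher2
# the jobs folder (with env.json inside) must sit in the working directory
git clone https://github.com/murdinc/crusher2_jobs jobs
crusher2 run init_server env-name
```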

4. CloudFront Cache Management

Invalidate CloudFront distributions after deployments:

  • Select distributions from the web interface
  • Specify paths to invalidate
  • Monitor invalidation progress
  • Poll for completion status
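For comparison with the web interface flow, the same invalidation could be done by hand with the AWS CLI (distribution and invalidation IDs are placeholders):

```shell
# create an invalidation for specific paths
aws cloudfront create-invalidation \
    --distribution-id E1234567890ABC --paths "/index.html" "/assets/*"

# block until that invalidation completes
aws cloudfront wait invalidation-completed \
    --distribution-id E1234567890ABC --id I2J3K4L5M6N7
```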

5. Secrets Management

Manage application secrets and environment variables:

  • Edit secrets via the web interface
  • Sync to AWS Secrets Manager
  • Interpolate secrets into configuration files during job execution
  • Keep sensitive data out of your code repository
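To illustrate the interpolation step only (crusher2 uses HashiCorp HIL internally; this sed-based sketch merely mimics the ${var_name} placeholder behavior with made-up variable names):

```shell
#!/bin/sh
# template with placeholders, as a job's configs/ file might contain
cat > /tmp/app.conf.tmpl <<'EOF'
db_host = ${db_host}
db_pass = ${db_pass}
EOF

# values that would come from AWS Secrets Manager at job run time
db_host="db.internal.example.com"
db_pass="s3cret"

# naive stand-in for the interpolation pass
sed -e "s|\${db_host}|$db_host|g" -e "s|\${db_pass}|$db_pass|g" \
    /tmp/app.conf.tmpl > /tmp/app.conf
cat /tmp/app.conf
```

This prints the template with both placeholders filled in.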

Web Interface

To launch the deploy interface: crusher2 edit

Then visit: http://localhost:3000/

Available Actions

  • Deploy - Run deployment commands on target instances
  • Setup - Run initial setup on target instances (optional during deploy)
  • Edit Secrets - Manage environment variables and application secrets
  • Invalidate Cache - Flush CloudFront distribution cache

Anatomy of a job

Each directory inside the jobs folder is an individual job. The root of each job folder must contain a .json file with the same name as the folder. The file can have the following options:

{
    "name" : "job_name",
    "packages": [],
    "requires": [
        "required_job"
    ],
    "configDir": "/path/to/configs/",
    "configInterpolate": true,
    "scriptsDir": "/home/ec2-user/crusher-scripts",
    "scriptsInterpolate": true,
    "scriptsRun": true,
    "preConfigCmd": "",
    "postConfigCmd": "sudo service php-fpm reload; sudo service mediaproxy restart"
} 

Description of job options:

  • name - the name of the job
  • packages - list of packages to install with the native package installer
  • requires - list of other crusher2 jobs to include in this job
  • configDir - path to map the files in the configs folder to
  • configInterpolate - if set to true, the secret variables (set on localhost:3000) are interpolated into the ${var_name} placeholders in the config files
  • scriptsDir - path to map the files in the scripts folder to
  • scriptsInterpolate - if set to true, the secret variables (set on localhost:3000) are interpolated into the ${var_name} placeholders in the scripts
  • scriptsRun - if set to true, the scripts will be given execute permissions and then run after being copied to the scriptsDir
  • preConfigCmd - command to run at the start of the job, before packages, configs and scripts are run
  • postConfigCmd - command to run at the end of the job, after packages, configs and scripts are run

Using this process, many small reusable jobs can be combined to create larger tasks.
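For example, a hypothetical nginx job would be laid out like this (file names are illustrative; the .json file must match the folder name):

```
jobs/
└── nginx/
    ├── nginx.json      # the job definition shown above
    ├── configs/        # files copied to configDir
    │   └── nginx.conf
    └── scripts/        # files copied to scriptsDir (run if scriptsRun is true)
        └── tune_kernel.sh
```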


Jobs Setup (if you want to edit or create jobs)

Check out the jobs repo (https://github.com/murdinc/crusher2_jobs or https://github.com/murdinc/salon_jobs) and rename the folder to "jobs".

(Alternatively, you can check out the jobs folder in the root of this project's source tree if you want to compile it yourself.)


Architecture & Design

Job System

Jobs are modular, composable units of configuration that can:

  • Install system packages
  • Transfer and interpolate configuration files
  • Execute shell scripts
  • Depend on other jobs (dependency resolution)
  • Run pre/post configuration commands

Deployment Flow

  1. Discovery - Find EC2 instances by deployTarget tag
  2. Update - Pull latest jobs from git repository
  3. Setup (optional) - Run initial setup commands
  4. Deploy - Execute deployment commands via SSM
  5. Monitor - Stream output in real-time via WebSocket
  6. Verify - Track completion across all instances
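The fan-out in steps 4-5 can be sketched in shell (placeholder instance IDs; the real tool runs goroutines that drive SSM Run Command and stream output over WebSockets):

```shell
#!/bin/sh
# one worker per target instance, all running concurrently
deploy() {
    # stand-in for "send deployCmd to instance $1 via SSM and wait"
    echo "deploy finished on $1"
}

rm -f /tmp/deploy.*.log
for id in i-0aaa i-0bbb i-0ccc; do
    deploy "$id" > "/tmp/deploy.$id.log" &    # launch in background
done
wait                                          # block until every worker exits
cat /tmp/deploy.*.log
```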

Security Model

  • Secrets stored in AWS Secrets Manager (never in git)
  • SSM Session Manager (no SSH keys to manage)
  • Variable interpolation at deployment time
  • Audit trail via AWS CloudTrail integration
  • Principle of least privilege with IAM roles

Technology Stack

  • Language: Go 1.20+
  • Web Framework: Chi router
  • WebSockets: Gorilla WebSocket
  • AWS SDK: AWS SDK for Go v2
  • Template Engine: HashiCorp HIL (interpolation)
  • CLI Framework: Cobra
  • SSM Client: Custom SSM Session Manager implementation

Performance

  • Parallel deployment to 10+ instances at once
  • Sub-second WebSocket latency for real-time updates
  • Efficient binary size (~20MB compiled)
  • Low memory footprint during execution
  • Handles long-running deployments without timeout issues

Building from Source

# Linux (amd64)
env GOOS=linux GOARCH=amd64 go build -o ./builds/linux/crusher2 main.go

# macOS (Apple Silicon)
env GOOS=darwin GOARCH=arm64 go build -o ./builds/osx_m1/crusher2 main.go

# macOS (Intel)
env GOOS=darwin GOARCH=amd64 go build -o ./builds/osx_intel/crusher2 main.go

Contributing

This is a production tool actively maintained and used in production environments. Contributions, issues, and feature requests are welcome.

License

MIT License - See LICENSE file for details
