diff --git a/README.md b/README.md
index 79bd1be..0c81e17 100644
--- a/README.md
+++ b/README.md
@@ -1,50 +1,442 @@
-# EVerest dev environment
+# EVerest Development
-This subproject contains all utility files for setting up your development environment. So far this is the [edm - the Everest Dependency Manager](dependency_manager/README.md) which helps you orchestrating the dependencies between the different everest repositories.
+This repository contains the steps required to build EVerest, either in a container-based environment or on bare metal.
-You can install [edm](dependency_manager/README.md) very easy using pip.
+- [EVerest Container Based Development](#everest-container-based-development)
+ - [Prerequisites](#prerequisites)
+ - [Quick Start (using VS Code Development - full automation)](#quick-start-using-vs-code-development---full-automation)
+ - [Manual Docker Setup](#manual-docker-setup)
+ - [Available Services and Docker Compose Profiles](#available-services-and-docker-compose-profiles)
+ - [Environment Variables](#environment-variables)
+ - [Working with Multiple Repositories](#working-with-multiple-repositories)
+ - [Shell Completion (Optional)](#shell-completion-optional)
+ - [SIL Simulation](#sil-simulation)
+ - [Building EVerest](#building-everest)
+ - [Available Node-RED UI Simulations](#available-node-red-ui-simulations)
+ - [Troubleshooting](#troubleshooting)
+- [EVerest Bare Metal Development](#everest-bare-metal-development)
+ - [Python Prerequisites](#python-prerequisites)
+ - [EDM Prerequisites](#edm-prerequisites)
+ - [Building with Tests](#building-with-tests)
+ - [Cross-Compilation](#cross-compilation)
-All documentation and the issue tracking can be found in our main repository here: https://github.com/EVerest/everest
+## EVerest Container Based Development
-## Easy Dev Environment Setup
+### Prerequisites
-To setup a devcontainer in your workspace you can use the following command to run the `setup_devcontainer.sh` script locally.
+To install the prerequisites, please check the online documentation for your operating system or distribution:
-### 1. Prerequisites
+- VS Code with the Docker extension
+- Docker (see the official Docker documentation for installation instructions)
+- Docker Compose V2 (V1 is not supported); tested on Linux, specifically Ubuntu 22.04 and 24.04
-Create a new directory and navigate into it. This directory will be your new workspace or use an existing one.
+### Quick Start (using VS Code Development - full automation)
+
+The easiest way to develop is using VS Code with the development container:
+
+1. Follow the steps below
+2. VS Code will automatically build the container with your repository settings
+3. All development happens inside the container with the correct environment variables
+
+The contents of the folder where your code resides (e.g., `my-workspace`) are mapped into the container at `/workspace`.
+You can exit VS Code at any time; when you reopen it, VS Code will ask you again to reopen in the container.
+Here are the steps required:
+
+1. **Create a folder for your project**
+   Create a new directory and navigate into it, or use an existing directory as your workspace.
+
+ ```bash
+ mkdir my-workspace
+ cd my-workspace
+ ```
+
+2. **Install DevContainer template:**
+ You can use the following command to download and install the devcontainer template:
+
+ **One-liner:**
+
+ ```bash
+ curl -s https://raw.githubusercontent.com/EVerest/everest-dev-environment/main/devcontainer/template/setup-container > setup-container && chmod +x setup-container && ./setup-container
+ ```
+
+ **Manual clone (if curl fails):**
+
+ ```bash
+ git clone git@github.com:EVerest/everest-dev-environment.git
+ cp everest-dev-environment/devcontainer/template/setup-container setup-container
+ chmod +x setup-container
+ ./setup-container
+ # you can delete the everest-dev-environment folder, it is not needed anymore
+ rm -rf everest-dev-environment
+ ```
+
+ The script will ask you for:
+ 1. **Workspace directory**: Press Enter to use current directory (recommended)
+ 2. **Version**: Press Enter to use 'main' (recommended)
+ 3. **Continue if directory not empty**: Type 'y' and press Enter (since you downloaded the setup-container script)
+
+3. **Open in VS Code:**
+ Then open the workspace in Visual Studio Code:
+
+ ```bash
+ code .
+ ```
+
+   Alternatively, open the current folder via **File > Open Folder** in VS Code.
+
+ Choose **Reopen in container** when prompted by VS Code.
+
+### Manual Docker Setup
+
+If you prefer to run the container outside VS Code, the `./devrd` script provides comprehensive control:
+
+```bash
+# Quick start (generate .env and start all services)
+./devrd start
+
+# Step-by-step workflow:
+./devrd build # Build container (generates .env if missing)
+./devrd start # Start all services (generates .env if missing)
+./devrd stop # Stop all services
+./devrd purge # Remove all containers, images, and volumes
+
+# Container access:
+./devrd prompt # Get interactive shell in container
+./devrd exec # Execute single command in container
+
+# Node-RED SIL Simulation:
+./devrd flows # List available simulation flows
+./devrd flow <flow-file>       # Switch to a specific flow file
+
+# Custom environment configuration:
+./devrd env -w /path/to/workspace # Set workspace directory mapping
+```
+
+#### Available Services and Docker Compose Profiles
+
+Services are organized into logical profiles for easier management:
+
+| Profile | Service | Container Name | URL | Purpose |
+| --------------- | ----------------- | --------------------------------------- | -------------------------- | ------------------------- |
+| `mqtt/ocpp/sil` | **MQTT Server**   | `{folder}_devcontainer-mqtt-server-1`   | localhost:1883             | Basic MQTT broker         |
+| `ocpp`          | **OCPP DB**       | `{folder}_devcontainer-ocpp-db-1`       | Internal                   | OCPP database             |
+| `ocpp`          | **Steve (HTTP)**  | `{folder}_devcontainer-steve-1`         | http://localhost:8180      | OCPP backend management   |
+| `sil`           | **Node-RED UI**   | `{folder}_devcontainer-nodered-1`       | http://localhost:1880/ui   | SIL simulation interface  |
+| `sil`           | **MQTT Explorer** | `{folder}_devcontainer-mqtt-explorer-1` | http://localhost:4000      | MQTT topic browser        |
+| `ocpp/sil`      | **Dev Container** | `{folder}_devcontainer-devcontainer-1`  | Command line               | The development container |
+| `ocpp/sil`      | **Docker Proxy**  | `{folder}_devcontainer-docker-proxy-1`  | Internal                   | Secure Docker API access  |
+
+**Note:** The `all` profile is a synthetic profile that includes all services. Use `./devrd start all` or `./devrd start` (default) to start all services.
+
+Where `{folder}` is the name of your workspace folder; `{folder}_devcontainer` is the default Docker Compose project name (see below).
+
+**Usage Examples:**
+
+```bash
+# Start profiles
+./devrd start # Start all services (generates .env if missing)
+./devrd start all # Start all services (same as above)
+./devrd start sil # Start SIL simulation tools
+./devrd start ocpp # Start OCPP backend
+./devrd start mqtt # Start only MQTT server
+
+# Stop services
+./devrd stop # Stop all services
+./devrd stop all # Stop all services (same as above)
+./devrd stop sil # Stop SIL profile only
+./devrd stop ev-ws # Stop all containers matching pattern 'ev-ws'
+```
+
+The Docker Compose project name determines how containers are named and grouped.
+By default, it uses the **current folder name with a `_devcontainer` suffix** (consistent with VS Code's behavior), but it can be customized:
+
+| Behavior | Description |
+| ------------ | -------------------------------------------------------------------------------------------- |
+| **Default** | Uses current folder name + `_devcontainer` (e.g., `ev-ws_devcontainer` for `/path/to/ev-ws`) |
+| **Override** | Set `DOCKER_COMPOSE_PROJECT_NAME` environment variable |
+| **Example** | `DOCKER_COMPOSE_PROJECT_NAME="my-project" ./devrd start` |
+
+**Container naming pattern:** `{project-name}-{service}-1`
+
+- Default: `ev-ws_devcontainer-nodered-1`, `ev-ws_devcontainer-steve-1`
+- Custom: `my-project-nodered-1`, `my-project-steve-1`
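As an illustration, the default naming can be reproduced in plain bash (this mirrors the script's logic; `my-workspace` is a hypothetical folder name):

```bash
# Derive the Docker Compose project and container names the way devrd does.
unset DOCKER_COMPOSE_PROJECT_NAME          # ensure the default path is taken
workspace="/path/to/my-workspace"          # hypothetical workspace folder
project="${DOCKER_COMPOSE_PROJECT_NAME:-$(basename "$workspace")_devcontainer}"
service="nodered"
echo "${project}-${service}-1"             # prints: my-workspace_devcontainer-nodered-1
```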
+
+#### Environment Variables
+
+You can generate a `.env` file with auto-detected values using this command:
+
+ ```bash
+ ./devrd env # Generate .env file with auto-detected values
+ ```
+
+ This will create a `.devcontainer/.env` file with the following content:
+
+ ```bash
+ # Auto-generated by setup script
+ ORGANIZATION_ARG=EVerest
+ REPOSITORY_HOST=gitlab.com
+ REPOSITORY_USER=git
+ COMMIT_HASH=<..>
+ EVEREST_TOOL_BRANCH=main
+ UID=<..>
+ GID=<..>
+ HOST_WORKSPACE_FOLDER=/path/to/your/workspace
+ ```
+
+These variables are automatically mapped in the container to the following environment variables:
+
+- `ORGANIZATION_ARG`: Maps to `EVEREST_DEV_TOOL_DEFAULT_GIT_ORGANIZATION`
+- `REPOSITORY_HOST`: Maps to `EVEREST_DEV_TOOL_DEFAULT_GIT_HOST`
+- `REPOSITORY_USER`: Maps to `EVEREST_DEV_TOOL_DEFAULT_GIT_SSH_USER`
+- `EVEREST_TOOL_BRANCH`: Maps to `EVEREST_TOOL_BRANCH`
+- `HOST_WORKSPACE_FOLDER`: The directory mapped to `/workspace` inside the container
+
+Use this mechanism if your repositories live under a different organization, Git host, or user.
+This is useful if you have forked EVerest and host your development outside GitHub.
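For example, a fork hosted on a self-managed GitLab instance might use a `.devcontainer/.env` along these lines (all values here are hypothetical):

```bash
# Hypothetical overrides for a fork hosted outside GitHub
ORGANIZATION_ARG=my-org
REPOSITORY_HOST=gitlab.example.com
REPOSITORY_USER=git
```

Remember to rebuild the container after changing these values so they take effect.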
+
+**Workspace Folder Mapping:**
+
+The `/workspace` directory inside the container can be mapped to any folder on your host system. The workspace folder is determined in the following priority order:
+
+1. **Command line option**: `-w` or `--workspace` flag
+2. **Environment variable**: `HOST_WORKSPACE_FOLDER` environment variable
+3. **`.env` file**: `HOST_WORKSPACE_FOLDER` value in `.devcontainer/.env`
+4. **Current directory**: Falls back to the current working directory
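The resolution order above can be sketched as a small bash function (illustrative only; the actual logic lives in the `devrd` script, and the argument values here are hypothetical):

```bash
# Resolve the workspace folder using the documented priority order.
resolve_workspace() {
  local cli_flag="$1" env_var="$2" env_file_value="$3"
  if   [ -n "$cli_flag" ];       then echo "$cli_flag"         # 1. -w/--workspace flag
  elif [ -n "$env_var" ];        then echo "$env_var"          # 2. HOST_WORKSPACE_FOLDER env var
  elif [ -n "$env_file_value" ]; then echo "$env_file_value"   # 3. value from .devcontainer/.env
  else pwd                                                     # 4. current working directory
  fi
}

resolve_workspace "" "/home/user/ws" "/other/ws"   # prints: /home/user/ws
```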
+
+The `.devcontainer` directory (containing `.env` and Docker Compose files) is always located relative to where the `devrd` script is installed. This allows you to:
+
+- Run `devrd` from any directory in your workspace
+- Use a single `devrd` installation to manage multiple workspaces by changing `HOST_WORKSPACE_FOLDER` in the `.env` file
+
+```bash
+# Set workspace mapping via command line
+./devrd env -w /path/to/workspace # Map to any directory
+./devrd start
+
+# Or edit .devcontainer/.env directly and set HOST_WORKSPACE_FOLDER
+# Then run from anywhere:
+cd /path/to/workspace/subfolder
+../devrd start # Works correctly, uses workspace from .env file
+```
+
+#### Working with Multiple Repositories
+
+To work with multiple EVerest repositories:
+
+1. **Follow the [Quick Start](#quick-start-using-vs-code-development---full-automation) setup** to create your workspace and install the devcontainer template
+2. **Start VS Code** or run the container manually
+3. **Clone additional repositories:**
+
+```bash
+# Follow the Quick Start setup first and
+# Build and start the environment
+./devrd build # generates .env if missing and builds the container
+code . # if you use VSCode
+./devrd start # not using VSCode (generates .env if missing)
+./devrd prompt # not using VSCode
+
+# inside the container
+cd /workspace
+everest clone everest-core # or use the git command to clone
+everest clone everest-framework # or use the git command to clone
+cd everest-core
+# Build the project (see Building EVerest section for details)
+cmake -S . -B build -G Ninja
+ninja -C build install
+# this is building everest-core and it will use the everest-framework cloned locally
+# the rest of the dependencies will be automatically downloaded by edm
+# you can manually clone any of the dependencies locally if you want to
+```
+
+#### Shell Completion (Optional)
+
+Command line completion for the `devrd` script is **enabled by default** in the container. The completion file is automatically sourced when you start a shell session.
+
+If you want to enable completion on your host system (outside the container):
+
+**For Bash:**
+
+```bash
+# Add to your ~/.bashrc
+source .devcontainer/devrd-completion.bash
+```
+
+**For Zsh:**
```bash
-mkdir my-workspace
-cd my-workspace
+# Add to your ~/.zshrc
+autoload -U compinit && compinit
+source .devcontainer/devrd-completion.zsh
```
-### 2. Run the setup script
+**Available completions:**
-Run the following command to setup the devcontainer.
+- **Commands**: `env`, `build`, `start`, `stop`, `prompt`, `purge`, `exec`, `flows`, `flow`
+- **Options**: `-v`, `--version`, `-w`, `--workspace`, `--help`
+- **Node-RED flows**: dynamically detected from container (full file paths)
+- **Directories**: for workspace option
+- **Common commands**: for exec option
+
+**Example usage:**
```bash
-export BRANCH="main" && bash -c "$(curl -s --variable %BRANCH=main --expand-url https://raw.githubusercontent.com/EVerest/everest-dev-environment/{{BRANCH}}/devcontainer/setup-devcontainer.sh)"
+./devrd # Shows all commands
+./devrd start # Shows available profiles to start
```
-The script will ask you for the following information:
-1. Workspace directory: Default is the current directory. You can keep the default by pressing enter.
-2. everest-dev-environment version: Default is 'main'. You can keep the default by pressing enter.
+### SIL Simulation
+
+In SIL (software in the loop) mode, EVerest runs in simulation inside the container; the simulated charging sessions can be driven from the Node-RED web UI.
+
+#### Building EVerest
+
+```bash
+# 1. Start environment (HOST)
+./devrd start
+
+# 2. Build project (CONTAINER)
+./devrd prompt
+cd /workspace
+cmake -B build -S . -GNinja && ninja -C build install/strip
+
+# 3. Switch to simulation (HOST)
+./devrd flow everest-core/config/nodered/config-sil-dc-flow.json
+
+# 4. Open UI
+# Visit: http://localhost:1880/ui
+```
+
+**Note:** You can use `make` instead of `ninja`: drop `-GNinja` from the CMake command (CMake then defaults to Makefiles) and run `make -C build install/strip`.
+
+#### Available Node-RED UI Simulations
+
+| Flow File | Description |
+| -------------------------------------------------------------------- | -------------------- |
+| `everest-core/config/nodered/config-sil-dc-flow.json` | Single DC charging |
+| `everest-core/config/nodered/config-sil-dc-bpt-flow.json` | DC charging with BPT |
+| `everest-core/config/nodered/config-sil-energy-management-flow.json` | Energy management |
+| `everest-core/config/nodered/config-sil-two-evse-flow.json` | Two EVSE simulation |
+| `everest-core/config/nodered/config-sil-flow.json` | Basic SIL simulation |
+
+#### Troubleshooting
-### 3. Open in VS Code
+**Port conflicts:**
-After the script has finished, open the workspace in Visual Studio Code.
+All instances use the same host ports (1883, 1880, 4000, etc.), so only one instance can run at a time.
+If another process is using a port you need, here is how to detect and free it:
```bash
-code .
+sudo lsof -ti:1880 | xargs sudo kill -9
+./devrd start
```
-VS Code will ask you to reopen the workspace in a container. Click on the button `Reopen in Container`.
+**Regenerate environment configuration:**
-### 4. Getting started
+```bash
+./devrd env # Generate new .env file with auto-detection
+```
+
+**Customize environment variables:**
+
+```bash
+# Use specific branch for everest-dev-environment
+./devrd env -v release/1.0
+```
+
+**Important:** If you manually edit the `.env` file and change `EVEREST_TOOL_BRANCH` or other build arguments, you must rebuild the container for changes to take effect:
+
+```bash
+# Option 1: Force rebuild (recommended)
+./devrd env # Regenerate .env if you edited it manually
+./devrd build # Rebuild with new environment variables
+
+# Option 2: Clean rebuild (if rebuild doesn't work)
+./devrd stop # Stop all services
+./devrd purge # Remove all containers, images, and volumes
+./devrd build # Rebuild from scratch
+```
-As your set up dev environment suggests when you open a terminal, you can setup your EVerest workspace by running the following command:
+**Purge and rebuild:**
```bash
-everest clone everest-core
+./devrd purge # Remove all resources for current folder
+./devrd build # Will generate .env if missing
+```
+
+**Volume conflicts:**
+
+Docker volumes are shared. Use `./devrd purge` before switching instances.
+
+**SSH keys:**
+
+Ensure your SSH agent has the necessary keys for all repositories.
+
+**Container naming:**
+
+Docker containers are named based on the workspace directory to avoid conflicts.
+
+**Environment configuration issues:**
+
+If you're having issues with environment variables or configuration, see the [Environment Variables](#environment-variables) section above.
+
+**Container build issues:**
+
+If containers fail to build or start, see the [Manual Docker Setup](#manual-docker-setup) section for basic commands.
+
+**Switching between instances:**
+
+```bash
+# Stop current instance
+./devrd stop
+
+# Purge if switching to different branch/project
+./devrd purge
+
+# Start new instance
+cd ~/different-everest-directory
+./devrd start
+```
+
+## EVerest Bare Metal Development
+
+### Python Prerequisites
+
+For development outside containers, install:
+
+```bash
+python3 -m pip install protobuf grpcio-tools nanopb==0.4.8
+```
+
+### EDM Prerequisites
+
+To be able to build with dependencies hosted on Forgejo, you need the edm tool at version 0.8.0 or newer:
+
+```bash
+edm --version
+# expected output: edm 0.8.0 (or newer)
+```
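If you want to check this from a script, a version comparison with `sort -V` works (a sketch; it assumes `edm --version` prints `edm X.Y.Z`, and the `have` value below is hard-coded for illustration):

```bash
# Compare the installed edm version against the required minimum using sort -V.
required="0.8.0"
have="$(echo "edm 0.8.0" | awk '{print $2}')"   # substitute: $(edm --version | awk '{print $2}')
if [ "$(printf '%s\n' "$required" "$have" | sort -V | head -n1)" = "$required" ]; then
  echo "edm version OK ($have)"
else
  echo "edm $have is too old, please upgrade to >= $required" >&2
fi
```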
+
+### Building with Tests
+
+```bash
+cmake -Bbuild -S. -GNinja -DBUILD_TESTING=ON -DEVEREST_EXCLUDE_MODULES="Linux_Systemd_Rauc"
+ninja -C build
+ninja -C build test
+```
+
+### Cross-Compilation
+
+Install the SDK as provided by Yocto (or similar).
+Activate the environment (typically by sourcing a script).
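For example, with a Poky-based SDK the activation typically looks like this (the install path and script name are hypothetical; yours depend on the SDK version and target machine):

```bash
# Source the Yocto SDK environment script; afterwards CC, CXX, and the
# CMake toolchain settings point at the cross toolchain.
source /opt/poky/4.0/environment-setup-cortexa72-poky-linux
echo "$CC"   # should now name the cross-compiler
```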
+
+```bash
+cd {...}/everest-core
+cmake -S . -B build-cross -GNinja \
+    -DCMAKE_INSTALL_PREFIX=/var/everest \
+    -DEVC_ENABLE_CCACHE=1 \
+    -DCMAKE_EXPORT_COMPILE_COMMANDS=1 \
+    -DEVEREST_ENABLE_JS_SUPPORT=OFF \
+    -DEVEREST_ENABLE_PY_SUPPORT=OFF \
+    -Deverest-cmake_DIR=
+
+DESTDIR=dist ninja -C build-cross install/strip && \
+ rsync -av build-cross/dist/var/everest root@:/var
```
diff --git a/devcontainer/devrd b/devcontainer/devrd
new file mode 100755
index 0000000..01574e7
--- /dev/null
+++ b/devcontainer/devrd
@@ -0,0 +1,691 @@
+#!/bin/bash
+
+set -e
+
+# Default values
+EVEREST_TOOL_BRANCH="main"
+# Script directory - where devrd is located
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+# .devcontainer directory is always relative to the script location
+DEVCONTAINER_DIR="${SCRIPT_DIR}/.devcontainer"
+# .env file is always in the .devcontainer directory (relative to script)
+ENV_FILE="${DEVCONTAINER_DIR}/.env"
+
+# Function to load HOST_WORKSPACE_FOLDER from .env file
+# Usage: load_workspace_from_env [fallback]
+# If fallback is provided and workspace not found in .env, returns fallback
+# If no fallback provided, returns empty string (for use with ${var:-default} syntax)
+load_workspace_from_env() {
+ local fallback="$1"
+ if [ -f "$ENV_FILE" ]; then
+ local workspace=$(grep "^HOST_WORKSPACE_FOLDER=" "$ENV_FILE" 2>/dev/null | cut -d'=' -f2- | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
+ if [ -n "$workspace" ]; then
+ echo "$workspace"
+ return
+ fi
+ fi
+ # If fallback provided and workspace not found, return fallback
+ if [ -n "$fallback" ]; then
+ echo "$fallback"
+ fi
+}
+
+# HOST_WORKSPACE_FOLDER is the folder that is mapped to /workspace in the container
+# Priority: 1) Command line/env var, 2) .env file, 3) Current directory
+HOST_WORKSPACE_FOLDER="${HOST_WORKSPACE_FOLDER:-$(load_workspace_from_env)}"
+HOST_WORKSPACE_FOLDER="${HOST_WORKSPACE_FOLDER:-$(pwd)}"
+
+# Docker Compose project name (defaults to workspace folder name with _devcontainer suffix, can be overridden)
+# This matches VSC's naming convention: {workspace-folder-name}_devcontainer-{service-name}-1
+# If needed (and not running in VSCode), can be changed by setting the DOCKER_COMPOSE_PROJECT_NAME environment variable.
+DOCKER_COMPOSE_PROJECT_NAME="${DOCKER_COMPOSE_PROJECT_NAME:-$(basename "$HOST_WORKSPACE_FOLDER")_devcontainer}"
+
+
+# Function to detect if running inside container
+is_inside_container() {
+ # Check for /.dockerenv (standard Docker indicator)
+ [ -f /.dockerenv ] && return 0
+ # Check if /workspace exists and is mounted (devcontainer specific)
+ [ -d /workspace ] && [ -f /workspace/.devcontainer/devrd ] && return 0
+ return 1
+}
+
+# Function to show error when command is run from inside container
+show_inside_container_error() {
+ local cmd_name="${1:-this command}"
+ echo "✖ Error: This command cannot be run from inside the container"
+ echo ""
+ echo "You are currently inside the development container."
+ echo "Please run this command from the host system instead:"
+ echo ""
+ echo " 1. Exit the container (type 'exit' or press Ctrl+D)"
+ echo " 2. Run the command from your host terminal:"
+ echo " ./devrd $cmd_name"
+ echo ""
+ exit 1
+}
+
+# Function to run docker compose with static project name
+# Compose files are always relative to the script's .devcontainer directory
+docker_compose() {
+ docker compose -p "$DOCKER_COMPOSE_PROJECT_NAME" \
+ -f "${DEVCONTAINER_DIR}/docker-compose.yml" \
+ -f "${DEVCONTAINER_DIR}/general-devcontainer/docker-compose.devcontainer.yml" "$@"
+}
+
+# Function to validate folder path
+validate_folder() {
+ local folder="$1"
+
+ # Convert relative path to absolute
+ case "$folder" in
+ /*) ;; # Already absolute
+ *) folder="$(cd "$folder" && pwd)" ;; # Convert relative to absolute
+ esac
+
+ # Check if folder exists
+ if [ ! -d "$folder" ]; then
+ echo "Error: Folder '$folder' does not exist"
+ exit 1
+ fi
+
+ # Check if folder is readable
+ if [ ! -r "$folder" ]; then
+ echo "Error: Folder '$folder' is not accessible (permission denied)"
+ exit 1
+ fi
+
+ echo "$folder"
+}
+
+
+# Function to generate .env file
+generate_env() {
+ if is_inside_container; then
+ show_inside_container_error "env"
+ fi
+ # Process command line options
+ if [ -n "$ENV_OPTIONS" ]; then
+ set -- $ENV_OPTIONS
+ while [ $# -gt 0 ]; do
+ case "$1" in
+ -v|--version)
+ EVEREST_TOOL_BRANCH="$2"
+ shift 2
+ ;;
+ -w|--workspace)
+ HOST_WORKSPACE_FOLDER="$2"
+ shift 2
+ ;;
+ *)
+ shift
+ ;;
+ esac
+ done
+ fi
+
+ # Set workspace folder
+ if [ -n "$HOST_WORKSPACE_FOLDER" ]; then
+ HOST_WORKSPACE_FOLDER=$(validate_folder "$HOST_WORKSPACE_FOLDER")
+ else
+ HOST_WORKSPACE_FOLDER="$(pwd)"
+ fi
+
+ # Get commit hash
+    COMMIT_HASH=$(git ls-remote https://github.com/EVerest/everest-dev-environment.git "${EVEREST_TOOL_BRANCH}" 2>/dev/null | cut -f1 || echo "")
+
+ # Check if we need to update existing file
+ local needs_update=false
+ if [ -f "$ENV_FILE" ] && [ -s "$ENV_FILE" ]; then
+ # File exists, check if we have options that require updates
+ if [ -n "$ENV_OPTIONS" ]; then
+ needs_update=true
+ fi
+ fi
+
+ if [ ! -f "$ENV_FILE" ] || [ ! -s "$ENV_FILE" ] || [ "$needs_update" = true ]; then
+ cat > "$ENV_FILE" << EOF
+# Auto-generated by devrd script
+ORGANIZATION_ARG=EVerest
+REPOSITORY_HOST=github.com
+REPOSITORY_USER=git
+COMMIT_HASH=$COMMIT_HASH
+EVEREST_TOOL_BRANCH=$EVEREST_TOOL_BRANCH
+UID=$(id -u)
+GID=$(id -g)
+HOST_WORKSPACE_FOLDER=$HOST_WORKSPACE_FOLDER
+EOF
+ if [ "$needs_update" = true ]; then
+ echo "Updated .env file"
+ else
+ echo "Generated .env file"
+ fi
+ else
+ echo "Found existing .env file"
+ cat "$ENV_FILE"
+ fi
+}
+
+# Function to build the container
+build_container() {
+ if is_inside_container; then
+ show_inside_container_error "build"
+ fi
+ echo "Building development container..."
+ docker_compose --profile all build
+}
+
+# Function to get actual port mapping from docker compose
+get_port_mapping() {
+ local service_name=$1
+ local internal_port=$2
+
+ # Get the actual port mapping from docker compose
+ local port_mapping=$(docker_compose port $service_name $internal_port 2>/dev/null)
+
+ if [ -n "$port_mapping" ]; then
+ # Extract just the host port (remove the host part)
+ echo "$port_mapping" | sed 's/.*://'
+ else
+ echo ""
+ fi
+}
+
+
+
+# Function to display container links and tips
+display_container_status() {
+ echo ""
+ echo "Container Services Summary:"
+ echo "=============================="
+
+ # Get actual port mappings from docker compose
+ local mqtt_explorer_port=$(get_port_mapping mqtt-explorer 4000)
+ local steve_http_port=$(get_port_mapping steve 8180)
+
+ # Display links with actual ports
+ if [ -n "$mqtt_explorer_port" ]; then
+ echo "MQTT Explorer: http://localhost:$mqtt_explorer_port"
+ else
+ echo "MQTT Explorer: currently not running"
+ fi
+
+ if [ -n "$steve_http_port" ]; then
+ echo "Steve (HTTP): http://localhost:$steve_http_port"
+ else
+ echo "Steve (HTTP): currently not running"
+ fi
+
+ # Check if Node-RED is running
+ if docker_compose ps | grep -q "nodered"; then
+ echo "Node-RED UI: http://localhost:1880/ui"
+ else
+ echo "Node-RED UI: currently not running"
+ fi
+
+ echo ""
+ echo "Tips:"
+ echo " • MQTT Explorer: Browse and debug MQTT topics"
+ echo " • Steve: OCPP backend management interface"
+ echo " • Node-RED: Web-based UI for SIL simulations"
+ echo " • Use './devrd prompt' to access the container shell"
+    echo " • Use './devrd flows' to see available flows"
+ echo ""
+}
+
+
+# Function to start containers using profiles
+start_compose_profile() {
+ if is_inside_container; then
+ show_inside_container_error "start"
+ fi
+ local profile_or_service="$1"
+
+ if [ -n "$profile_or_service" ]; then
+ echo "Starting containers for profile/service: $profile_or_service..."
+ docker_compose --profile "$profile_or_service" up -d
+ else
+ echo "Starting the development container and all services..."
+ docker_compose --profile all up -d
+ fi
+
+ # Display workspace mapping
+ local workspace_folder=$(load_workspace_from_env "$(pwd)")
+ echo "Workspace mapping: $workspace_folder → /workspace"
+ echo ""
+
+ # Display container links
+ display_container_status
+}
+
+# Function to stop containers using profiles or container name pattern
+stop_compose_profile() {
+ if is_inside_container; then
+ show_inside_container_error "stop"
+ fi
+ local profile_or_pattern="$1"
+
+ if [ -n "$profile_or_pattern" ]; then
+ # Check if it's a valid profile name
+ case "$profile_or_pattern" in
+ mqtt|ocpp|sil|all)
+ echo "Stopping containers for profile: $profile_or_pattern..."
+ docker_compose --profile "$profile_or_pattern" stop
+ ;;
+ *)
+ # Treat as container name pattern
+ echo "Stopping containers matching pattern: $profile_or_pattern..."
+ local containers=$(docker ps --format "{{.Names}}" | grep -E "($profile_or_pattern)" || true)
+ if [ -z "$containers" ]; then
+ echo "No running containers found matching pattern: $profile_or_pattern"
+ return 1
+ fi
+ echo "$containers" | while read container; do
+ echo "Stopping container: $container"
+ docker stop "$container" 2>/dev/null || echo "Failed to stop container: $container"
+ done
+ ;;
+ esac
+ else
+ echo "Stopping the development container and all services..."
+ docker_compose --profile all stop
+ fi
+}
+
+# Function to purge everything
+purge_everything() {
+ if is_inside_container; then
+ show_inside_container_error "purge"
+ fi
+ local purge_pattern="${1:-$(basename "$HOST_WORKSPACE_FOLDER")}"
+ local current_project="$(basename "$HOST_WORKSPACE_FOLDER")"
+ echo "Purging all devcontainer resources for pattern: $purge_pattern..."
+
+ # Only use docker_compose down if purging the current project
+ if [ "$purge_pattern" = "$current_project" ]; then
+ echo "Stopping and removing containers for current project..."
+ docker_compose down -v --remove-orphans
+ else
+ echo "Purging resources for different project pattern: $purge_pattern"
+ echo "Skipping docker-compose cleanup (not current project)"
+ fi
+
+ # Remove all images related to the project
+ echo "Removing devcontainer images..."
+ docker images --format "table {{.Repository}}:{{.Tag}}" | grep -E "($purge_pattern)" | awk '{print $1}' | xargs -r docker rmi -f
+
+ # Remove all volumes related to the project (with force if needed)
+ echo "Removing devcontainer volumes..."
+ docker volume ls --format "{{.Name}}" | grep -E "($purge_pattern)" | while read volume; do
+ echo "Removing volume: $volume"
+ docker volume rm -f "$volume" 2>/dev/null || echo "Volume $volume could not be removed (may be in use)"
+ done
+
+ # Ask user if they want to purge CPM cache volume
+ echo ""
+ echo "CPM source cache volume (everest-cpm-source-cache) is shared across all workspaces."
+ read -p "Do you want to purge the CPM cache volume as well? [y/N]: " purge_cache
+ purge_cache="${purge_cache:-N}"
+ if [[ "$purge_cache" =~ ^[Yy]$ ]]; then
+ echo "Removing CPM cache volume..."
+ if docker volume rm everest-cpm-source-cache 2>/dev/null; then
+ echo "✔ CPM cache volume removed"
+ else
+ echo "⚠ CPM cache volume could not be removed (may be in use or not exist)"
+ fi
+ else
+ echo "Keeping CPM cache volume (will be reused for faster builds)"
+ fi
+
+ # Remove any dangling images and containers
+ echo ""
+ echo "Cleaning up dangling resources..."
+ docker system prune -f
+
+ echo ""
+ echo "✔ Purge complete! All devcontainer resources have been removed."
+}
+
+# Function to check if SSH agent is running
+check_ssh_agent() {
+ if [ -z "$SSH_AUTH_SOCK" ] || ! ssh-add -l >/dev/null 2>&1; then
+ echo "Error: SSH agent is not running or no keys are loaded."
+ echo "Please start the SSH agent and add your keys:"
+ echo " eval \$(ssh-agent)"
+ echo " ssh-add ~/.ssh/id_rsa # or your private key"
+ echo "Or if you're using a different key:"
+ echo " ssh-add ~/.ssh/your_private_key"
+ exit 1
+ fi
+}
+
+# Function to execute a command in the container
+exec_devcontainer() {
+ if is_inside_container; then
+ echo "✖ You're already inside the container."
+ echo ""
+ echo "To run a command, just execute it directly:"
+ if [ $# -gt 0 ]; then
+ echo " $@"
+ else
+        echo " <command>"
+ fi
+ exit 1
+ fi
+ echo "Checking if development container is running..."
+
+ # Check if the devcontainer service is running
+ if ! docker_compose ps devcontainer | grep -q "Up"; then
+ echo "Error: Development container is not running."
+ echo "Please start the container first with: ./devrd start"
+ echo "Or build and start with: ./devrd build && ./devrd start"
+ exit 1
+ fi
+
+ echo "Executing command in development container..."
+ run_in_devcontainer "$@"
+}
+
+# Function to get a shell prompt in the container
+prompt_devcontainer() {
+ if is_inside_container; then
+ echo "✖ You're already inside the container shell."
+ exit 1
+ fi
+ echo "Starting shell in development container..."
+ exec_devcontainer /bin/bash
+}
+
+# Helper function to check if Node-RED is running and get the URL
+# Sets nodered_url variable and returns 0 if running, 1 if not
+check_nodered_running() {
+ if is_inside_container; then
+ nodered_url="http://nodered:1880"
+ curl -s "$nodered_url/flows" >/dev/null 2>&1 && return 0
+ else
+ nodered_url="http://localhost:1880"
+ docker_compose ps | grep -q "nodered" && return 0
+ fi
+ return 1
+}
+
+# Helper function to execute a command in the container
+# Usage: run_in_devcontainer [--no-tty] [args...]
+# Executes directly if inside container, via docker_compose exec if on host
+# No error checking - assumes container is running when called from host
+# Use --no-tty for non-interactive commands that need output capture
+run_in_devcontainer() {
+ local no_tty=false
+ if [ "$1" = "--no-tty" ]; then
+ no_tty=true
+ shift
+ fi
+
+ if is_inside_container; then
+ "$@"
+ else
+ if [ "$no_tty" = true ]; then
+ docker_compose exec -T devcontainer "$@"
+ else
+ docker_compose exec devcontainer "$@"
+ fi
+ fi
+}
+
+# Function to list available flows
+list_nodered_flows() {
+ echo ""
+ echo "Available Node-RED Flows:"
+ echo "============================="
+
+ # Check if Node-RED is running
+ if ! check_nodered_running; then
+ echo "✖ Node-RED container is not running"
+ echo "Please start with './devrd start' first"
+ return 1
+ fi
+
+ # Find all flow files in the workspace
+ local flows
+ if is_inside_container; then
+ flows=$(find /workspace -name "*-flow.json" -type f 2>/dev/null | sort)
+ else
+ flows=$(docker_compose exec -T devcontainer find /workspace -name "*-flow.json" -type f 2>/dev/null | sort)
+ fi
+
+ if [ -z "$flows" ]; then
+ echo "No flow files found in workspace"
+ echo ""
+ echo "Expected pattern: *-flow.json"
+ echo "Search location: /workspace"
+ return 1
+ fi
+
+ echo "Found $(echo "$flows" | wc -l) flow file(s):"
+ echo ""
+ for flow in $flows; do
+ # Remove /workspace/ prefix to get relative path from workspace root
+ local relative_path=$(echo "$flow" | sed 's|^/workspace/||')
+ echo " Path: $relative_path"
+ done
+ echo ""
+ echo "Usage: ./devrd flow <flow-file-path>"
+ echo "Example: ./devrd flow everest-core/config/nodered/config-sil-dc-flow.json"
+ echo ""
+}
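The discovery logic above boils down to a `find` over the workspace plus a prefix strip. A minimal sketch, using a hypothetical `/tmp/ws` directory in place of `/workspace` (the prefix strip here uses parameter expansion, equivalent to the script's `sed` call):

```shell
# Create a throwaway workspace with one matching and one non-matching file
mkdir -p /tmp/ws/everest-core/config/nodered
touch /tmp/ws/everest-core/config/nodered/config-sil-flow.json
touch /tmp/ws/README.md

# Same pattern the script uses: only *-flow.json files qualify
flow=$(find /tmp/ws -name "*-flow.json" -type f | sort | head -n 1)

# Strip the workspace prefix to get the path relative to the workspace root
relative_path="${flow#/tmp/ws/}"
echo "$relative_path"
```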
+
+# Function to switch flow using REST API
+switch_nodered_flow() {
+ local flow_path="$1"
+
+ if [ -z "$flow_path" ]; then
+ echo "Error: Please specify a flow file path"
+ echo ""
+ echo "Available flows:"
+ list_nodered_flows
+ return 1
+ fi
+
+ # Check if Node-RED is running
+ if ! check_nodered_running; then
+ echo "✖ Node-RED container is not running"
+ echo "Please start with './devrd start' first"
+ return 1
+ fi
+
+ # Construct full path in container
+ local full_path="/workspace/$flow_path"
+
+ # Check if file exists and is readable, then copy to temp file
+ if ! run_in_devcontainer --no-tty test -r "$full_path"; then
+ echo "✖ Flow file not found or not readable: $flow_path"
+ echo ""
+ echo "Available flows:"
+ list_nodered_flows
+ return 1
+ fi
+ # Copy flow to temporary file
+ run_in_devcontainer --no-tty cat "$full_path" > /tmp/flows.json
+
+ echo "Switching Node-RED to flow: $(basename "$flow_path")"
+ echo "Source: $flow_path"
+
+ # Process environment variables in the flow JSON
+ # Replace "broker": "localhost" with "broker": "mqtt-server"
+ sed -i 's/"broker": "localhost"/"broker": "mqtt-server"/g' /tmp/flows.json
+
+ # Deploy flow via Node-RED REST API
+ echo "Deploying flow via Node-RED API..."
+ local response=$(curl -s -w "%{http_code}" -X POST "$nodered_url/flows" \
+ -H "Content-Type: application/json" \
+ -d @/tmp/flows.json)
+
+ local http_code="${response: -3}"
+
+ if [ "$http_code" = "200" ] || [ "$http_code" = "204" ]; then
+ echo "✔ Node-RED flow deployed successfully via API!"
+ if is_inside_container; then
+ echo "Access at: http://nodered:1880/ui (from container) or http://localhost:1880/ui (from host)"
+ else
+ echo "Access at: http://localhost:1880/ui"
+ fi
+ else
+ echo "✖ Failed to deploy flow via API (HTTP $http_code)"
+ echo "Response: ${response%???}"
+ rm -f /tmp/flows.json
+ return 1
+ fi
+
+ # Clean up temporary file
+ rm -f /tmp/flows.json
+}
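The deploy step relies on curl's `-w "%{http_code}"` appending the status code to the captured body; the two are then separated with parameter expansion. A self-contained sketch of that split, using a canned response string instead of a live server:

```shell
# Simulate what curl -s -w "%{http_code}" returns: body followed by status code
response='{"rev":"abc123"}200'

http_code="${response: -3}"   # last three characters: the HTTP status code
body="${response%???}"        # everything except the last three characters

echo "$http_code"
echo "$body"
```

Note the space in `${response: -3}`: without it, bash would parse the expression as the `${var:-default}` fallback form.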
+
+
+# Function to display help
+show_help() {
+ echo "Usage: $0 [COMMAND] [OPTIONS]"
+ echo ""
+ echo "Commands:"
+ echo " env Generate .env file with repository information (default)"
+ echo " build Build the development container"
+ echo " start [profile] Start containers (profiles: mqtt, ocpp, sil, all)"
+ echo " stop [profile|pattern] Stop containers by profile (mqtt, ocpp, sil, all) or container name pattern"
+ echo " purge [pattern] Remove all devcontainer resources (containers, images, volumes)"
+ echo " Optional pattern to match (default: current folder name)"
+ echo " exec <command> Execute a command in the development container (requires the container to be running)"
+ echo " prompt Get a shell prompt in the development container (requires the container to be running)"
+ echo " flows List available flows"
+ echo " flow <path> Switch to a specific flow file"
+ echo ""
+ echo "Options (for env command only):"
+ echo " -v, --version VERSION Everest tool branch (default: $EVEREST_TOOL_BRANCH, preserves existing if not specified)"
+ echo " -w, --workspace DIR Workspace directory to map to /workspace in container (default: current directory)"
+ echo " --help Display this help message"
+ echo ""
+ echo "Examples:"
+ echo " $0 env # Generate .env file with repository information"
+ echo " $0 build # Build container"
+ echo " $0 start # Start all containers"
+ echo " $0 start sil # Start SIL simulation tools (Node-RED, MQTT Explorer)"
+ echo " $0 start ocpp # Start OCPP services (Steve, OCPP DB, MQTT)"
+ echo " $0 start mqtt # Start only MQTT server"
+ echo " $0 stop sil # Stop SIL simulation tools"
+ echo " $0 stop ev-ws # Stop all containers matching pattern 'ev-ws'"
+ echo " $0 purge # Remove all devcontainer resources for current folder"
+ echo " $0 purge my-project # Remove all devcontainer resources matching 'my-project'"
+ echo " $0 exec ls -la # Execute command in container"
+ echo " $0 prompt # Get shell prompt in container"
+ echo " $0 flows # List available flows"
+ echo " $0 flow <path> # Switch to specific Node-RED flow file"
+
+ echo " $0 -w ~/Documents # Map Documents folder to /workspace"
+ echo " $0 --workspace /opt/tools # Map tools folder to /workspace"
+ exit 0
+}
+
+# Parse command line arguments
+COMMAND="env"
+ENV_OPTIONS=""
+
+# First pass: collect all options
+while [ $# -gt 0 ]; do
+ case $1 in
+ -v|--version|-w|--workspace)
+ # Store env-specific options for later use
+ ENV_OPTIONS="$ENV_OPTIONS $1 $2"
+ shift 2
+ ;;
+ --help)
+ show_help
+ ;;
+ exec)
+ COMMAND="$1"
+ shift
+ # For exec, pass all remaining arguments to the exec function
+ break
+ ;;
+ env|build|prompt|flows)
+ COMMAND="$1"
+ shift
+ # Don't break here, continue to collect more options
+ ;;
+ flow)
+ COMMAND="$1"
+ shift
+ # For flow, pass any remaining arguments as flow path
+ break
+ ;;
+ purge)
+ COMMAND="$1"
+ shift
+ # For purge, pass any remaining arguments as pattern
+ break
+ ;;
+ start|stop)
+ COMMAND="$1"
+ shift
+ # For start/stop, pass any remaining arguments as container name
+ break
+ ;;
+ *)
+ echo "Unknown option: $1"
+ show_help
+ ;;
+ esac
+done
+
+# Execute the command
+case $COMMAND in
+ env)
+ # Check SSH agent for Git operations
+ check_ssh_agent
+ generate_env
+ ;;
+ build)
+ # Only generate env if .env file doesn't exist or is empty
+ if [ ! -f "$ENV_FILE" ] || [ ! -s "$ENV_FILE" ]; then
+ # Check SSH agent for Git operations
+ check_ssh_agent
+ generate_env
+ fi
+ build_container
+ ;;
+ start)
+ # Only generate env if .env file doesn't exist or is empty
+ if [ ! -f "$ENV_FILE" ] || [ ! -s "$ENV_FILE" ]; then
+ # Check SSH agent for Git operations
+ check_ssh_agent
+ generate_env
+ fi
+ start_compose_profile "$@"
+ ;;
+ stop)
+ stop_compose_profile "$@"
+ ;;
+ purge)
+ purge_everything "$@"
+ ;;
+ exec)
+ if [ $# -eq 0 ]; then
+ echo "Error: exec command requires arguments"
+ show_help
+ fi
+ exec_devcontainer "$@"
+ ;;
+ prompt)
+ prompt_devcontainer
+ ;;
+ flows)
+ list_nodered_flows
+ ;;
+ flow)
+ if [ $# -eq 0 ]; then
+ echo "Error: flow command requires a flow file path"
+ show_help
+ fi
+ switch_nodered_flow "$1"
+ ;;
+ *)
+ echo "Unknown command: $COMMAND"
+ show_help
+ ;;
+esac
diff --git a/devcontainer/setup-container b/devcontainer/setup-container
new file mode 100755
index 0000000..951c777
--- /dev/null
+++ b/devcontainer/setup-container
@@ -0,0 +1,141 @@
+#!/bin/bash
+
+set -e
+
+# Function to install devcontainer template
+install_devcontainer() {
+ echo "🚀 Installing EVerest DevContainer Template"
+ echo "=========================================="
+
+ read -p "Enter the workspace directory (default is the current directory): " WORKSPACE_DIR
+ if [ -z "$WORKSPACE_DIR" ]; then
+ WORKSPACE_DIR="./"
+ fi
+ WORKSPACE_DIR=$(realpath -m "$WORKSPACE_DIR")
+
+ read -p "Enter the version of the everest-dev-environment (default is 'main'): " VERSION
+ if [ -z "$VERSION" ]; then
+ VERSION="main"
+ fi
+
+ echo "Create the workspace directory '$WORKSPACE_DIR' if it does not exist"
+ mkdir -p "$WORKSPACE_DIR"
+
+ # Check if workspace directory has files other than everest-dev-environment
+ WORKSPACE_CONTENTS=$(ls -A "$WORKSPACE_DIR" 2>/dev/null | grep -v "^everest-dev-environment$" || true)
+ if [ -n "$WORKSPACE_CONTENTS" ]; then
+ # The workspace directory is not empty (excluding everest-dev-environment), warning do you want to continue?
+ read -p "The workspace directory is not empty, do you want to continue? (y/N): " -r
+ case "$REPLY" in
+ [Nn]|"")
+ echo "Exiting.."
+ exit 1
+ ;;
+ [Yy])
+ ;;
+ *)
+ echo "Invalid input. Exiting.."
+ exit 1
+ ;;
+ esac
+ fi
+
+ # Check if everest-dev-environment exists locally
+ if [ -d "everest-dev-environment" ]; then
+ echo "Found local everest-dev-environment directory, using it instead of cloning..."
+ SOURCE_DIR="everest-dev-environment"
+ else
+ echo "No local everest-dev-environment found, cloning from GitHub..."
+ TMP_DIR=$(mktemp --directory)
+ git clone --quiet --depth 1 --single-branch --branch "$VERSION" https://github.com/EVerest/everest-dev-environment.git "$TMP_DIR"
+ SOURCE_DIR="$TMP_DIR"
+ fi
+
+ echo "Copy the template devcontainer configuration files to the workspace directory"
+ cp -n -r "$SOURCE_DIR"/devcontainer/template/. "$WORKSPACE_DIR"/
+
+ # Ensure devrd script is executable
+ if [ -f "$WORKSPACE_DIR/devrd" ]; then
+ chmod +x "$WORKSPACE_DIR/devrd"
+ echo "✓ DevRD script installed and made executable"
+ else
+ echo "✖ Warning: DevRD script not found after installation"
+ fi
+
+ # Only remove temporary directory if we cloned it
+ if [ "$SOURCE_DIR" != "everest-dev-environment" ]; then
+ echo "Remove the temporary everest-dev-environment repository"
+ rm -rf "$SOURCE_DIR"
+ fi
+
+ echo "DevContainer template installation complete!"
+ echo "Files installed to: $WORKSPACE_DIR"
+ echo ""
+ echo "Installed components:"
+ echo " ✓ Dev Yard script (./devrd)"
+ echo " ✓ DevContainer configuration (.devcontainer/)"
+ echo " ✓ Shell completion (devrd-completion.bash/.zsh)"
+ echo ""
+ echo "Next steps:"
+ echo " cd $WORKSPACE_DIR"
+ echo " ./devrd env"
+ echo " ./devrd build"
+ echo " ./devrd start"
+ echo " ./devrd prompt"
+ echo ""
+ echo "Optional - Enable command completion:"
+ echo " Add to your shell configuration file:"
+ echo " • For bash: Add 'source .devcontainer/devrd-completion.bash' to ~/.bashrc"
+ echo " • For zsh: Add 'source .devcontainer/devrd-completion.zsh' to ~/.zshrc"
+ echo " • For zsh: Also ensure completion is enabled: 'autoload -U compinit && compinit'"
+}
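The installer copies the template with `cp -n` (no-clobber), so re-running it never overwrites files you have already customized. A quick illustration with hypothetical `/tmp/demo-*` directories:

```shell
# A "template" source and a destination that already contains local edits
mkdir -p /tmp/demo-src /tmp/demo-dst
echo "template" > /tmp/demo-src/devrd
echo "new file" > /tmp/demo-src/devrd-completion.bash
echo "local edits" > /tmp/demo-dst/devrd

# -n: never overwrite an existing destination file
# (coreutils >= 9.2 exits non-zero when files are skipped, hence || true)
cp -n -r /tmp/demo-src/. /tmp/demo-dst/ || true

cat /tmp/demo-dst/devrd                  # untouched local edits
cat /tmp/demo-dst/devrd-completion.bash  # newly copied file
```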
+
+# Function to display help
+show_help() {
+ echo "Usage: $0 [OPTIONS]"
+ echo ""
+ echo "This script installs the EVerest DevContainer template to a workspace."
+ echo ""
+ echo "Installed components:"
+ echo " • DevRD script (./devrd) - Main development environment manager"
+ echo " • DevContainer configuration (.devcontainer/) - VS Code container config"
+ echo " • Shell completion (devrd-completion.bash/.zsh) - Command completion for bash/zsh"
+ echo ""
+ echo "After installation:"
+ echo " 1. cd to your workspace directory"
+ echo " 2. Run './devrd env' to generate .env file"
+ echo " 3. Run './devrd build' to build containers"
+ echo " 4. Run './devrd start' to start services"
+ echo " 5. Run './devrd prompt' to get shell access"
+ echo ""
+ echo "Optional - Enable command completion:"
+ echo " Add to your shell configuration file:"
+ echo " • For bash: Add 'source .devcontainer/devrd-completion.bash' to ~/.bashrc"
+ echo " • For zsh: Add 'source .devcontainer/devrd-completion.zsh' to ~/.zshrc"
+ echo " • For zsh: Also ensure completion is enabled: 'autoload -U compinit && compinit'"
+ echo ""
+ echo "Options:"
+ echo " --help Display this help message"
+ echo ""
+ echo "Examples:"
+ echo " $0 # Install DevContainer template to workspace"
+ echo ""
+ echo "After installation, use './devrd' to manage the development environment."
+ exit 0
+}
+
+# Parse command line arguments
+while [ $# -gt 0 ]; do
+ case $1 in
+ --help)
+ show_help
+ ;;
+ *)
+ echo "Unknown option: $1"
+ show_help
+ ;;
+ esac
+done
+
+# Execute the installation
+install_devcontainer
\ No newline at end of file
diff --git a/devcontainer/setup-devcontainer.sh b/devcontainer/setup-devcontainer.sh
deleted file mode 100755
index b8f1188..0000000
--- a/devcontainer/setup-devcontainer.sh
+++ /dev/null
@@ -1,39 +0,0 @@
-#!/bin/bash
-
-set -e
-
-read -p "Enter the workspace directory (default is the current directory): " WORKSPACE_DIR
-if [ -z "$WORKSPACE_DIR" ]; then
- WORKSPACE_DIR="./"
-fi
-WORKSPACE_DIR=$(realpath -m "$WORKSPACE_DIR")
-
-read -p "Enter the version of the everest-dev-environment (default is 'main'): " VERSION
-if [ -z "$VERSION" ]; then
- VERSION="main"
-fi
-
-echo "Create the workspace directory '$WORKSPACE_DIR' if it does not exist"
-mkdir -p $WORKSPACE_DIR
-
-if [ "$(ls -A $WORKSPACE_DIR)" ]; then
- # The workspace directory is not empty, warning do you want to continue?
- read -p "The workspace directory is not empty, do you want to continue? (y/N): " -r
- if [[ $REPLY =~ ^[Nn]$ || $REPLY = "" ]]; then
- echo "Exiting.."
- exit 1
- elif [[ ! $REPLY =~ ^[Yy]$ ]]; then
- echo "Invalid input. Exiting.."
- exit 1
- fi
-fi
-
-TMP_DIR=$(mktemp --directory)
-echo "Clone the everest-dev-environment repository to the workspace directory with the version $VERSION, temporarily.."
-git clone --quiet --depth 1 --single-branch --branch "$VERSION" https://github.com/EVerest/everest-dev-environment.git "$TMP_DIR"
-
-echo "Copy the template devcontainer configuration files to the workspace directory"
-cp -n -r $TMP_DIR/devcontainer/template/. $WORKSPACE_DIR/
-
-echo "Remove the everest-dev-environment repository"
-rm -rf "$TMP_DIR"
diff --git a/devcontainer/template/.devcontainer/devrd-completion.bash b/devcontainer/template/.devcontainer/devrd-completion.bash
new file mode 100755
index 0000000..92d9202
--- /dev/null
+++ b/devcontainer/template/.devcontainer/devrd-completion.bash
@@ -0,0 +1,96 @@
+#!/bin/bash
+
+# Bash completion for devrd script
+# Source this file or add to your .bashrc to enable completion
+
+_devrd_completion() {
+ local cur prev opts cmds
+ COMPREPLY=()
+ cur="${COMP_WORDS[COMP_CWORD]}"
+ prev="${COMP_WORDS[COMP_CWORD-1]}"
+
+ # Available commands
+ cmds="env build start stop prompt purge exec flows flow"
+
+ # Available options
+ opts="-v --version -w --workspace --help"
+
+ # Function to get available Node-RED flows dynamically
+ _get_nodered_flows() {
+ # Get the current project name (same logic as devrd script)
+ local project_name="${DOCKER_COMPOSE_PROJECT_NAME:-$(basename "$(pwd)")_devcontainer}"
+
+ # Check if we're in the right directory and container is running
+ if [ -f "devrd" ] && docker compose -p "$project_name" -f .devcontainer/docker-compose.yml -f .devcontainer/general-devcontainer/docker-compose.devcontainer.yml ps devcontainer | grep -q "Up"; then
+ # Get flows from the container and return full paths (relative to workspace)
+ docker compose -p "$project_name" -f .devcontainer/docker-compose.yml -f .devcontainer/general-devcontainer/docker-compose.devcontainer.yml exec -T devcontainer find /workspace -name "*-flow.json" -type f 2>/dev/null | sed 's|/workspace/||' | sort
+ else
+ # Fallback to common flow file paths
+ echo "everest-core/config/nodered/config-sil-dc-flow.json"
+ echo "everest-core/config/nodered/config-sil-dc-bpt-flow.json"
+ echo "everest-core/config/nodered/config-sil-energy-management-flow.json"
+ echo "everest-core/config/nodered/config-sil-two-evse-flow.json"
+ echo "everest-core/config/nodered/config-sil-flow.json"
+ fi
+ }
+
+ # Function to get available container names
+ _get_container_names() {
+ echo "mqtt ocpp sil"
+ }
+
+ # If the previous word is an option that takes an argument, complete based on the option
+ case "$prev" in
+
+ -v|--version)
+ # Complete with common version patterns
+ COMPREPLY=( $(compgen -W "main master develop release/1.0 release/1.1" -- "$cur") )
+ return 0
+ ;;
+ -w|--workspace)
+ # Complete directories
+ COMPREPLY=( $(compgen -d -- "$cur") )
+ return 0
+ ;;
+ flow)
+ # Complete with available flow file paths dynamically
+ local flows
+ flows=$(_get_nodered_flows)
+ COMPREPLY=( $(compgen -W "$flows" -- "$cur") )
+ return 0
+ ;;
+ start|stop)
+ # Complete with available container names
+ local containers
+ containers=$(_get_container_names)
+ COMPREPLY=( $(compgen -W "$containers" -- "$cur") )
+ return 0
+ ;;
+ exec)
+ # For exec command, complete with common commands
+ COMPREPLY=( $(compgen -W "ls pwd cd cmake ninja make" -- "$cur") )
+ return 0
+ ;;
+ esac
+
+ # If we're completing the first word (command), show commands
+ if [ $COMP_CWORD -eq 1 ]; then
+ COMPREPLY=( $(compgen -W "$cmds" -- "$cur") )
+ return 0
+ fi
+
+ # If we're completing an option, show options
+ if [[ "$cur" == -* ]]; then
+ COMPREPLY=( $(compgen -W "$opts" -- "$cur") )
+ return 0
+ fi
+
+ # For other cases, complete with files/directories
+ COMPREPLY=( $(compgen -f -- "$cur") )
+ return 0
+}
+
+# Register the completion function
+complete -F _devrd_completion devrd
+complete -F _devrd_completion ./devrd
+complete -F _devrd_completion ../devrd
\ No newline at end of file
diff --git a/devcontainer/template/.devcontainer/devrd-completion.zsh b/devcontainer/template/.devcontainer/devrd-completion.zsh
new file mode 100755
index 0000000..1015177
--- /dev/null
+++ b/devcontainer/template/.devcontainer/devrd-completion.zsh
@@ -0,0 +1,89 @@
+#!/bin/zsh
+
+# Zsh completion for devrd script
+# Source this file or add to your .zshrc to enable completion
+
+_devrd_completion() {
+ local context state line
+ typeset -A opt_args
+
+ # Available commands
+ local commands=(
+ 'env:Generate .env file with repository information'
+ 'build:Build the development container'
+ 'start:Start containers (profiles: mqtt, ocpp, sil)'
+ 'stop:Stop containers (profiles: mqtt, ocpp, sil)'
+ 'purge:Remove all devcontainer resources (containers, images, volumes)'
+ 'exec:Execute a command in the container'
+ 'prompt:Get a shell prompt in the container'
+ 'flows:List available flows'
+ 'flow:Switch to specific flow file'
+ )
+
+ # Available options
+ local options=(
+ '-v[Everest tool branch]:version:'
+ '--version[Everest tool branch]:version:'
+ '-w[Workspace directory]:directory:_files -/'
+ '--workspace[Workspace directory]:directory:_files -/'
+ '--help[Display help message]'
+ )
+
+ # Function to get available Node-RED flows dynamically
+ _get_nodered_flows() {
+ # Get the current project name (same logic as devrd script)
+ local project_name="${DOCKER_COMPOSE_PROJECT_NAME:-$(basename "$(pwd)")_devcontainer}"
+
+ # Check if we're in the right directory and container is running
+ if [ -f "devrd" ] && docker compose -p "$project_name" -f .devcontainer/docker-compose.yml -f .devcontainer/general-devcontainer/docker-compose.devcontainer.yml ps devcontainer | grep -q "Up"; then
+ # Get flows from the container and return full paths (relative to workspace)
+ docker compose -p "$project_name" -f .devcontainer/docker-compose.yml -f .devcontainer/general-devcontainer/docker-compose.devcontainer.yml exec -T devcontainer find /workspace -name "*-flow.json" -type f 2>/dev/null | sed 's|/workspace/||' | sort
+ else
+ # Fallback to common flow file paths
+ echo "everest-core/config/nodered/config-sil-dc-flow.json"
+ echo "everest-core/config/nodered/config-sil-dc-bpt-flow.json"
+ echo "everest-core/config/nodered/config-sil-energy-management-flow.json"
+ echo "everest-core/config/nodered/config-sil-two-evse-flow.json"
+ echo "everest-core/config/nodered/config-sil-flow.json"
+ fi
+ }
+
+ # Function to get available container names
+ _get_container_names() {
+ echo "mqtt ocpp sil"
+ }
+
+ # Main completion logic
+ _arguments -C \
+ "$options[@]" \
+ "1: :{_describe 'commands' commands}" \
+ "*::arg:->args"
+
+ case $state in
+ args)
+ case $line[1] in
+ flow)
+ _values 'flow files' $(_get_nodered_flows)
+ ;;
+ start|stop)
+ _values 'profiles' $(_get_container_names)
+ ;;
+ exec)
+ _values 'commands' 'ls' 'pwd' 'cd' 'cmake' 'ninja' 'make'
+ ;;
+ purge)
+ _files
+ ;;
+ esac
+ ;;
+ esac
+}
+
+# Register the completion function
+if command -v compdef >/dev/null 2>&1; then
+ compdef _devrd_completion devrd
+ compdef _devrd_completion ./devrd
+ compdef _devrd_completion ../devrd
+else
+ echo "Warning: zsh completion system not loaded. Add 'autoload -U compinit && compinit' to your .zshrc"
+fi
diff --git a/devcontainer/template/.devcontainer/docker-compose.yml b/devcontainer/template/.devcontainer/docker-compose.yml
index 6651755..290534c 100644
--- a/devcontainer/template/.devcontainer/docker-compose.yml
+++ b/devcontainer/template/.devcontainer/docker-compose.yml
@@ -8,33 +8,58 @@ services:
mqtt-server:
image: ghcr.io/everest/everest-dev-environment/mosquitto:docker-images-v0.2.0
ports:
- # allow multiple ports for host to avoid conflicts with other dev environments
- - 1883-1983:1883
- - 9001-9101:9001
+ - 1883:1883
+ - 9001:9001
+ profiles:
+ - all
+ - mqtt
+ - ocpp
+ - sil
ocpp-db:
image: mariadb:10.4.30 # pinned to patch-version because https://github.com/steve-community/steve/pull/1213
volumes:
- ocpp-db-data:/var/lib/mysql
ports:
- # allow multiple ports for host to avoid conflicts with other dev environments
- - 13306-13406:3306
+ - 13306:3306
environment:
MYSQL_RANDOM_ROOT_PASSWORD: "yes"
MYSQL_DATABASE: ocpp-db
MYSQL_USER: ocpp
MYSQL_PASSWORD: ocpp
+ profiles:
+ - all
+ - ocpp
steve:
image: ghcr.io/everest/everest-dev-environment/steve:docker-images-v0.2.0
ports:
- # allow multiple ports for host to avoid conflicts with other dev environments
- - 8180-8280:8180
- - 8443-8543:8443
+ - 8180:8180
+ - 8443:8443
depends_on:
- ocpp-db
+ profiles:
+ - all
+ - ocpp
mqtt-explorer:
image: ghcr.io/everest/everest-dev-environment/mqtt-explorer:docker-images-v0.2.0
depends_on:
- mqtt-server
ports:
- - 4000-4100:4000
+ - 4000:4000
+ profiles:
+ - all
+ - sil
+ nodered:
+ image: ghcr.io/everest/everest-dev-environment/nodered:docker-images-v0.2.0
+ ports:
+ - 1880:1880
+ volumes:
+ - node-red-data:/data
+ - ./nodered-settings.js:/data/settings.js
+ environment:
+ - NODE_RED_ENABLE_PROJECTS=false
+ - MQTT_BROKER=mqtt-server
+ - MQTT_PORT=1883
+ profiles:
+ - all
+ - sil
diff --git a/devcontainer/template/.devcontainer/general-devcontainer/Dockerfile b/devcontainer/template/.devcontainer/general-devcontainer/Dockerfile
index 53d23b4..e669862 100644
--- a/devcontainer/template/.devcontainer/general-devcontainer/Dockerfile
+++ b/devcontainer/template/.devcontainer/general-devcontainer/Dockerfile
@@ -1,17 +1,75 @@
# syntax=docker/dockerfile:1
FROM ghcr.io/everest/everest-ci/dev-env-base:v1.5.4
-# Update the package list
-RUN sudo apt update
+# Build arguments for customization
+ARG USERNAME=docker
+ARG USER_UID=1000
+ARG USER_GID=1000
+ARG EVEREST_TOOL_BRANCH=main
+ARG ORGANIZATION_ARG=EVerest
+ARG REPOSITORY_HOST=github.com
+ARG REPOSITORY_USER=git
+
+# Update the package list and install dependencies (run as root)
+USER root
+RUN apt-get update && apt-get install -y \
+ protobuf-compiler \
+ libsystemd-dev \
+ tmux \
+ chrpath \
+ cpio \
+ diffstat \
+ gawk \
+ wget \
+ zstd \
+ liblz4-tool \
+ file \
+ && DEBIAN_FRONTEND=noninteractive apt-get install -y locales \
+ && sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen \
+ && dpkg-reconfigure --frontend=noninteractive locales \
+ && update-locale LANG=en_US.UTF-8 \
+ && apt-get clean \
+ && rm -rf /var/lib/apt/lists/*
+
+ENV LANG=en_US.UTF-8
+
+
+# Modify the existing docker user and group to match the host's USER_UID and USER_GID
+RUN groupmod --gid ${USER_GID} ${USERNAME} && \
+ usermod --uid ${USER_UID} --gid ${USER_GID} ${USERNAME} && \
+ usermod -aG dialout ${USERNAME}
+
+# Switch to the docker user
+USER ${USERNAME}
+WORKDIR /home/${USERNAME}
+
+# Add known hosts for git repositories
+RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan ${REPOSITORY_HOST} >> ~/.ssh/known_hosts
# EVerest Development Tool - Dependencies
RUN pip install --break-system-packages \
- docker==7.1.0
-# EVerest Development Tool
-ARG DEV_ENV_TOOL_VERSION=everest-dev-tool-v0.1.0
+ nanopb
+
+# EVerest Development Tool with cache busting based on commit hash
RUN python3 -m pip install --break-system-packages \
- git+https://github.com/EVerest/everest-dev-environment@${DEV_ENV_TOOL_VERSION}#subdirectory=everest_dev_tool
+ git+https://github.com/EVerest/everest-dev-environment@${EVEREST_TOOL_BRANCH}#subdirectory=everest_dev_tool
+
+# Set environment variables
+ENV EVEREST_DEV_TOOL_DEFAULT_GIT_METHOD=ssh
+ENV EVEREST_DEV_TOOL_DEFAULT_GIT_HOST=${REPOSITORY_HOST}
+ENV EVEREST_DEV_TOOL_DEFAULT_GIT_SSH_USER=${REPOSITORY_USER}
+ENV EVEREST_DEV_TOOL_DEFAULT_GIT_ORGANIZATION=${ORGANIZATION_ARG}
+ENV LD_LIBRARY_PATH=/usr/local/lib
+
+# Set up welcome message
+RUN echo "echo \"🏔️ 🚘 Welcome to the EVerest development environment!\"" >> ${HOME}/.bashrc && \
+ echo "echo \"To initialize the EVerest core repository everest-core in your workspace please run the following command:\"" >> ${HOME}/.bashrc && \
+ echo "echo \"everest clone everest-core\"" >> ${HOME}/.bashrc && \
+ echo "cd /workspace" >> ${HOME}/.bashrc
-RUN echo "echo \"🏔️ 🚘 Welcome to the EVerest development environment!\"" >> ${HOME}/.bashrc
-RUN echo "echo \"To initialize the EVerest core repository everest-core in your workspace please run the following command:\"" >> ${HOME}/.bashrc
-RUN echo "echo \"everest clone everest-core\"" >> ${HOME}/.bashrc
+# Enable command line completion for devrd script by default
+RUN echo "" >> ${HOME}/.bashrc && \
+ echo "# Enable devrd command completion if available" >> ${HOME}/.bashrc && \
+ echo "if [ -f /workspace/.devcontainer/devrd-completion.bash ]; then" >> ${HOME}/.bashrc && \
+ echo " source /workspace/.devcontainer/devrd-completion.bash" >> ${HOME}/.bashrc && \
+ echo "fi" >> ${HOME}/.bashrc
\ No newline at end of file
diff --git a/devcontainer/template/.devcontainer/general-devcontainer/devcontainer.json b/devcontainer/template/.devcontainer/general-devcontainer/devcontainer.json
index c5f786b..4b73359 100644
--- a/devcontainer/template/.devcontainer/general-devcontainer/devcontainer.json
+++ b/devcontainer/template/.devcontainer/general-devcontainer/devcontainer.json
@@ -2,7 +2,7 @@
"name": "EVerest - ${localWorkspaceFolderBasename}",
"dockerComposeFile": ["../docker-compose.yml", "./docker-compose.devcontainer.yml"],
"service": "devcontainer",
- "runServices": ["devcontainer"],
+
"remoteUser": "docker",
"workspaceFolder": "/workspace",
"forwardPorts": [
diff --git a/devcontainer/template/.devcontainer/general-devcontainer/docker-compose.devcontainer.yml b/devcontainer/template/.devcontainer/general-devcontainer/docker-compose.devcontainer.yml
index 3b205a1..813e63c 100644
--- a/devcontainer/template/.devcontainer/general-devcontainer/docker-compose.devcontainer.yml
+++ b/devcontainer/template/.devcontainer/general-devcontainer/docker-compose.devcontainer.yml
@@ -7,6 +7,8 @@ volumes:
name: everest-cpm-source-cache
services:
+
+
docker-proxy:
image: tecnativa/docker-socket-proxy:latest
volumes:
@@ -21,28 +23,51 @@ services:
- VOLUMES=1
networks:
- docker-proxy-network
+ profiles:
+ - all
+ - ocpp
+ - sil
devcontainer:
depends_on:
- docker-proxy
+ env_file:
+ - .env
build:
context: ./general-devcontainer
dockerfile: Dockerfile
+ args:
+ USER_UID: ${UID:-1000}
+ USER_GID: ${GID:-1000}
+ ORGANIZATION_ARG: ${ORGANIZATION_ARG:-EVerest}
+ REPOSITORY_HOST: ${REPOSITORY_HOST:-github.com}
+ REPOSITORY_USER: ${REPOSITORY_USER:-git}
+ EVEREST_TOOL_BRANCH: ${EVEREST_TOOL_BRANCH:-main}
volumes:
- type: bind
- source: ..
+ source: ${HOST_WORKSPACE_FOLDER:-..}
target: /workspace
- type: volume
source: cpm-source-cache
target: /home/docker/.cache/cpm
+ # Mount the host's SSH agent socket into the container
+ - type: bind
+ source: ${SSH_AUTH_SOCK}
+ target: /ssh-agent
command: sleep infinity
environment:
MQTT_SERVER_ADDRESS: mqtt-server
MQTT_SERVER_PORT: 1883
DOCKER_HOST: tcp://docker-proxy:2375
CPM_SOURCE_CACHE: /home/docker/.cache/cpm
+ # Tell SSH to use the forwarded agent
+ SSH_AUTH_SOCK: /ssh-agent
networks:
- docker-proxy-network
- default
sysctls:
- net.ipv6.conf.all.disable_ipv6=0
+ profiles:
+ - all
+ - ocpp
+ - sil
diff --git a/devcontainer/template/.devcontainer/nodered-settings.js b/devcontainer/template/.devcontainer/nodered-settings.js
new file mode 100644
index 0000000..89bbf83
--- /dev/null
+++ b/devcontainer/template/.devcontainer/nodered-settings.js
@@ -0,0 +1,20 @@
+module.exports = {
+ // Flow file location
+ flowFile: 'flows.json',
+
+ // Enable projects
+ enableProjects: process.env.NODE_RED_ENABLE_PROJECTS === 'true',
+
+ // HTTP settings
+ httpNodeRoot: '/',
+ httpAdminRoot: '/',
+
+ // Logging
+ logging: {
+ console: {
+ level: "info",
+ metrics: false,
+ audit: false
+ }
+ }
+};
diff --git a/doc/REQUIREMENTS.md b/doc/REQUIREMENTS.md
new file mode 100644
index 0000000..69c3040
--- /dev/null
+++ b/doc/REQUIREMENTS.md
@@ -0,0 +1,264 @@
+# Docker-Based Workspace & Tooling — Requirements Specification
+
+Provide a lightweight, Docker-based workspace that runs both inside and outside Visual Studio Code (VS Code) and orchestrates a set of containers (MQTT Explorer, OCPP/steve, Dev/Build, Node-RED), together with a small setup tool to build, run, stop, purge, and interact with the environment. The dev container MUST support mounting a host folder as the workspace and MUST allow building EVerest and running SIL.
+
+---
+
+## Scope
+
+A minimal-dependency environment for development and simulation that:
+
+- Works **with** VS Code (Dev Containers) and **without** VS Code (CLI).
+- Runs the following containers: **MQTT Explorer**, **OCPP (steve + deps)**, **Dev/Build container**, **Node-RED for simulation**.
+- Provides a **setup tool** to manage lifecycle and developer workflows.
+- Supports **EVerest build** and **SIL execution** inside the dev container.
+
+---
+
+## Definitions
+
+- **Dev container**: Linux container used for building/running code (incl. EVerest + SIL).
+- **Setup tool**: a small CLI (Bash or Python stdlib) wrapping `docker compose` and other commands.
+- **SIL**: Software-in-the-Loop execution of EVerest components.
+- **Profiles**: `docker compose` profiles enabling/disabling groups of services.
+
+---
+
+## Target Platforms
+
+- **MUST**: Linux x86_64
+- **SHOULD**: Windows 11 (WSL2) and macOS (Apple Silicon)
+
+---
+
+## Functional Requirements
+
+### F1. Minimal Dependencies
+
+- **MUST** require only:
+ 1. Docker Engine (or Docker Desktop)
+ 2. Docker Compose V2
+- **Optional** (editor integration): VS Code + “Dev Containers” extension.
+- **MUST NOT** require global host package managers (Node, Python, Java, etc.).
+
+**Acceptance:** On a clean host with only Docker/Compose, the setup tool builds and launches all profiles.
+
+---
+
+### F2. VS Code Integration
+
+- Include `.devcontainer/` config enabling “Reopen in Container”.
+- Dev container **MUST** expose project as `/workspace` (see F6).
+- System **MUST** be usable without VS Code (see F3).
+
+**Acceptance:** Opening the repo in VS Code and selecting “Reopen in Container” starts the dev container with mounted workspace.
+
+---
+
+### F3. CLI Operation (Outside VS Code)
+
+- All features **MUST** be accessible via the setup tool without VS Code.
+
+**Acceptance:** Every operation in F4/F5 is executable from a terminal with no editor running.
+
+---
+
+### F4. Supported Containers
+
+Provide Compose services (each with its own profile):
+
+- **mqtt-explorer** — MQTT Explorer UI container
+- **ocpp-steve** — OCPP server (steve) + dependencies (e.g., DB)
+- **dev** — Build/dev container (toolchain for EVerest + SIL)
+- **nodered** — Node-RED for simulation
+- **Zellij or tmux** — optional, to view the logs of the other containers (low priority for now)
+
+**Acceptance:** `docker compose --profile <name> up` starts the corresponding service(s).
+
+---
+
+### F5. Setup Tool Capabilities
+
+Two separate CLI tools **MUST** be provided:
+
+**a) Installation Tool (`setup`)**
+
+- Runs installation directly when executed (no command needed)
+
+**b) Development Yard Tool (`devrd`)**
+
+**Start/Stop**
+
+- `start [mqtt|ocpp|dev|nodered|all]`
+- `stop [mqtt|ocpp|dev|nodered|all]`
+
+**Build/Purge**
+
+- `build` builds images.
+- `purge` removes containers/networks and optionally images/volumes.
+
+**Exec/Shell in Dev Container**
+
+- `exec "<command>"` runs a shell command in the **running** dev container.
+- `prompt` attaches an interactive shell to the dev container.
+
+**Select Node-RED Flow**
+
+- `flows` displays the available flows for the Node-RED container
+- `set-flow <flow-file>` selects the flow for the Node-RED container via bind-mount or env switch.
+- Selection **MUST** persist across restarts (volume or config).
+
+**Version checking**
+
+- Version check — the tool SHOULD check the version of the Docker tooling it uses (Docker Compose). On a major version difference it SHALL display a warning message.
+
+**Acceptance:** Each subcommand functions as specified and returns non-zero on error.
+
+---
+
+### F6. Dev Container Workspace Mount
+
+- **MUST** mount a host folder as `/workspace` (read-write).
+- Host path **MUST** be configurable (default: current folder).
+- File changes **MUST** be reflected both ways.
+
+**Acceptance:** Creating a file on the host appears in `/workspace` and vice versa.
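+
+At the compose level, the mount could be sketched like this, reusing the `HOST_WORKSPACE_FOLDER` variable from the defaults further down (the fallback to the current folder is illustrative):
+
+```yaml
+services:
+  dev:
+    volumes:
+      # Read-write bind mount; host path configurable, defaults to the current folder
+      - ${HOST_WORKSPACE_FOLDER:-.}:/workspace:rw
+```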
+
+---
+
+### F7. EVerest Build & SIL
+
+- Dev container **MUST** include prerequisites to build EVerest.
+- Dev container **COULD** include prerequisites to build a Yocto image containing EVerest.
+
+**Acceptance:** On a fresh checkout, a build of EVerest completes successfully (network access assumed for deps).
+
+---
+
+## Non-Functional Requirements
+
+### N1. Simplicity
+
+Repository **SHOULD** contain:
+
+- `.devcontainer/docker-compose.yml` (main compose file)
+- `.devcontainer/general-devcontainer/` (devcontainer configuration)
+- `setup` (≤ ~100 LOC, installation only)
+- `devrd` (≤ ~600 LOC, development yard management)
+- `README.md` (quickstart, troubleshooting)
+- Documentation files in `doc/` directory
+
+Both tools **SHOULD NOT** depend on non-standard Python packages; if Python is used, restrict it to the standard library. If Bash is used, rely only on POSIX tools.
+
+---
+
+### N2. Isolation & Persistence
+
+- Use a dedicated Docker network; services **MUST** be reachable by service name.
+- Persist appropriate data via named volumes (e.g., Node-RED data, steve DB).
+- `devrd purge` **MUST** delete persisted data.
+- Docker Compose project naming uses current folder name with `_devcontainer` suffix for consistency with VS Code.
+
+---
+
+### N3. Configuration
+
+- All tunables (ports, flow path, workspace path) **MUST** be configurable via `.devcontainer/.env` and overridable via env vars/CLI flags.
+- Environment configuration is auto-generated by `devrd env` command with sensible defaults.
+- Workspace mapping is configurable via the `devrd env -w <path>` option.
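+
+In practice, the intended precedence (`.env` defaults, overridden by environment variables, overridden by CLI flags) could look like this (the paths are illustrative):
+
+```bash
+# Persist a custom workspace mapping in the generated environment config
+./devrd env -w ~/src/everest
+
+# One-off override via an environment variable, without touching the config
+HOST_WORKSPACE_FOLDER=~/src/everest ./devrd start dev
+```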
+
+---
+
+### N4. Observability
+
+- `devrd start` **SHOULD** show per-service state, mapped ports, and container names.
+- `devrd flows` **SHOULD** show Node-RED container status and list available flows.
+- Container services summary displays actual port mappings and service URLs.
+
+---
+
+### N5. Security Baseline
+
+- Containers **MUST NOT** run as root unless required; prefer an unprivileged user for dev.
+- Avoid `--privileged` unless strictly necessary.
+- Default ports **SHOULD** bind to localhost unless cross-host access is intended.
+- Provide SSH agent integration for Git operations, with proper key management.
+
+---
+
+## Suggested Compose Structure (High-Level)
+
+- **Profiles**
+ - `mqtt` → MQTT Server
+ - `ocpp` → MQTT Server, OCPP DB, Steve
+ - `sil` → MQTT Server, Node-RED, MQTT Explorer
+
+- **Volumes**
+ - `nodered_data`, `steve_db`
+
+- **Network**
+ - Default Docker network (services reachable by name)
+
+- **Project Naming**
+ - `{workspace-folder-name}_devcontainer` (consistent with VS Code)
+
+---
+
+## CLI Sketch
+
+```bash
+# Installation (one-time setup)
+./setup
+
+# Development environment management
+./devrd start [dev|mqtt|ocpp|nodered|all]
+./devrd stop [dev|mqtt|ocpp|nodered|all]
+./devrd build [--all|service]
+./devrd purge [service|all] [--with-images] [--with-volumes]
+./devrd exec "<command>"
+./devrd shell
+./devrd set-flow ./flows/sim_fast.flow.json
+./devrd status
+./devrd logs [service] [--follow]
+./devrd build-everest
+./devrd run-sil [args...]
+```
+
+---
+
+## Defaults (Editable via `.devcontainer/.env`)
+
+```
+ORGANIZATION_ARG=EVerest
+REPOSITORY_HOST=github.com
+REPOSITORY_USER=git
+COMMIT_HASH=
+EVEREST_TOOL_BRANCH=main
+UID=
+GID=
+HOST_WORKSPACE_FOLDER=
+```
+
+---
+
+## Out-of-the-Box User Journeys
+
+1. **Run everything in VS Code**
+ `./setup` → Open repo → Reopen in Container → `./devrd start all` → open Node-RED/MQTT Explorer in browser.
+
+2. **Run headless (no VS Code)**
+ `./setup` → `./devrd start dev` → `./devrd build-everest` → `./devrd run-sil --scenario basic`
+
+3. **Switch Node-RED flow**
+ `./devrd set-flow ./flows/ac_slow.flow.json` → `./devrd stop nodered && ./devrd start nodered`
+
+---
+
+**Decision**:
+The implementation will consist of three parts:
+
+- a Bash script, `setup`, for the installation of the dev container
+- a pure Bash script, `devrd` (development yard), that handles the orchestration of the containers (build, start, stop, prompt, etc.)
+- a Python script for additional convenience commands and aliases (running INSIDE the devcontainer)
diff --git a/everest_dev_tool/src/everest_dev_tool/parser.py b/everest_dev_tool/src/everest_dev_tool/parser.py
index aad5de9..e35c0b7 100644
--- a/everest_dev_tool/src/everest_dev_tool/parser.py
+++ b/everest_dev_tool/src/everest_dev_tool/parser.py
@@ -2,7 +2,7 @@
import logging
import os
-from . import services, git_handlers
+from . import git_handlers
log = logging.getLogger("EVerest's Development Tool")
@@ -15,29 +15,6 @@ def get_parser(version: str) -> argparse.ArgumentParser:
subparsers = parser.add_subparsers(help="available commands")
- # Service related commands
- services_parser = subparsers.add_parser("services", help="Service related commands", add_help=True)
- services_parser.add_argument('-v', '--verbose', action='store_true', help="Verbose output")
- services_subparsers = services_parser.add_subparsers(help="Service related commands")
-
- start_service_parser = services_subparsers.add_parser("start", help="Start a service", add_help=True)
- start_service_parser.add_argument('-v', '--verbose', action='store_true', help="Verbose output")
- start_service_parser.add_argument("service_name", help="Name of Service to start")
- start_service_parser.set_defaults(action_handler=services.start_service_handler)
-
- stop_service_parser = services_subparsers.add_parser("stop", help="Stop a service", add_help=True)
- stop_service_parser.add_argument('-v', '--verbose', action='store_true', help="Verbose output")
- stop_service_parser.add_argument("service_name", help="Name of Service to stop")
- stop_service_parser.set_defaults(action_handler=services.stop_service_handler)
-
- services_info_parser = services_subparsers.add_parser("info", help="Show information about the current environment", add_help=True)
- services_info_parser.add_argument('-v', '--verbose', action='store_true', help="Verbose output")
- services_info_parser.set_defaults(action_handler=services.info_handler)
-
- list_services_parser = services_subparsers.add_parser("list", help="List all available services", add_help=True)
- list_services_parser.add_argument('-v', '--verbose', action='store_true', help="Verbose output")
- list_services_parser.set_defaults(action_handler=services.list_services_handler)
-
# Git related commands
clone_parser = subparsers.add_parser("clone", help="Clone a repository", add_help=True)
clone_parser.add_argument('-v', '--verbose', action='store_true', help="Verbose output")
diff --git a/everest_dev_tool/src/everest_dev_tool/services.py b/everest_dev_tool/src/everest_dev_tool/services.py
deleted file mode 100644
index 25df57b..0000000
--- a/everest_dev_tool/src/everest_dev_tool/services.py
+++ /dev/null
@@ -1,207 +0,0 @@
-import argparse
-import logging
-import os,sys
-import subprocess
-from dataclasses import dataclass
-from typing import List
-import docker
-import enum
-
-@dataclass
-class DockerEnvironmentInfo:
- container_id: str | None = None
- container_name: str | None = None
-
- container_image: str | None = None
- container_image_id: str | None = None
- container_image_digest: str | None = None
-
- compose_files: List[str] | None = None
- compose_project_name: str | None = None
-
- in_docker_container: bool = False
-
-@dataclass
-class DockerComposeCommand:
- class Command(enum.Enum):
- UP = "up"
- DOWN = "down"
- PS = "ps"
- compose_files: List[str]
- project_name: str
- command: Command
- services: List[str] | None = None
- def execute_command(self, log: logging.Logger):
- command_list = ["docker", "compose"]
- for compose_file in self.compose_files:
- command_list.extend(["-f", compose_file])
- command_list.extend(["-p", self.project_name])
- if self.command == DockerComposeCommand.Command.UP:
- command_list.extend(["up", "-d"])
- command_list.extend(self.services)
- elif self.command == DockerComposeCommand.Command.DOWN:
- command_list.extend(["down"])
- command_list.extend(self.services)
- elif self.command == DockerComposeCommand.Command.PS:
- command_list.extend(["ps"])
- else:
- log.error(f"Unknown command {self.command}")
- return
- log.debug(f"Executing command: {' '.join(command_list)}")
- subprocess.run(command_list, check=True)
-
-@dataclass
-class Service:
- """Class to represent a service"""
- name: str
- description: str
- start_command: List[str] | DockerComposeCommand
- stop_command: List[str] | DockerComposeCommand
-
-####################
-# Helper functions #
-####################
-
-def get_docker_environment_info(log: logging.Logger) -> DockerEnvironmentInfo:
- dei = DockerEnvironmentInfo()
-
- # Check if we are running in a docker container
- if not os.path.exists("/.dockerenv"):
- log.debug("Not running in Docker Container")
- dei.in_docker_container = False
- return dei
-
- log.debug("Running in Docker Container")
-
- dei.in_docker_container = True
-
- # Get the container information
- dei.container_id = subprocess.run(["hostname"], stdout=subprocess.PIPE).stdout.decode().strip()
- client = docker.from_env()
- dei.container_name = client.containers.get(dei.container_id).name
-
- # Get the image information
- dei.container_image = client.containers.get(dei.container_id).image.tags[0]#
- dei.container_image_id = client.containers.get(dei.container_id).image.id
- dei.container_image_digest = client.images.get(dei.container_image_id).id
-
- # Get the compose information
- if not os.path.exists("/workspace/.devcontainer/docker-compose.yml"):
- log.error("docker-compose.yml not found in /workspace/.devcontainer")
- sys.exit(1)
- dei.compose_files = ["/workspace/.devcontainer/docker-compose.yml"]
-
- # Check if the container is part of a docker-compose project
- if "com.docker.compose.project" not in client.containers.get(dei.container_id).attrs["Config"]["Labels"]:
- log.error("Container is not part of a docker-compose project")
- sys.exit(1)
-
- dei.compose_project_name = client.containers.get(dei.container_id).attrs["Config"]["Labels"]["com.docker.compose.project"]
-
- return dei
-
-def get_services(docker_env_info: DockerEnvironmentInfo, log: logging.Logger) -> List[Service]:
- return [
- Service(
- name="mqtt-server",
- description="MQTT Server",
- start_command=DockerComposeCommand(
- compose_files=docker_env_info.compose_files,
- project_name=docker_env_info.compose_project_name,
- services=["mqtt-server"],
- command=DockerComposeCommand.Command.UP
- ),
- stop_command=DockerComposeCommand(
- compose_files=docker_env_info.compose_files,
- project_name=docker_env_info.compose_project_name,
- services=["mqtt-server"],
- command=DockerComposeCommand.Command.DOWN
- )
- ),
- Service(
- name="steve",
- description="OCPP server for development of OCPP 1.6",
- start_command=DockerComposeCommand(
- compose_files=docker_env_info.compose_files,
- project_name=docker_env_info.compose_project_name,
- services=["steve"],
- command=DockerComposeCommand.Command.UP
- ),
- stop_command=DockerComposeCommand(
- compose_files=docker_env_info.compose_files,
- project_name=docker_env_info.compose_project_name,
- services=["steve", "ocpp-db"],
- command=DockerComposeCommand.Command.DOWN
- )
- ),
- Service(
- name="mqtt-explorer",
- description="Web based MQTT Client to inspect mqtt traffic",
- start_command=DockerComposeCommand(
- compose_files=docker_env_info.compose_files,
- project_name=docker_env_info.compose_project_name,
- services=["mqtt-explorer"],
- command=DockerComposeCommand.Command.UP
- ),
- stop_command=DockerComposeCommand(
- compose_files=docker_env_info.compose_files,
- project_name=docker_env_info.compose_project_name,
- services=["mqtt-explorer"],
- command=DockerComposeCommand.Command.DOWN
- )
- )
- ]
-
-def get_service_by_name(service_name: str, docker_env_info: DockerEnvironmentInfo, log: logging.Logger) -> Service:
- return next((service for service in get_services(docker_env_info, log) if service.name == service_name), None)
-
-############
-# Handlers #
-############
-
-def start_service_handler(args: argparse.Namespace):
- log = args.logger
- docker_env_info = get_docker_environment_info(log)
- service = get_service_by_name(args.service_name, docker_env_info, log)
- if service is None:
- log.error(f"Service {args.service_name} not found, try 'everest services list' to get a list of available services")
- return
-
- log.info(f"Starting service {service.name}")
- if isinstance(service.start_command, DockerComposeCommand):
- service.start_command.execute_command(log)
- else:
- subprocess.run(service.start_command, check=True)
-
-def stop_service_handler(args: argparse.Namespace):
- log = args.logger
- docker_env_info = get_docker_environment_info(log)
- service = get_service_by_name(args.service_name, docker_env_info, log)
- if service is None:
- log.error(f"Service {args.service_name} not found, try 'everest services list' to get a list of available services")
- return
-
- log.info(f"Stopping service {service.name}")
- if isinstance(service.stop_command, DockerComposeCommand):
- service.stop_command.execute_command(log)
- else:
- subprocess.run(service.stop_command, check=True)
-
-def list_services_handler(args: argparse.Namespace):
- log = args.logger
- docker_env_info = get_docker_environment_info(log)
- log.info("Available services:")
- for service in get_services(docker_env_info, log):
- log.info(f"{service.name}: {service.description}")
- log.debug(f"Start Command: {service.start_command}")
- log.debug(f"Stop Command: {service.stop_command}")
-
-def info_handler(args: argparse.Namespace):
- log = args.logger
- docker_env_info = get_docker_environment_info(log)
- command = DockerComposeCommand(
- compose_files=docker_env_info.compose_files,
- project_name=docker_env_info.compose_project_name,
- command=DockerComposeCommand.Command.PS
- )
- command.execute_command(log)