Metadata provider for OpenStack Ironic standalone: a lightweight service that provides OpenStack-compatible metadata endpoints for bare metal nodes managed by Ironic.
- OpenStack Metadata API Compatibility: Provides standard OpenStack metadata endpoints
- Multiple API Endpoints: Supports both OpenStack and EC2-compatible metadata formats
- Automatic Node Discovery: Finds nodes based on client IP addresses
- Configurable: Environment variable-based configuration
- Logging: Structured logging with request tracing
- `/openstack/latest/meta_data.json` - Node metadata
- `/openstack/latest/network_data.json` - Network configuration
- `/openstack/latest/user_data` - User data (cloud-init)
- `/openstack/latest/vendor_data.json` - Vendor-specific data
- `/openstack/latest/vendor_data2.json` - Extended vendor data
- `/latest/meta-data/` - EC2-style metadata
- `/latest/user-data` - User data
Configure the service using environment variables:
| Variable | Default | Description |
|---|---|---|
| `IRONIC_URL` | `http://localhost:6385` | Ironic API endpoint |
| `BIND_ADDR` | `169.254.169.254` | IP address to bind to |
| `BIND_PORT` | `80` | Port to bind to |
| `OS_USERNAME` | (empty) | OpenStack username (optional) |
| `OS_PASSWORD` | (empty) | OpenStack password (optional) |
| `OS_PROJECT_NAME` | (empty) | OpenStack project name (optional) |
| `OS_USER_DOMAIN_NAME` | `default` | OpenStack user domain (optional) |
| `OS_REGION_NAME` | (empty) | OpenStack region (optional) |
```shell
git clone https://github.com/appkins-org/ironic-metadata.git
cd ironic-metadata
go build -o ironic-metadata ./cmd/ironic-metadata
```

```shell
# Set environment variables
export IRONIC_URL=http://your-ironic-api:6385
export BIND_ADDR=169.254.169.254
export BIND_PORT=80

# Run the service
./ironic-metadata
```

```dockerfile
FROM golang:1.24-alpine AS builder
WORKDIR /app
COPY . .
RUN go mod download
RUN go build -o ironic-metadata ./cmd/ironic-metadata

FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/ironic-metadata .
CMD ["./ironic-metadata"]
```

The service supports OpenStack ConfigDrive functionality, allowing nodes to use pre-configured metadata, network configuration, and user data.
The service follows this priority order when serving metadata:
- ConfigDrive Data: If a node has `instance_info["configdrive"]` set, it will use that data first
- Dynamic Configuration: Falls back to extracting data from the node's `instance_info` fields
The service supports multiple configdrive formats:
- JSON String: Direct JSON configuration in `instance_info["configdrive"]`
- ISO Image: ConfigDrive ISO files (parsed using gophercloud utilities)
- Map Object: Direct map configuration
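A sketch of how the first and third formats above could be normalized into a single map; `decodeConfigDrive` is a hypothetical name, and ISO handling (done via gophercloud in the real service) is omitted here.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// decodeConfigDrive normalizes a configdrive value into a map.
// A string is treated as JSON; an ISO image would be detected and
// parsed separately (not shown).
func decodeConfigDrive(v interface{}) (map[string]interface{}, error) {
	switch cd := v.(type) {
	case map[string]interface{}:
		// Already a map object.
		return cd, nil
	case string:
		var m map[string]interface{}
		if err := json.Unmarshal([]byte(cd), &m); err != nil {
			return nil, fmt.Errorf("configdrive is not valid JSON: %w", err)
		}
		return m, nil
	default:
		return nil, fmt.Errorf("unsupported configdrive type %T", v)
	}
}

func main() {
	m, _ := decodeConfigDrive(`{"meta_data":{"hostname":"node-01"}}`)
	fmt.Println(m["meta_data"])
}
```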
Expected configdrive structure:

```json
{
  "meta_data": {
    "hostname": "node-hostname",
    "instance-id": "node-uuid",
    "local-hostname": "node-hostname"
  },
  "user_data": "#cloud-config\npackages:\n  - nginx",
  "network_data": {
    "links": [
      {
        "id": "eth0",
        "type": "physical",
        "mtu": 1500
      }
    ],
    "networks": [
      {
        "id": "network0",
        "type": "ipv4",
        "link": "eth0"
      }
    ]
  },
  "public_keys": {
    "default": "ssh-rsa AAAAB3NzaC1yc2E..."
  }
}
```

The service can create ConfigDrive ISOs using gophercloud utilities:
```go
// Example: Create a configdrive ISO
userData := "#cloud-config\npackages:\n  - nginx"
metaData := map[string]interface{}{
    "hostname":    "test-node",
    "instance-id": "node-uuid",
}
networkData := map[string]interface{}{
    "links": []interface{}{
        map[string]interface{}{
            "id":   "eth0",
            "type": "physical",
            "mtu":  1500,
        },
    },
}
isoBytes, err := createConfigDriveISO(userData, networkData, metaData)
```
1. Configure Ironic nodes with the following `instance_info` fields:

   ```json
   {
     "user_data": "#cloud-config\n...",
     "public_keys": { "default": "ssh-rsa AAAAB3NzaC1yc2E..." },
     "network_data": { "links": [...], "networks": [...] }
   }
   ```

2. Point nodes to the metadata service by configuring the DHCP server to provide the metadata service IP (169.254.169.254) as a route.

3. Network Configuration: Ensure the metadata service can reach the Ironic API and that deploying nodes can reach the metadata service IP.
- Client Request: A deploying node makes an HTTP request to 169.254.169.254
- IP Matching: The service extracts the client IP and searches Ironic for matching nodes using multiple methods:
  - Primary: Direct IP matching in node `instance_info`, configdrive, or driver information
  - Fallback: MAC address lookup via the DHCP lease file (`/shared/dnsmasq/dnsmasq.leases`) followed by port-to-node matching
- Data Retrieval: Node information is retrieved from Ironic's API
- Response: Appropriate metadata is returned in the requested format
The service uses multiple methods to identify which Ironic node corresponds to a client IP:
- Direct IP Matching: Searches node configurations for the client IP in:
  - `instance_info.fixed_ips`
  - ConfigDrive network data
  - Driver deployment options (IPA API URLs)
  - Node name (for testing)
- DHCP Lease Fallback: When direct IP matching fails:
  - Parses the DHCP lease file at `/shared/dnsmasq/dnsmasq.leases`
  - Extracts the MAC address for the client IP
  - Queries the Ironic ports API to find the port with the matching MAC address
  - Returns the node associated with that port
This two-tier approach ensures compatibility with various Ironic deployment scenarios and provides robust node discovery even when IP information isn't directly stored in node configurations.
```shell
curl http://169.254.169.254/openstack/latest/meta_data.json
```

Response:

```json
{
  "uuid": "550e8400-e29b-41d4-a716-446655440000",
  "name": "node-01",
  "hostname": "node-01",
  "launch_index": 0,
  "public_keys": {
    "default": "ssh-rsa AAAAB3NzaC1yc2E..."
  },
  "meta": {
    "memory_mb": "8192",
    "cpus": "4"
  }
}
```

```shell
curl http://169.254.169.254/openstack/latest/network_data.json
```

Response:

```json
{
  "links": [
    {
      "id": "eth0",
      "type": "physical",
      "mtu": 1500
    }
  ],
  "networks": [
    {
      "id": "network0",
      "type": "ipv4",
      "link": "eth0"
    }
  ],
  "services": []
}
```

```shell
curl http://169.254.169.254/openstack/latest/user_data
```

- Go 1.24 or later
- Access to an Ironic API
```shell
go mod download
go build ./...
```

```shell
go test ./...
```

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Submit a pull request
MIT License - see LICENSE for details.
- Node not found: Ensure the client IP can be matched to a node in Ironic
  - Check that the node's `instance_info` contains the client IP in `fixed_ips`
  - Verify that the DHCP lease file exists at `/shared/dnsmasq/dnsmasq.leases` for fallback lookup
  - Ensure ports are correctly configured in Ironic with MAC addresses that match DHCP leases
- Connection refused: Check that the Ironic API is accessible and credentials are correct
- Empty responses: Verify that nodes have the required `instance_info` fields set
The service expects dnsmasq lease format at `/shared/dnsmasq/dnsmasq.leases`:

```text
1750802648 9c:6b:00:70:59:8b 10.1.105.195 * *
1750802648 9c:6b:00:70:59:8a 10.1.105.194 * *
```

Format: `timestamp mac_address ip_address hostname client_id`
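Parsing this format comes down to splitting each line on whitespace and matching the third field; `macForIP` below is a minimal sketch of that lookup, not the service's actual parser.

```go
package main

import (
	"fmt"
	"strings"
)

// macForIP scans dnsmasq lease lines (timestamp mac ip hostname client_id)
// and returns the MAC address recorded for the given client IP.
func macForIP(leases, ip string) (string, bool) {
	for _, line := range strings.Split(leases, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 3 && fields[2] == ip {
			return fields[1], true
		}
	}
	return "", false
}

func main() {
	leases := "1750802648 9c:6b:00:70:59:8b 10.1.105.195 * *\n" +
		"1750802648 9c:6b:00:70:59:8a 10.1.105.194 * *"
	mac, ok := macForIP(leases, "10.1.105.194")
	fmt.Println(mac, ok) // 9c:6b:00:70:59:8a true
}
```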
Enable debug logging by setting the log level:
```shell
export LOG_LEVEL=debug
```

Ensure:
- The metadata service can reach Ironic API
- Deploying nodes can reach 169.254.169.254
- Proper routing is configured for metadata IP
- No firewall blocking access to the metadata service
For local network integration using macvlan networking:
```shell
# Quick setup with auto-detection
./scripts/setup-network.sh
docker-compose up -d ironic-metadata
```

```shell
# Manual setup
docker-compose up -d ironic-metadata
```

See the Docker Deployment Guide for detailed setup instructions.
To make 169.254.169.254 globally routable across your network infrastructure, several methods are available:
```shell
# Setup NAT routing automatically
sudo ./scripts/setup-advanced-networking.sh nat-routing

# Or specify host IP manually
sudo ./scripts/setup-advanced-networking.sh nat-routing 192.168.1.100
```

```shell
# Setup BGP routing with BIRD
sudo ./scripts/setup-advanced-networking.sh bgp-bird 1.1.1.1 65001 192.168.1.1 65000
# Arguments: ROUTER_ID LOCAL_AS PEER_IP PEER_AS
```

```shell
# Setup keepalived for HA
sudo ./scripts/setup-advanced-networking.sh keepalived eth0 100 secure_password
# Arguments: INTERFACE PRIORITY AUTH_PASS
```

```shell
# Setup HAProxy load balancer
sudo ./scripts/setup-advanced-networking.sh haproxy "192.168.1.10:80,192.168.1.11:80,192.168.1.12:80"
```

```shell
# Test metadata service connectivity
./scripts/setup-advanced-networking.sh test

# Manual testing
curl http://169.254.169.254/openstack/latest/meta_data.json
ping 169.254.169.254
traceroute 169.254.169.254
```

For complex networking scenarios including:
- Multi-site deployments
- Cloud provider integrations (AWS, GCP, Azure)
- Container orchestration (Kubernetes)
- Software-defined networking (OpenStack Neutron, VMware NSX-T)
See the Advanced Networking Guide for comprehensive configuration examples.
- Centralized: Single metadata service with global routing
- Distributed: Multiple instances with anycast routing
- Proxy-based: Load balancer with multiple backend services
- Hybrid: Combination of methods for different network segments
Each approach has different trade-offs in terms of complexity, availability, and performance. Choose based on your infrastructure requirements.