An Ansible-based project for bootstrapping a Kubernetes cluster with kubeadm on bare metal or VMs, primarily targeting Ubuntu hosts.
This repository favors readability, repeatability, and explicit control over convenience. It is not a one‑click installer or a thin wrapper around kubeadm; it is a worked example of how to assemble a cluster step by step with Ansible, with as little magic as possible. It is built for clarity, determinism, and extensibility, and is meant to be read, adapted, and evolved as your cluster grows.
The primary goal is to provide a solid starting point for building and operating a self‑managed Kubernetes cluster on physical or virtual machines, with the assumptions and tradeoffs made explicit.
In practical terms, this repo is a collection of Ansible playbooks that:
- Prepare Ubuntu-based hosts for Kubernetes
- Install and configure a container runtime and Kubernetes components
- Initialize a control plane with `kubeadm`
- Join worker nodes in a repeatable, idempotent way
- Provide basic verification and a clean reset path
It is designed to be something you can read end-to-end and understand, not just run and forget.
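To give a sense of the style, here is a minimal sketch of the kind of host preparation involved. The module choices and file edits are illustrative and are not lifted from the playbooks in this repository:

```yaml
# Sketch of typical host preparation for kubeadm; illustrative only.
- name: Prepare hosts for Kubernetes (example)
  hosts: all
  become: true
  tasks:
    - name: Disable swap immediately (the kubelet refuses to start with swap on)
      ansible.builtin.command: swapoff -a
      when: ansible_facts['swaptotal_mb'] | default(0) | int > 0

    - name: Keep swap disabled across reboots
      ansible.builtin.replace:
        path: /etc/fstab
        regexp: '^([^#].*\sswap\s.*)$'
        replace: '# \1'
```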
- An Ansible‑driven kubeadm workflow for bringing up a Kubernetes control plane and joining worker nodes
- Suitable for:
  - Homelabs
  - Bare‑metal environments
  - Air‑gapped or semi‑restricted networks (with adaptation)
  - Learning how Kubernetes is actually assembled under the hood
- Structured as incremental, inspectable steps, not a monolithic play
This is not a distribution, installer, or managed platform replacement. It intentionally exposes the mechanics that managed services abstract away.
- Explicit over implicit: every significant action is represented in Ansible, with minimal hidden defaults.
- Verification‑first: preflight checks and probes are used to fail early when prerequisites are not met.
- Idempotent and reversible: playbooks are written to be safely re‑run, and a reset path is included.
- Composable: the cluster bootstrap is split into logical phases so components can be swapped or extended.
- No environment leakage: this repository contains no hostnames, IP addresses, usernames, or secrets tied to any real environment.
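As a concrete illustration of the idempotency principle, a declarative task such as the one below converges to a desired state and is safe to re-run. The sysctl key and file name are examples, not values taken from this repository, and the `ansible.posix` collection is required:

```yaml
# Idempotent by construction: re-running converges to the same state.
# Requires the ansible.posix collection; key and file name are illustrative.
- name: Example of an idempotent, declarative change
  hosts: all
  become: true
  tasks:
    - name: Allow bridged traffic to be seen by iptables (common kubeadm prerequisite)
      ansible.posix.sysctl:
        name: net.bridge.bridge-nf-call-iptables
        value: "1"
        sysctl_file: /etc/sysctl.d/99-kubernetes.conf
        state: present
        reload: true
```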
```
.
├── ansible.cfg
├── inventory/
│   └── hosts.ini
├── group_vars/
│   ├── all.yml
│   ├── all.secrets.yml   # intentionally excluded / example-only
│   ├── runtime.yml
│   ├── kubernetes.yml
│   ├── kube_cluster.yml
│   ├── cni.calico.yml
│   └── reset.yml
├── playbooks/
│   ├── 00-preflight.yml
│   ├── 10-runtime.yml
│   ├── 15-k8s-preprobe.yml
│   ├── 15-k8s-repo-probe.yml
│   ├── 20-kubernetes-repo.yml
│   ├── 20-kubernetes.yml
│   ├── 30-control-plane-init.yml
│   ├── 35-verify-control-plane.yml
│   ├── 36-controller-kubeconfig.yml
│   ├── 40-worker-join.yml
│   └── 90-reset.yml
└── README.md
```
Each playbook represents a discrete phase in cluster lifecycle management, making it easier to reason about failure modes and extensions.
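If you want a single entry point, the numbered phases can be chained with `import_playbook`. The wrapper below is not part of the repository; it simply lists the playbooks above in numeric order (consult the playbooks themselves for which probe and kubeconfig steps your environment actually needs):

```yaml
# site.yml (hypothetical wrapper, not included in this repository):
# chains the numbered phases so a full bootstrap is a single run.
- import_playbook: playbooks/00-preflight.yml
- import_playbook: playbooks/10-runtime.yml
- import_playbook: playbooks/15-k8s-preprobe.yml
- import_playbook: playbooks/15-k8s-repo-probe.yml
- import_playbook: playbooks/20-kubernetes-repo.yml
- import_playbook: playbooks/20-kubernetes.yml
- import_playbook: playbooks/30-control-plane-init.yml
- import_playbook: playbooks/35-verify-control-plane.yml
- import_playbook: playbooks/36-controller-kubeconfig.yml
- import_playbook: playbooks/40-worker-join.yml
```

With such a wrapper, a full bootstrap becomes one `ansible-playbook -i inventory/hosts.ini site.yml` run; invoking the numbered playbooks individually works just as well and keeps each phase inspectable.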
This project currently targets Ubuntu LTS hosts (tested against recent 22.04/24.04 releases).
Other distributions may work with adjustments, but Ubuntu is the explicit baseline reflected in package management, defaults, and assumptions.
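A preflight-style assertion for that baseline might look like the following sketch; the 22.04 version floor shown here is an assumption, and `00-preflight.yml` may implement the check differently:

```yaml
# Illustrative distribution check; the version floor is an assumption,
# not a value taken from this repository.
- name: Enforce the Ubuntu baseline
  hosts: all
  gather_facts: true
  tasks:
    - name: Assert the host matches the expected distribution and version
      ansible.builtin.assert:
        that:
          - ansible_facts['distribution'] == 'Ubuntu'
          - ansible_facts['distribution_version'] is version('22.04', '>=')
        fail_msg: "This project targets Ubuntu 22.04 LTS or newer."
```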
In addition, this repository assumes:
- Linux hosts with systemd
- SSH access for Ansible
- A supported container runtime
- Internet access for package installation (unless mirrored)
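These assumptions can be smoke-tested before anything is installed. The play below is a sketch, not part of the repository:

```yaml
# Quick reachability and systemd sanity check; illustrative only.
- name: Confirm Ansible can manage the hosts
  hosts: all
  gather_facts: false
  tasks:
    - name: Verify SSH connectivity and a usable Python interpreter
      ansible.builtin.ping:

    - name: Confirm systemd is the service manager
      ansible.builtin.command: systemctl is-system-running
      register: systemd_state
      changed_when: false
      # "degraded" still means systemd is present and usable for our purposes
      failed_when: systemd_state.stdout not in ['running', 'degraded']
```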
Exact versions, IP addressing, and credentials are deliberately left to the operator and defined via inventory and group variables.
The inventory is intentionally minimal and example‑driven. You are expected to replace placeholders with values appropriate to your environment.
Example:
```ini
[control_plane]
cp1 ansible_host=<CONTROL_PLANE_IP> ansible_user=<SSH_USER>

[workers]
worker1 ansible_host=<WORKER_IP> ansible_user=<SSH_USER>
```

No real addresses or usernames are included in this repository.
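Cluster-wide settings follow the same pattern: placeholders in `group_vars` that you override for your environment. The keys below are hypothetical examples of the kind of values defined there, not the repository's actual variable names:

```yaml
# group_vars/all.yml -- hypothetical keys, shown only to illustrate the pattern;
# the repository defines its own variable names across the group_vars files.
kubernetes_version: "<K8S_VERSION>"   # pin explicitly to the release you intend to run
pod_network_cidr: "<POD_CIDR>"        # must agree with the CNI configuration you deploy
container_runtime: containerd         # example value; the runtime phase defines what is installed
```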
High‑level flow:
- Preflight validation
- Runtime install/config
- Kubernetes package setup
- `kubeadm init` on the control plane
- Basic health gate (node Ready, CoreDNS)
- Join workers
- Optional reset / teardown
The intent is that each step is inspectable and re‑runnable, so you can treat the repo as the source of truth for how the cluster is assembled.
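To make that concrete, the sketch below shows one way the init and health-gate phases can be expressed. The pod CIDR, file paths, and task layout are illustrative and may differ from the actual playbooks:

```yaml
# Illustrative only: guard kubeadm init so re-runs are no-ops, then gate on
# cluster health. CIDR and paths are example values, not this repo's defaults.
- name: Control plane init and health gate (sketch)
  hosts: control_plane
  become: true
  tasks:
    - name: Initialize the control plane only if it has not been initialized
      ansible.builtin.command: kubeadm init --pod-network-cidr=192.168.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf

    # A CNI (e.g. Calico) must be applied before nodes can report Ready.
    - name: Wait for all nodes to report Ready
      ansible.builtin.command: >
        kubectl --kubeconfig /etc/kubernetes/admin.conf
        wait --for=condition=Ready nodes --all --timeout=300s
      changed_when: false

    - name: Wait for CoreDNS to finish rolling out
      ansible.builtin.command: >
        kubectl --kubeconfig /etc/kubernetes/admin.conf -n kube-system
        rollout status deployment/coredns --timeout=180s
      changed_when: false
```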
- No secrets are committed
- `all.secrets.yml` is expected to be provided by the operator
- You are responsible for securing:
  - SSH access
  - kubeconfig distribution
  - etcd and API server exposure
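An operator-provided `all.secrets.yml` typically carries connection and escalation credentials. The sketch below uses standard Ansible connection variables purely to illustrate the shape; it does not reflect the keys this repository expects. Keep the file out of version control or encrypt it, for example with `ansible-vault`:

```yaml
# group_vars/all.secrets.yml -- illustrative shape only; do not commit real values.
# Standard Ansible connection/escalation variables shown as examples:
ansible_ssh_private_key_file: "~/.ssh/<YOUR_KEY>"
ansible_become_password: "<BECOME_PASSWORD>"
```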
GPL‑3.0