Create a Kubernetes cluster based on Rancher's k3s project with distinct hosts for the control plane and worker nodes on a bare metal machine running Linux and libvirt + qemu/kvm.
- A bare metal machine with:
  - at least 16 GB of memory for 1 master and 2 worker nodes (32 GB recommended)
  - a CPU with at least 4 physical cores
  - any Linux distribution (this guide assumes Ubuntu 18.04)
- Install libvirt and qemu/kvm. E.g., on Ubuntu:

```shell
apt-get install -qy qemu-kvm libvirt-bin virtinst python3 vagrant
```
- Start the libvirt daemon. Using either virsh or virt-manager, create a virtual network:

```shell
virsh -c qemu:///system net-create libvirt/default_network.xml
```
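For reference, the repository's `libvirt/default_network.xml` is expected to resemble the stock libvirt NAT network definition. The fragment below is a sketch using libvirt's well-known defaults (bridge `virbr0`, subnet `192.168.122.0/24`); the actual file in this repo may differ:

```xml
<network>
  <name>default</name>
  <forward mode="nat"/>
  <bridge name="virbr0" stp="on" delay="0"/>
  <ip address="192.168.122.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.122.2" end="192.168.122.254"/>
    </dhcp>
  </ip>
</network>
```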
- Install the `vagrant-libvirt` plugin:

```shell
vagrant plugin install vagrant-libvirt
```
- Initialize and activate the virtualenv, then install the Python dependencies:

```shell
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
- Bring up the VMs and provision the cluster:

```shell
vagrant up
ansible-playbook -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory create-k3s-cluster.yml
```

- Install `kubectl` on the local machine and configure it:
```shell
mkdir -p ~/.kube && vagrant ssh master -c 'sudo cat .kube/config' > ~/.kube/config
```
- Run `kubectl` to verify the cluster:

```shell
# kubectl cluster-info
Kubernetes master is running at https://192.168.122.127:6443
CoreDNS is running at https://192.168.122.127:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://192.168.122.127:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
master     Ready    master   149m   v1.19.3+k3s2
worker-3   Ready    <none>   139m   v1.19.3+k3s2
worker-2   Ready    <none>   139m   v1.19.3+k3s2
worker-1   Ready    <none>   139m   v1.19.3+k3s2
```
The git repository monachus/channel.git by Adrian Goins contains kustomization files for deploying a demo application that tests cluster ingress and load balancing. Those files have been made available in this repo.
First, apply the demo project:
```shell
kubectl apply -k ./cluster-demo
```

Get the EXTERNAL-IP of the traefik ingress controller:

```shell
kubectl get services traefik -n kube-system
```

Then get the hostname of the demo ingress from the HOSTS field:

```shell
kubectl get ingresses.v1.networking.k8s.io rancher-demo
```
Add an entry to your /etc/hosts that resolves the hostname above to the EXTERNAL-IP address. Open that hostname in your local browser. You should see a webpage that pings one of the three replicas of the demo project service.
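For example, if the EXTERNAL-IP were `192.168.122.240` and the ingress hostname were `rancher-demo.example.com` (both hypothetical; substitute the values returned by the two `kubectl get` commands above), the /etc/hosts entry would be:

```
192.168.122.240  rancher-demo.example.com
```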
Remove the test with:

```shell
kubectl delete -k ./cluster-demo
```

The number of nodes provisioned is defined in the Vagrantfile:
```ruby
NUM_NODES = 3
```

Update the number to provision more or fewer worker nodes.
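As a sketch of how `NUM_NODES` typically drives the node definitions (hypothetical; the actual loop in this repo's Vagrantfile may differ), each worker VM is usually defined inside a loop that derives its name from the index:

```ruby
# Hypothetical sketch: derive worker node names from NUM_NODES,
# as a Vagrantfile loop over `config.vm.define` typically would.
NUM_NODES = 3
worker_names = (1..NUM_NODES).map { |i| "worker-#{i}" }
puts worker_names.join(" ")
# prints: worker-1 worker-2 worker-3
```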
The number of CPUs and the amount of memory allotted to each VM can be changed in the `config.vm.provider` section of the Vagrantfile:
```ruby
config.vm.provider :libvirt do |libvirt|
  libvirt.cpus = 2
  libvirt.memory = 4096
  ...
end
```

To tear down the cluster, run:

```shell
vagrant destroy
```

- Ansible - Simple, agentless IT automation that anyone can use
- k3s - Lightweight Kubernetes, the certified Kubernetes distribution built for IoT & Edge computing
- Vagrant - Development Environments Made Easy.
- This project is based on the work of
- The cluster demo project is taken from Adrian Goins' Youtube channel's Gitlab repository