projects:k8s:k8s_setup_with_k0s
Preparation process - All nodes
Reference: k0s Multi-node installation
- Required prep - Ensure each node has a unique system ID. k0s cluster deployment will fail if the machine IDs are not unique ( common after cloning VMs ). Regenerate the ID with:
sudo systemd-machine-id-setup
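Before joining nodes it is worth confirming the IDs actually differ. A minimal sketch; the `ids_unique` helper and the `k8s01`/`k8s02` hostnames in the comment are assumptions, not part of k0s:

```shell
#!/bin/sh
# Sketch: confirm machine IDs differ before joining nodes. Duplicate IDs are
# common after cloning VMs; the ssh hostnames below are assumptions.

ids_unique() {
  # Succeed only if no argument value appears twice.
  [ "$(printf '%s\n' "$@" | sort | uniq -d | wc -l)" -eq 0 ]
}

# For real nodes, collect the IDs over ssh and compare:
#   ids_unique "$(ssh root@k8s01 cat /etc/machine-id)" \
#              "$(ssh root@k8s02 cat /etc/machine-id)" \
#     || echo "duplicate machine-id: run sudo systemd-machine-id-setup"
```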
- System prep ( RPM-based distros )
- Download the binary:
- Install the latest version ( the install script elevates via sudo ):
curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo sh
- Or pin a specific version with K0S_VERSION:
curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo K0S_VERSION=v1.32.4+k0s.0 sh
Initialize cluster - Control nodes only
- Log in as root, as required by the commands below
sudo su -
- Create the directory
mkdir -p /etc/k0s
- Copy or generate the configuration file
- If you already have a configuration file, copy it to /etc/k0s/k0s.yaml
- If you don't already have a k0s.yaml configuration file, create a default one:
k0s config create > /etc/k0s/k0s.yaml
- The content will look like the following:
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
  namespace: kube-system
spec:
  api:
    address: 192.168.1.21
    k0sApiPort: 9443
    port: 6443
    sans:
    - 192.168.1.21
  controllerManager: {}
  extensions:
    helm:
      concurrencyLevel: 5
  installConfig:
    users:
      etcdUser: etcd
      kineUser: kube-apiserver
      konnectivityUser: konnectivity-server
      kubeAPIserverUser: kube-apiserver
      kubeSchedulerUser: kube-scheduler
  konnectivity:
    adminPort: 8133
    agentPort: 8132
  network:
    clusterDomain: cluster.local
    dualStack:
      enabled: false
    kubeProxy:
      iptables:
        minSyncPeriod: 0s
        syncPeriod: 0s
      ipvs:
        minSyncPeriod: 0s
        syncPeriod: 0s
        tcpFinTimeout: 0s
        tcpTimeout: 0s
        udpTimeout: 0s
      metricsBindAddress: 0.0.0.0:10249
      mode: iptables
      nftables:
        minSyncPeriod: 0s
        syncPeriod: 0s
    kuberouter:
      autoMTU: true
      hairpin: Enabled
      metricsPort: 8080
    nodeLocalLoadBalancing:
      enabled: false
      envoyProxy:
        apiServerBindPort: 7443
        konnectivityServerBindPort: 7132
      type: EnvoyProxy
    podCIDR: 10.244.0.0/16
    provider: kuberouter
    serviceCIDR: 10.96.0.0/12
  scheduler: {}
  storage:
    etcd:
      peerAddress: 192.168.1.21
    type: etcd
  telemetry:
    enabled: true
- Find the ports that need to be open
grep -i port /etc/k0s/k0s.yaml
- Open each port listed in the above file. The defaults are used below:
firewall-cmd --add-port=9443/tcp --permanent
firewall-cmd --add-port=6443/tcp --permanent
firewall-cmd --add-port=7443/tcp --permanent
firewall-cmd --add-port=8133/tcp --permanent
firewall-cmd --add-port=8132/tcp --permanent
firewall-cmd --add-port=7132/tcp --permanent
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --reload
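The port list can also be derived from the config file instead of typed by hand. A minimal sketch, assuming the default /etc/k0s/k0s.yaml layout shown above ( the `extract_ports` helper is an assumption; run as root ):

```shell
#!/bin/sh
# Sketch: open every port named in k0s.yaml instead of listing them manually.

extract_ports() {
  # Print the numeric value of every "*Port:"/"port:" key, deduplicated.
  grep -iE 'port: *[0-9]+' "$1" | awk '{print $NF}' | sort -un
}

# Guarded so the script is a no-op on machines without a k0s config.
if [ -f /etc/k0s/k0s.yaml ]; then
  for p in $(extract_ports /etc/k0s/k0s.yaml); do
    firewall-cmd --add-port="${p}/tcp" --permanent
  done
  firewall-cmd --reload
fi
```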
- Install the controller components
k0s install controller -c /etc/k0s/k0s.yaml
- Start the k0s service
k0s start
- Create a worker node token
k0s token create --role=worker --expiry=100h > token-file
- Transfer the new token file to each worker node
scp token-file user@<worker>:~/
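With several workers, the copy step above can be done in one pass. A sketch; the worker hostnames and the `user` login are assumptions to adjust for your cluster:

```shell
#!/bin/sh
# Sketch: push the join token to every worker in one pass.
# Hostnames and the remote user are assumptions; adjust for your cluster.

dest_for() {
  # Build the scp destination for one worker host.
  printf 'user@%s:~/' "$1"
}

for w in k8s02.home.mygarfield.us k8s03.home.mygarfield.us; do
  # Guarded so this is a no-op when no token-file exists yet.
  if [ -f token-file ]; then
    scp token-file "$(dest_for "$w")"
  fi
done
```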
Setup worker node(s) - Worker nodes only
- Log in as root, as required by the commands below
sudo su -
- Move the token file from the user's home directory to root's home directory
mv /home/<user>/token-file ~/
- Open the required ports. The same defaults as on the controller are used below:
firewall-cmd --add-port=9443/tcp --permanent
firewall-cmd --add-port=6443/tcp --permanent
firewall-cmd --add-port=7443/tcp --permanent
firewall-cmd --add-port=8133/tcp --permanent
firewall-cmd --add-port=8132/tcp --permanent
firewall-cmd --add-port=7132/tcp --permanent
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --reload
- Join the worker node(s) to the cluster
k0s install worker --token-file ./token-file
- Start the worker process
k0s start
Verify the cluster setup/deployment - Control node
- Watch the cluster components as they start
watch -n1 'k0s kubectl get all -A; echo; k0s kubectl get node'
Add additional controller nodes to the cluster
Note that you should always have an odd number of control nodes: 1, 3, or 5, depending on the cluster size. For a non-production environment, start with 1. For production environments, start with 3. Monitor the kube-apiserver performance; if it starts to show high CPU utilization, increase the number of CPUs on the control node(s).
- On the existing controller node
- Create the controller node token
k0s token create --role=controller --expiry=1h > token-file
- Transfer the new token file to each new controller node
scp token-file user@<new-controller>:~/
- Transfer the k0s.yaml config file to the new controller node(s)
scp k0s.yaml user@<new-controller>:~/
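The remaining step runs on each new controller: place the config, then join with the controller token. A sketch mirroring the worker steps above ( paths are assumptions based on the scp commands; run as root, guarded so it is a no-op where k0s is not installed ):

```shell
#!/bin/sh
# Sketch: join a NEW controller using the token and config copied above.
# Run as root on the new controller; adjust source paths for your user.

if command -v k0s >/dev/null 2>&1; then
  mkdir -p /etc/k0s
  mv ~/k0s.yaml /etc/k0s/k0s.yaml    # config transferred via scp above
  k0s install controller --token-file ~/token-file -c /etc/k0s/k0s.yaml
  k0s start
fi
```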
Check the status of the cluster - Control node output
- Check the status of the cluster
k0s status

Version: v1.32.4+k0s.0
Process ID: 109946
Role: controller
Workloads: false
SingleNode: false
Check the status of the cluster - Worker node output
- Check the status of the cluster
k0s status

Version: v1.32.4+k0s.0
Process ID: 91481
Role: worker
Workloads: true
SingleNode: false
Kube-api probing successful: true
Kube-api probing last error:
User setup process
Pod network setup process
Highly Available ( multi-node control plane ) cluster setup process
Add worker nodes to the cluster setup
Verify worker nodes have successfully joined the cluster
- Run the following command:
kubectl get nodes
It should look something like this:
[garfield@k8s01 ~]$ kubectl get nodes
NAME                       STATUS   ROLES           AGE   VERSION
k8s01.home.mygarfield.us   Ready    control-plane   17h   v1.31.1
k8s02.home.mygarfield.us   Ready    <none>          10m   v1.31.1
k8s03.home.mygarfield.us   Ready    <none>          6s    v1.31.1

