Reference: k0sctl installation
sudo systemd-machine-id-setup
sudo su -
mkdir -p /etc/k0s
k0sctl init > /etc/k0s/k0sctl.yaml   # k0sctl (not k0s) generates the k0sctl.yaml shown below
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
  user: admin
spec:
  hosts:
  - ssh:
      address: 10.0.0.1 # Update this entry to specify hostname or IP address
      user: root
      port: 22
      keyPath: null # Update this entry to specify the key path
    role: controller
  - ssh:
      address: 10.0.0.2 # Update this entry to specify hostname or IP address
      user: root
      port: 22
      keyPath: null # Update this entry to specify the key path
    role: worker
  options:
    wait:
      enabled: true
    drain:
      enabled: true
      gracePeriod: 2m0s
      timeout: 5m0s
      force: true
      ignoreDaemonSets: true
      deleteEmptyDirData: true
      podSelector: ""
      skipWaitForDeleteTimeout: 0s
    concurrency:
      limit: 30
      uploads: 5
    evictTaint:
      enabled: false
      taint: k0sctl.k0sproject.io/evict=true
      effect: NoExecute
      controllerWorkers: false
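For a production layout with three controllers (per the sizing note further down), the hosts: list grows accordingly. A sketch only — the addresses and key path below are illustrative placeholders, not values from this guide:

```yaml
  hosts:
  - ssh:
      address: 10.0.0.1       # illustrative controller address
      user: root
      port: 22
      keyPath: ~/.ssh/id_rsa  # illustrative key path
    role: controller
  - ssh:
      address: 10.0.0.2       # illustrative controller address
      user: root
      port: 22
      keyPath: ~/.ssh/id_rsa
    role: controller
  - ssh:
      address: 10.0.0.3       # illustrative controller address
      user: root
      port: 22
      keyPath: ~/.ssh/id_rsa
    role: controller
  - ssh:
      address: 10.0.0.4       # illustrative worker address
      user: root
      port: 22
      keyPath: ~/.ssh/id_rsa
    role: worker
```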
k0sctl apply --config /etc/k0s/k0sctl.yaml
k0s kubeconfig admin > ~/.kubeconfig
chmod 600 ~/.kubeconfig
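kubectl reads the file named by the KUBECONFIG environment variable, so the file exported above can be used directly. A minimal sketch, assuming kubectl is installed on the same host:

```shell
# Point kubectl at the kubeconfig exported above.
export KUBECONFIG="$HOME/.kubeconfig"
echo "using $KUBECONFIG"
# then, for example: kubectl get nodes
```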
grep -i port /etc/k0s/k0s.yaml
firewall-cmd --permanent --add-port={8133/tcp,2379/tcp,2380/tcp,10257/tcp,10259/tcp,9443/tcp,8132/tcp,6443/tcp,10249/tcp}
firewall-cmd --reload
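The grep above only lists the ports; turning them into firewall-cmd arguments can be scripted. A sketch — the temp file here is a stand-in for /etc/k0s/k0s.yaml so the snippet is self-contained:

```shell
# Build firewall-cmd --add-port arguments from "port:"-style entries.
# /tmp/k0s-sample.yaml is a stand-in for /etc/k0s/k0s.yaml.
cat > /tmp/k0s-sample.yaml <<'EOF'
spec:
  api:
    port: 6443
    k0sApiPort: 9443
EOF
args=""
for p in $(grep -oE '[Pp]ort: [0-9]+' /tmp/k0s-sample.yaml | awk '{print $2}'); do
  args="$args --add-port=$p/tcp"
done
echo "firewall-cmd --permanent$args"
# prints: firewall-cmd --permanent --add-port=6443/tcp --add-port=9443/tcp
```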
watch -n1 'k0s kubectl get all -A; echo; k0s kubectl get node'
Note that you should always have an odd number of control nodes: 1, 3, or 5, depending on the cluster size. For a non-production environment, start with 1; for production environments, start with 3. Monitor kube-apiserver performance, and if it starts to show high CPU utilization, increase the number of CPUs on the control node(s).
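The odd-number rule comes from etcd quorum: a cluster of n members needs floor(n/2)+1 healthy members to make progress, so an even member count adds no fault tolerance over the odd count below it. A quick sketch of the arithmetic:

```shell
# etcd quorum is floor(n/2)+1; tolerated failures = n - quorum.
for n in 1 2 3 4 5; do
  q=$(( n / 2 + 1 ))
  echo "controllers=$n quorum=$q tolerated_failures=$(( n - q ))"
done
# controllers=2 tolerates 0 failures (same as 1); controllers=4 tolerates 1 (same as 3).
```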
k0s token create --role=controller --expiry=1h > token-file
scp token-file user@<new-controller>:~/
scp k0s.yaml user@<new-controller>:~/
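The token and config copied above are then consumed on the new node. A sketch of the usual k0s join sequence, run as root on the new controller (paths match the scp destinations above):

```shell
# On <new-controller>: install and start the controller service
# using the copied join token and k0s.yaml.
sudo k0s install controller --token-file ~/token-file -c ~/k0s.yaml
sudo k0s start
sudo k0s status   # should report Role: controller
```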
k0s status
Version: v1.32.4+k0s.0
Process ID: 109946
Role: controller
Workloads: false
SingleNode: false

k0s status
Version: v1.32.4+k0s.0
Process ID: 91481
Role: worker
Workloads: true
SingleNode: false
Kube-api probing successful: true
Kube-api probing last error:
kubectl get nodes
It should look something like this:
[garfield@k8s01 ~]$ kubectl get nodes
NAME                       STATUS   ROLES           AGE   VERSION
k8s01.home.mygarfield.us   Ready    control-plane   17h   v1.31.1
k8s02.home.mygarfield.us   Ready    <none>          10m   v1.31.1
k8s03.home.mygarfield.us   Ready    <none>          6s    v1.31.1