Table of Contents


What is Kubernetes ( k8s )?

Preparation process - All nodes

Reference: k0s Multi-node installation

  1. Required prep - Ensure each node has a unique system ID. K0s cluster deployment will fail if the machine IDs are not unique (common when nodes are cloned VMs).
    sudo systemd-machine-id-setup
  2. System prep ( RPM-based distros )
    1. Download the binary:
        1. To install the latest release:
          curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo sh
        2. To pin a specific version, set K0S_VERSION:
          curl --proto '=https' --tlsv1.2 -sSf https://get.k0s.sh | sudo K0S_VERSION=v1.32.4+k0s.0 sh
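Before installing, the unique-ID requirement from step 1 can be sanity-checked across all nodes. A minimal sketch (the hostnames in the comment are placeholders, and `check_unique_ids` is a helper name invented here):

```shell
#!/bin/sh
# Sketch: fail fast on duplicate machine IDs before deploying the cluster.
# Collect each node's ID first, e.g.:
#   ids=$(for n in node1 node2 node3; do ssh "$n" cat /etc/machine-id; done)
check_unique_ids() {
  # prints any repeated ID and returns non-zero if a duplicate exists
  dupes=$(printf '%s\n' "$1" | sort | uniq -d)
  [ -z "$dupes" ] || { echo "duplicate machine-id: $dupes" >&2; return 1; }
}
# check_unique_ids "$ids" || echo "run systemd-machine-id-setup on the clones"
```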

Initialize cluster - Control nodes only

  1. Log in as root, as required by the commands that follow
    sudo su -
  2. Create the directory
    mkdir -p /etc/k0s
  3. Copy or generate the configuration file
    1. If you already have a configuration file, copy it to /etc/k0s/k0s.yaml
    2. If you don't already have a k0s.yaml ( configuration ) file, generate a default one:
      k0s config create > /etc/k0s/k0s.yaml
      1. The content will look like the following:
        apiVersion: k0s.k0sproject.io/v1beta1
        kind: ClusterConfig
        metadata:
          name: k0s
          namespace: kube-system
        spec:
          api:
            address: 192.168.1.21
            k0sApiPort: 9443
            port: 6443
            sans:
            - 192.168.1.21
          controllerManager: {}
          extensions:
            helm:
              concurrencyLevel: 5
          installConfig:
            users:
              etcdUser: etcd
              kineUser: kube-apiserver
              konnectivityUser: konnectivity-server
              kubeAPIserverUser: kube-apiserver
              kubeSchedulerUser: kube-scheduler
          konnectivity:
            adminPort: 8133
            agentPort: 8132
          network:
            clusterDomain: cluster.local
            dualStack:
              enabled: false
            kubeProxy:
              iptables:
                minSyncPeriod: 0s
                syncPeriod: 0s
              ipvs:
                minSyncPeriod: 0s
                syncPeriod: 0s
                tcpFinTimeout: 0s
                tcpTimeout: 0s
                udpTimeout: 0s
              metricsBindAddress: 0.0.0.0:10249
              mode: iptables
              nftables:
                minSyncPeriod: 0s
                syncPeriod: 0s
            kuberouter:
              autoMTU: true
              hairpin: Enabled
              metricsPort: 8080
            nodeLocalLoadBalancing:
              enabled: false
              envoyProxy:
                apiServerBindPort: 7443
                konnectivityServerBindPort: 7132
              type: EnvoyProxy
            podCIDR: 10.244.0.0/16
            provider: kuberouter
            serviceCIDR: 10.96.0.0/12
          scheduler: {}
          storage:
            etcd:
              peerAddress: 192.168.1.21
            type: etcd
          telemetry:
            enabled: true
  4. Find the ports that need to be open
    grep -i port /etc/k0s/k0s.yaml
  5. Open each port listed in the file above. The defaults are used below:
    firewall-cmd --add-port=9443/tcp --permanent
    firewall-cmd --add-port=6443/tcp --permanent
    firewall-cmd --add-port=7443/tcp --permanent
    firewall-cmd --add-port=8133/tcp --permanent
    firewall-cmd --add-port=8132/tcp --permanent
    firewall-cmd --add-port=7132/tcp --permanent
    firewall-cmd --add-port=8080/tcp --permanent
    firewall-cmd --reload
  6. Install the controller components
    k0s install controller -c /etc/k0s/k0s.yaml
  7. Start the k0s service
    k0s start
  8. Create a worker node token
    k0s token create --role=worker --expiry=100h > token-file
  9. Transfer the new token file to each worker node
    scp token-file user@<worker>:~/
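Rather than hardcoding the firewall rules in steps 4-5, the port numbers can be pulled out of k0s.yaml directly. A minimal sketch, assuming the default config layout shown above (the `extract_ports` helper name is made up here):

```shell
#!/bin/sh
# Sketch: extract every "...port: NNNN" value from a k0s config file,
# deduplicated and sorted, so the firewall rules track the config.
extract_ports() {
  grep -iE 'port:' "$1" | awk '{print $2}' | grep -E '^[0-9]+$' | sort -un
}
# Usage (as root):
#   for p in $(extract_ports /etc/k0s/k0s.yaml); do
#     firewall-cmd --add-port="$p"/tcp --permanent
#   done
#   firewall-cmd --reload
```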

Setup worker node(s) - Worker nodes only

  1. Log in as root, as required by the commands that follow
    sudo su -
  2. Move the token file from the user's home directory to root's home directory
    mv /home/<user>/token-file ~/
  3. Open each port required by k0s. The defaults from the controller's k0s.yaml are used below:
    firewall-cmd --add-port=9443/tcp --permanent
    firewall-cmd --add-port=6443/tcp --permanent
    firewall-cmd --add-port=7443/tcp --permanent
    firewall-cmd --add-port=8133/tcp --permanent
    firewall-cmd --add-port=8132/tcp --permanent
    firewall-cmd --add-port=7132/tcp --permanent
    firewall-cmd --add-port=8080/tcp --permanent
    firewall-cmd --reload
  4. Join the worker node(s) to the cluster
    k0s install worker --token-file ./token-file
  5. Start the worker process
    k0s start
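Steps 1-5 above can be combined into a single script. A sketch, run as root on each worker (the port list repeats this guide's defaults, and `join_worker` is a wrapper name made up here):

```shell
#!/bin/sh
# Sketch: firewall + install + start for a worker, with a token-file guard.
join_worker() {
  token_file=$1
  # refuse to continue without a non-empty token file
  [ -s "$token_file" ] || { echo "missing or empty token file: $token_file" >&2; return 1; }
  for p in 9443 6443 7443 8133 8132 7132 8080; do
    firewall-cmd --add-port="$p"/tcp --permanent
  done
  firewall-cmd --reload
  k0s install worker --token-file "$token_file"
  k0s start
}
# join_worker /root/token-file
```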

Verify the cluster setup/deployment - Control node

  1. Watch the cluster components as they start
    watch -n1 'k0s kubectl get all -A; echo; k0s kubectl get node'
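Beyond watching interactively, the wait can be scripted. A sketch: `all_ready` (a helper invented here) reads `kubectl get nodes --no-headers` output on stdin and succeeds only once at least one node exists and every node reports Ready:

```shell
#!/bin/sh
# Sketch: poll until every node in the cluster reports Ready.
all_ready() {
  # column 2 of "get nodes --no-headers" is the STATUS field
  awk 'NF { n++; if ($2 != "Ready") bad++ } END { exit !(n && !bad) }'
}
# Usage (on a controller):
#   until k0s kubectl get nodes --no-headers | all_ready; do sleep 5; done
```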

Add additional controller nodes to the cluster

Note that you should always have an odd number of control nodes. Therefore, you should have 1, 3, or 5 control nodes, depending on the cluster size. For a non-production environment, start with 1. For production environments, start with 3. Monitor the kube-apiserver performance. If it starts to show high CPU utilization, increase the number of CPUs on the control node(s).

  1. On the existing controller node
    1. Create the controller node token
      k0s token create --role=controller --expiry=1h > token-file
    2. Transfer the new token file to each new controller node
      scp token-file user@<new-controller>:~/
    3. Transfer the k0s.yaml config file to the new controller node(s)
      scp k0s.yaml user@<new-controller>:~/
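The steps above stop at transferring the files; on each new controller node the join itself (as root) looks like the sketch below. `k0s install controller --token-file` is the documented join command, while `join_controller` is just a wrapper name used here and the paths are examples:

```shell
#!/bin/sh
# Sketch: join a NEW controller using the token and config copied over above.
join_controller() {
  token_file=$1 config=$2
  [ -s "$token_file" ] || { echo "missing token file: $token_file" >&2; return 1; }
  [ -s "$config" ] || { echo "missing config: $config" >&2; return 1; }
  mkdir -p /etc/k0s
  cp "$config" /etc/k0s/k0s.yaml
  k0s install controller --token-file "$token_file" -c /etc/k0s/k0s.yaml
  k0s start
}
# join_controller ~/token-file ~/k0s.yaml
```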

Check the status of the cluster - Control node output

  1. Check the status of the cluster
    k0s status
    
    Version: v1.32.4+k0s.0
    Process ID: 109946
    Role: controller
    Workloads: false
    SingleNode: false

Check the status of the cluster - Worker node output

  1. Check the status of the cluster
    k0s status
    
    Version: v1.32.4+k0s.0
    Process ID: 91481
    Role: worker
    Workloads: true
    SingleNode: false
    Kube-api probing successful: true
    Kube-api probing last error:  
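For scripted health checks against either output above, individual fields can be parsed out of `k0s status`. A minimal sketch (`status_role` is a helper name made up here; it reads the status text on stdin):

```shell
#!/bin/sh
# Sketch: print the value of the "Role:" line from `k0s status` output.
status_role() {
  awk '/^Role:/ {print $2; found=1} END {exit !found}'
}
# Usage: k0s status | status_role   # prints "controller" or "worker"
```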

User setup process

Pod network setup process

Highly Available ( multi-node control plane ) cluster setup process

Add worker nodes to the cluster setup

Verify worker nodes have successfully joined the cluster

  1. Run the following command:
    kubectl get nodes

It should look something like this:

[garfield@k8s01 ~]$ kubectl get nodes
NAME                       STATUS   ROLES           AGE   VERSION
k8s01.home.mygarfield.us   Ready    control-plane   17h   v1.31.1
k8s02.home.mygarfield.us   Ready    <none>          10m   v1.31.1
k8s03.home.mygarfield.us   Ready    <none>          6s    v1.31.1
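The sample output above runs plain `kubectl` as a regular user, which requires a kubeconfig first. `k0s kubeconfig admin` is the documented k0s subcommand for exporting one from a controller; the `install_kubeconfig` wrapper and the paths below are illustrative:

```shell
#!/bin/sh
# Sketch: export the admin kubeconfig so plain `kubectl` works for a user.
# Run on a controller; k0s itself must be invoked as root.
install_kubeconfig() {
  dest=$1
  mkdir -p "$(dirname "$dest")"
  k0s kubeconfig admin > "$dest" || return 1
  chmod 600 "$dest"   # kubeconfig contains cluster credentials
}
# install_kubeconfig ~/.kube/config && kubectl get nodes
```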