
What is Kubernetes ( k8s )?


Preparation process - All nodes

Reference: k0sctl installation

  1. Required prep - Ensure each node has a unique system ID. K0s cluster deployment will fail if machine IDs are not unique.
    sudo systemd-machine-id-setup
  2. System prep ( RPM-based distros )
    1. Download the binary:
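A minimal sketch of the binary download, assuming a pinned release tag and the linux-x64 asset name — verify both against the k0sproject/k0sctl releases page before running:

```shell
# Assumed release tag -- check https://github.com/k0sproject/k0sctl/releases
K0SCTL_VERSION="v0.19.0"
K0SCTL_URL="https://github.com/k0sproject/k0sctl/releases/download/${K0SCTL_VERSION}/k0sctl-linux-x64"

# Download into the PATH and make it executable
sudo curl -sSLfo /usr/local/bin/k0sctl "$K0SCTL_URL"
sudo chmod +x /usr/local/bin/k0sctl
k0sctl version
```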

Initialize cluster - Control nodes only

  1. Log in as root, as required by the commands we're using
    sudo su -
  2. Create the directory
    mkdir -p /etc/k0s
  3. Copy or generate the configuration file
    1. If you already have a configuration file, copy it to /etc/k0s/
    2. If you don't already have a k0sctl.yaml ( configuration ) file, create a default configuration file
      k0sctl init > /etc/k0s/k0sctl.yaml
      1. The content will look like the following:
        apiVersion: k0sctl.k0sproject.io/v1beta1
        kind: Cluster
        metadata:
          name: k0s-cluster
          user: admin
        spec:
          hosts:
          - ssh:
              address: 10.0.0.1  # Update this entry to specify hostname or IP address
              user: root
              port: 22
              keyPath: null      # Update this entry to specify the key path
            role: controller
          - ssh:
              address: 10.0.0.2  # Update this entry to specify hostname or IP address
              user: root
              port: 22
              keyPath: null      # Update this entry to specify the key path
            role: worker
          options:
            wait:
              enabled: true
            drain:
              enabled: true
              gracePeriod: 2m0s
              timeout: 5m0s
              force: true
              ignoreDaemonSets: true
              deleteEmptyDirData: true
              podSelector: ""
              skipWaitForDeleteTimeout: 0s
            concurrency:
              limit: 30
              uploads: 5
            evictTaint:
              enabled: false
              taint: k0sctl.k0sproject.io/evict=true
              effect: NoExecute
              controllerWorkers: false
  4. Create the cluster
    k0sctl apply --config /etc/k0s/k0sctl.yaml
  5. If you want to use kubectl, Lens, or other tools outside the cluster nodes, then you will need a “kubeconfig” file. You can generate the file with the following command
    k0s kubeconfig admin > ~/.kubeconfig
  6. Don't forget to set the proper permissions. This file contains credentials ( certs ) that allow access without entering any information
    chmod 600 ~/.kubeconfig
  7. Find the ports that need to be open
    grep -i port /etc/k0s/k0s.yaml
  8. Open the required firewall ports:
    firewall-cmd --permanent --add-port={8133/tcp,2379/tcp,2380/tcp,10257/tcp,10259/tcp,9443/tcp,8132/tcp,6443/tcp,10249/tcp}
    firewall-cmd --reload
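As an alternative to maintaining the port list by hand, the ports can be scraped from the generated config. A sketch, assuming port numbers appear as `port: <n>` (or camelCase `...Port: <n>`) lines in /etc/k0s/k0s.yaml and that all of them are TCP:

```shell
# Extract each "port: <n>" value from the k0s config and open it in firewalld
for port in $(grep -oE '[pP]ort: [0-9]+' /etc/k0s/k0s.yaml | awk '{print $2}' | sort -u); do
  firewall-cmd --permanent --add-port="${port}/tcp"
done
firewall-cmd --reload
```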

Verify the cluster setup/deployment - Control node

  1. Watch the cluster components as they start ( to quit, press CTRL+C )
    watch -n1 'k0s kubectl get all -A; echo; k0s kubectl get node'

Add additional controller nodes to the cluster

Note that you should always have an odd number of control nodes: 1, 3, or 5, depending on the cluster size. For a non-production environment, start with 1. For production environments, start with 3. Monitor kube-apiserver performance; if it starts to show high CPU utilization, increase the number of CPUs on the control node(s).
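The odd-number rule follows from etcd quorum: a control plane of n members stays available only while floor(n/2)+1 of them are reachable, so it tolerates floor((n-1)/2) failures — which is why 2 controllers are no safer than 1:

```shell
# Failure tolerance per control-plane size: floor((n - 1) / 2)
for n in 1 2 3 4 5; do
  echo "$n controller(s) tolerate $(( (n - 1) / 2 )) failure(s)"
done
```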

  1. On the existing controller node
    1. Create the controller node token
      k0s token create --role=controller --expiry=1h > token-file
    2. Transfer the new token file to each new controller node
      scp token-file user@<new-controller>:~/
    3. Transfer the k0s.yaml config file to the new controller node(s)
      scp k0s.yaml user@<new-controller>:~/

Check the status of the cluster - Control node output

  1. Check the status of the cluster
    k0s status
    
    Version: v1.32.4+k0s.0
    Process ID: 109946
    Role: controller
    Workloads: false
    SingleNode: false

Check the status of the cluster - Worker node output

  1. Check the status of the cluster
    k0s status
    
    Version: v1.32.4+k0s.0
    Process ID: 91481
    Role: worker
    Workloads: true
    SingleNode: false
    Kube-api probing successful: true
    Kube-api probing last error:  

User setup process

Pod network setup process

Highly Available ( multi-node control plane ) cluster setup process

Add worker nodes to the cluster setup

Verify worker nodes have successfully joined the cluster

  1. Run the following command:
    kubectl get nodes

It should look something like this

[garfield@k8s01 ~]$ kubectl get nodes
NAME                       STATUS   ROLES           AGE   VERSION
k8s01.home.mygarfield.us   Ready    control-plane   17h   v1.31.1
k8s02.home.mygarfield.us   Ready    <none>          10m   v1.31.1
k8s03.home.mygarfield.us   Ready    <none>          6s    v1.31.1
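For scripted verification rather than eyeballing the table, the NotReady nodes can be counted from the same output — a sketch parsing the STATUS column:

```shell
# Count nodes whose STATUS column is not "Ready"; succeed only at zero
not_ready=$(kubectl get nodes --no-headers | awk '$2 != "Ready" {c++} END {print c+0}')
if [ "$not_ready" -eq 0 ]; then
  echo "all nodes Ready"
else
  echo "$not_ready node(s) not Ready" >&2
  exit 1
fi
```

Alternatively, `kubectl wait --for=condition=Ready node --all --timeout=300s` blocks until every node reports Ready.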