Reference: Bootstrapping clusters with kubeadm
sudo dnf install nc jq socat iproute-tc -y
sudo firewall-cmd --add-port=6443/tcp --permanent
sudo firewall-cmd --add-port=10250/tcp --permanent
sudo firewall-cmd --reload
sudo firewall-cmd --list-all
sudo swapoff -a
sudo vim /etc/fstab  # comment out the swap entry, e.g.: /dev/mapper/<hostname>-swap none swap defaults 0 0
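If you prefer not to edit the file by hand, the swap entry can be commented out with sed; this is a sketch, and the pattern simply prefixes `#` to any fstab line whose type field is `swap` (back up the file first):

```shell
# Keep a backup, then comment out every swap entry so swap stays off after reboot
sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab
```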
cat << EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo modprobe br_netfilter
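modprobe only loads the module for the current boot. To make it load automatically after a reboot, a common approach (my addition, not part of the original steps) is a modules-load.d drop-in:

```shell
# Load br_netfilter automatically on every boot
cat << EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
```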
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
[cri-o]
name=CRI-O
baseurl=https://pkgs.k8s.io/addons:/cri-o:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/addons:/cri-o:/stable:/v1.31/rpm/repodata/repomd.xml.key
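The two repo definitions above belong in /etc/yum.repos.d/. As one way to write them, here is the Kubernetes one via a heredoc (the file name kubernetes.repo is my choice; the CRI-O repo can be written the same way):

```shell
# Write the Kubernetes package repo definition for dnf
cat << EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.31/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
```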
sudo dnf install -y cri-o container-selinux kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now crio.service
sudo systemctl enable --now kubelet.service
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=<dns.name>
!!!! DO NOT COPY THIS COMMAND FROM HERE !!!!
============================================
kubeadm join 192.168.1.17:6443 --token 8gf1ah.7boas234f8a663gas \
    --discovery-token-ca-cert-hash sha256:44f76a2d10922b7ac980faebcd42ae75f061b6cf4c5ccacef8937d0f064c
You should now have a single-node Kubernetes cluster consisting of one control plane node. Copy the join command that your own kubeadm init printed (not the example above); you will need it to add more nodes to the cluster.
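If you lose the join command, or the bootstrap token expires (by default tokens last 24 hours), you can generate a fresh one on the control plane node:

```shell
# Prints a complete "kubeadm join ..." command with a new token
sudo kubeadm token create --print-join-command
```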
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeadm init generates another kubeconfig file super-admin.conf that contains a certificate with Subject: O = system:masters, CN = kubernetes-super-admin. system:masters is a break-glass, super user group that bypasses the authorization layer (for example RBAC). Do not share the super-admin.conf file with anyone. It is recommended to move the file to a safe location.
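One way to stash the file out of the way (the destination path here is just my suggestion):

```shell
# Move the break-glass kubeconfig somewhere only root can read
sudo mkdir -p /root/k8s-secrets
sudo mv /etc/kubernetes/super-admin.conf /root/k8s-secrets/
sudo chmod 600 /root/k8s-secrets/super-admin.conf
```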
See Generating kubeconfig files for additional users on how to use kubeadm kubeconfig user to generate kubeconfig files for additional users.
curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.2/manifests/canal.yaml -O
kubectl apply -f canal.yaml
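After applying the manifest, you can watch the canal pods come up in kube-system; the node should flip from NotReady to Ready once pod networking is running:

```shell
# Canal runs in kube-system; wait until its pods are all Running
kubectl get pods -n kube-system -o wide
kubectl get nodes
```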
If you want a highly available cluster, meaning multiple control plane nodes, you must create additional servers and join them to the cluster as control plane nodes. The kubeadm join command has a special parameter to designate the new node as such.
If you want only 1 control plane node, but still want multiple worker nodes, skip to the next section.
Because you passed --control-plane-endpoint=<dns.name> to your kubeadm init command, run the following command on each new control plane node:

!!!! DO NOT COPY THIS COMMAND FROM HERE !!!!
============================================
kubeadm join k8s01.home.mygarfield.us:6443 --token 1vk4n8.bgys7j3f2348cad42y4v \
    --discovery-token-ca-cert-hash sha256:ac167419d422b46ad7182349fda14f72e7b7745fc009f2dd2db97b7c6 \
    --control-plane
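A control plane join also needs the cluster's certificates. If you did not pass --upload-certs to kubeadm init, one way to handle this (rather than copying certificates by hand) is to re-upload them and use the printed decryption key:

```shell
# Re-upload the control plane certificates and print the certificate key
sudo kubeadm init phase upload-certs --upload-certs
# Then append to the join command on the new node:
#   --control-plane --certificate-key <key printed above>
```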
kubectl get nodes
In most cases, you want to have multiple worker nodes. This is where your containers/applications will run.
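On each worker, run the join command that your kubeadm init printed (or regenerate one with kubeadm token create --print-join-command on the control plane). The shape is the same as before, just without --control-plane; the values below are placeholders for your cluster's own token and hash:

```shell
# Example shape only: substitute YOUR cluster's endpoint, token, and CA hash
sudo kubeadm join <dns.name>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```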
kubectl get nodes
It should look something like this
[garfield@k8s01 ~]$ kubectl get nodes
NAME                       STATUS   ROLES           AGE   VERSION
k8s01.home.mygarfield.us   Ready    control-plane   17h   v1.31.1
k8s02.home.mygarfield.us   Ready    <none>          10m   v1.31.1
k8s03.home.mygarfield.us   Ready    <none>          6s    v1.31.1
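The <none> role on the workers is cosmetic, but if you want kubectl get nodes to show a worker role, you can label the nodes yourself; node-role.kubernetes.io/worker is a common convention, not something kubeadm sets for you:

```shell
# Give the worker nodes a visible role label
kubectl label node k8s02.home.mygarfield.us node-role.kubernetes.io/worker=
kubectl label node k8s03.home.mygarfield.us node-role.kubernetes.io/worker=
```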
Helm is a package manager for Kubernetes. A helm chart is a collection of text files that describe how to install an application.
Download the Linux amd64 tarball for the latest release linked from https://github.com/helm/helm/releases (the URL above is the releases page itself, not a tarball), for example:
wget https://get.helm.sh/helm-v3.16.1-linux-amd64.tar.gz
tar zxf helm-*.tar.gz
ls linux-amd64/
helm  LICENSE  README.md
sudo cp linux-amd64/helm /usr/local/bin/helm
helm version
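As a quick smoke test of the new binary, add a chart repository and search it; bitnami is just a popular public example, not a requirement:

```shell
# Add a public chart repo, refresh the index, and look up a chart
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/nginx
```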