Overview
This guide will teach you how to deploy a minimum viable Kubernetes cluster on CentOS 7 using the kubeadm tool. Kubeadm is a command-line tool created to help users bootstrap a Kubernetes cluster that conforms to best practices. It also supports cluster lifecycle functions such as bootstrap tokens and cluster upgrades.
For Debian installation: Deploy Kubernetes Cluster on Debian 10 with Kubespray
For Rocky Linux 8: Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O
The next sections discuss in detail the process of deploying a minimal Kubernetes cluster on CentOS 7 servers. This installation is for a single control-plane cluster; we have other guides on deploying a highly available Kubernetes cluster with RKE and Kubespray.
Step 1: Prepare Kubernetes Servers
The minimal server requirements for the servers used in the cluster are:
- 2 GiB or more of RAM per machine; any less leaves little room for your apps.
- At least 2 CPUs on the machine that you use as a control-plane node.
- Full network connectivity among all machines in the cluster (a private or public network is fine).
Since this setup is meant for development purposes, I am using servers with the details below:
Server Type | Server Hostname | Specs
Master | k8s-master01.computingforgeeks.com | 4GB RAM, 2vcpus
Worker | k8s-worker01.computingforgeeks.com | 4GB RAM, 2vcpus
Worker | k8s-worker02.computingforgeeks.com | 4GB RAM, 2vcpus
Log in to all servers and update the OS.
sudo yum -y update && sudo systemctl reboot
Step 2: Install kubelet, kubeadm and kubectl
Once the servers are rebooted, add the Kubernetes repository for CentOS 7 to all the servers.
sudo tee /etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
Then install required packages.
sudo yum clean all && sudo yum -y makecache
sudo yum -y install epel-release vim git curl wget kubelet kubeadm kubectl --disableexcludes=kubernetes
Confirm installation by checking the version of kubeadm and kubectl.
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.1", GitCommit:"8f94681cd294aa8cfd3407b8191f6c70214973a4", GitTreeState:"clean", BuildDate:"2023-01-18T15:56:50Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.1", GitCommit:"8f94681cd294aa8cfd3407b8191f6c70214973a4", GitTreeState:"clean", BuildDate:"2023-01-18T15:58:16Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.7
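Optionally, you can lock the installed Kubernetes package versions so that an unplanned yum update does not upgrade them. A minimal sketch using the yum versionlock plugin (an optional extra, not part of the original steps):
# Install the versionlock plugin and pin the packages
sudo yum -y install yum-plugin-versionlock
sudo yum versionlock kubelet kubeadm kubectl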
Step 3: Disable SELinux and Swap
If you have SELinux in enforcing mode, turn it off or use Permissive mode.
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
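You can confirm the change with getenforce, which should now report Permissive:
$ getenforce
Permissive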
Turn off swap.
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
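To verify that swap is off, check that swapon reports no active devices and that free shows zero swap:
$ sudo swapon --show
$ free -h | grep -i swap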
Configure sysctl.
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
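You can verify that the kernel parameters were applied; each of these should print 1:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward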
Step 4: Install Container runtime
To run containers in Pods, Kubernetes uses a container runtime. The supported container runtimes covered in this guide are CRI-O, Docker, and containerd.
NOTE: You have to choose only one of the runtimes.
Using CRI-O Container Runtime
For CRI-O below are the installation steps.
# Ensure you load modules
sudo modprobe overlay
sudo modprobe br_netfilter
# Set up required sysctl params
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload sysctl
sudo sysctl --system
# Add CRI-O repo
OS=CentOS_7
VERSION=1.26
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
# Install CRI-O
sudo yum remove docker-ce docker-ce-cli containerd.io
sudo yum install cri-o
# Update CRI-O Subnet
sudo sed -i 's/10.85.0.0/192.168.0.0/g' /etc/cni/net.d/100-crio-bridge.conf
sudo sed -i 's/10.85.0.0/192.168.0.0/g' /etc/cni/net.d/100-crio-bridge.conflist
# Start and enable Service
sudo systemctl daemon-reload
sudo systemctl start crio
sudo systemctl enable crio
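Before moving on, confirm that CRI-O is up. A quick check, assuming crictl is available (it is pulled in as the cri-tools dependency of kubeadm):
# The crio service should be active (running)
systemctl status crio
# Query the runtime over its socket
sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version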
Using Docker Container runtime
When using the Docker container engine, run the commands below to install it. Note that Kubernetes v1.24 and newer removed the dockershim, so with recent Kubernetes releases Docker Engine also requires the cri-dockerd adapter to be usable as a runtime.
# Install packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install docker-ce docker-ce-cli containerd.io
# Create required directories
sudo mkdir /etc/docker
sudo mkdir -p /etc/systemd/system/docker.service.d
# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
# Start and enable Services
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
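A quick sanity check that Docker is running with the systemd cgroup driver configured above:
# Should report: Cgroup Driver: systemd
sudo docker info | grep -i 'cgroup driver'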
Using Containerd runtime
Below are the installation steps for Containerd.
# Configure persistent loading of modules
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# Load at runtime
sudo modprobe overlay
sudo modprobe br_netfilter
# Ensure sysctl params are set
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Reload configs
sudo sysctl --system
# Install required packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Add Docker repo
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install containerd
sudo yum update -y && sudo yum install -y containerd.io
# Configure containerd and start service
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
# restart containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
To use the systemd cgroup driver with recent containerd releases, set SystemdCgroup = true under the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] section of /etc/containerd/config.toml (older containerd releases used plugins.cri.systemd_cgroup = true). When using kubeadm, make sure the kubelet uses a matching cgroup driver; recent kubeadm releases default the kubelet to the systemd driver.
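As a minimal sketch, assuming the default config generated above (containerd 1.6+ layout), the setting can be switched with sed and the service restarted:
# Flip the runc runtime to the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd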
Step 5: Configure Firewall
I recommend you disable firewalld on your nodes:
sudo systemctl disable --now firewalld
If you keep the firewalld service active, there are a number of ports that need to be opened.
Master Server ports:
sudo firewall-cmd --add-port={6443,2379-2380,10250,10251,10252,5473,179}/tcp --permanent
sudo firewall-cmd --add-port={4789,8285,8472}/udp --permanent
sudo firewall-cmd --reload
Worker Node ports:
sudo firewall-cmd --add-port={10250,30000-32767,5473,179}/tcp --permanent
sudo firewall-cmd --add-port={4789,8285,8472}/udp --permanent
sudo firewall-cmd --reload
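After reloading, you can confirm the openings on each node:
sudo firewall-cmd --list-ports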
Step 6: Initialize your control-plane node
Log in to the server to be used as the control-plane node and make sure that the br_netfilter module is loaded:
$ lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 2 br_netfilter,ebtable_broute
Enable kubelet service.
sudo systemctl enable kubelet
We now want to initialize the machine that will run the control plane components, which include etcd (the cluster database) and the API Server.
Pull container images:
$ sudo kubeadm config images pull
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.26.1
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.26.1
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.26.1
[config/images] Pulled registry.k8s.io/kube-proxy:v1.26.1
[config/images] Pulled registry.k8s.io/pause:3.9
[config/images] Pulled registry.k8s.io/etcd:3.5.6-0
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3
These are the basic kubeadm init options used to bootstrap the cluster:
--control-plane-endpoint : sets the shared endpoint for all control-plane nodes; can be a DNS name or an IP address
--pod-network-cidr : sets the Pod network add-on CIDR
--cri-socket : sets the container runtime socket path; use it if you have more than one container runtime installed
--apiserver-advertise-address : sets the advertise address for this particular control-plane node's API server
Set the cluster endpoint DNS name or add a record to the /etc/hosts file. In this example, 172.29.20.5 is the control plane IP address.
$ sudo vim /etc/hosts
172.29.20.5 k8sapi.computingforgeeks.com
Create cluster:
sudo kubeadm init \
--pod-network-cidr=192.168.0.0/16 \
--upload-certs \
--control-plane-endpoint=k8sapi.computingforgeeks.com
Note: If 192.168.0.0/16 is already in use within your network you must select a different pod network CIDR, replacing 192.168.0.0/16 in the above command.
Container runtime sockets:
Runtime | Path to Unix domain socket
Docker | /var/run/docker.sock
containerd | /run/containerd/containerd.sock
CRI-O | /var/run/crio/crio.sock
You can optionally pass the runtime socket file and an advertise address, depending on your setup:
sudo kubeadm init \
--pod-network-cidr=192.168.0.0/16 \
--cri-socket /var/run/crio/crio.sock \
--upload-certs \
--control-plane-endpoint=k8sapi.computingforgeeks.com
Configure kubectl using commands in the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
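Alternatively, if you are running commands as the root user, you can point kubectl at the admin kubeconfig directly, as the kubeadm init output also suggests:
export KUBECONFIG=/etc/kubernetes/admin.conf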
Check cluster status:
$ kubectl cluster-info
Kubernetes control plane is running at https://k8sapi.computingforgeeks.com:6443
KubeDNS is running at https://k8sapi.computingforgeeks.com:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Additional control-plane nodes can be added using the control-plane join command printed in the installation output (since we initialized with --upload-certs, it also includes a --certificate-key flag):
kubeadm join k8sapi.computingforgeeks.com:6443 \
--token zoy8cq.6v349sx9ass8dzyj \
--discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24 \
--control-plane \
--certificate-key <certificate-key-from-kubeadm-init-output>
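The certificates uploaded with --upload-certs (and the certificate key) expire after two hours. If they have expired before you add another control-plane node, you can re-upload them and print a fresh key with a standard kubeadm sub-command:
sudo kubeadm init phase upload-certs --upload-certs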
Step 7: Install network plugin
In this guide we’ll use Calico. You can choose any other supported network plugin.
Get latest release:
VER=$(curl -s https://api.github.com/repos/projectcalico/calico/releases/latest|grep tag_name|cut -d '"' -f 4)
Download and install the latest stable release of Tigera Calico operator:
wget https://raw.githubusercontent.com/projectcalico/calico/${VER}/manifests/tigera-operator.yaml
kubectl create -f tigera-operator.yaml
Next, create the custom resources that tell the operator to install Calico.
wget https://raw.githubusercontent.com/projectcalico/calico/${VER}/manifests/custom-resources.yaml
kubectl create -f custom-resources.yaml
Operator installation output:
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
Custom resource installation output:
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
Confirm that all of the pods are running:
$ kubectl get pods -n calico-system -w
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6b7b9c649d-b5vt5 1/1 Running 0 3m51s
calico-node-4n299 1/1 Running 0 3m51s
calico-typha-69789694cb-2zw4b 1/1 Running 0 3m52s
csi-node-driver-8z6cv 2/2 Running 0 3m51s
Confirm the control-plane node is ready:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rocky8.mylab.io Ready control-plane 7m58s v1.26.1 65.21.188.149 <none> Rocky Linux 8.7 (Green Obsidian) 4.18.0-425.3.1.el8.x86_64 cri-o://1.26.1
Step 8: Add worker nodes
With the control plane ready you can add worker nodes to the cluster for running scheduled workloads.
If the endpoint address is not in DNS, add a record to /etc/hosts on each worker node.
$ sudo vim /etc/hosts
172.29.20.5 k8sapi.computingforgeeks.com
Use the worker join command printed during initialization to add a worker node to the cluster.
kubeadm join k8sapi.computingforgeeks.com:6443 \
--token zoy8cq.6v349sx9ass8dzyj \
--discovery-token-ca-cert-hash sha256:14a6e33ca8dc9998f984150bc8780ddf0c3ff9cf6a3848f49825e53ef1374e24
Output:
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.26" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run the command below on the control-plane node to confirm that the node joined the cluster.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.computingforgeeks.com   Ready   control-plane   18m   v1.26.1
k8s-worker01.computingforgeeks.com Ready <none> 98s v1.26.1
If the join token is expired, refer to our guide on how to join worker nodes.
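If needed, a fresh worker join command with a new token can be printed from the control-plane node:
sudo kubeadm token create --print-join-command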
Step 9: Deploy application on cluster
For a single-node cluster, check out our guide on how to run pods on control-plane nodes.
We need to validate that our cluster is working by deploying an application.
kubectl apply -f https://k8s.io/examples/pods/commands.yaml
Check to see if the pod started:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
command-demo 0/1 Completed 0 40s
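The demo pod runs a one-shot command and then completes. You can view its output and clean it up afterwards:
kubectl logs command-demo
kubectl delete pod command-demo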
Step 10: Install Kubernetes Dashboard (Optional)
Kubernetes dashboard can be used to deploy containerized applications to a Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources.
Refer to our guide for installation: How To Install Kubernetes Dashboard with NodePort
Step 11: Install an Ingress Controller
If you need an Ingress controller for Kubernetes workloads, you can use our guide on the installation process.