Day 20 — Mastering Multi-Cluster Kubernetes with HAProxy

Introduction

Welcome to Day 20 of our exciting journey into the world of Kubernetes! Today, we’re diving deep into the realm of multi-cluster Kubernetes, a topic that’s gaining immense popularity for its ability to distribute workloads across different clusters. We’ll also explore the role of HAProxy, a powerful load balancer, in orchestrating our multi-cluster setup. So, let’s strap in and embark on this enlightening adventure.

Why Multi-Cluster?

Multi-cluster Kubernetes setups are beneficial for various reasons, including high availability, disaster recovery, and geographical distribution. Having multiple clusters can ensure that if one cluster fails, your application remains available in other clusters. It also helps distribute workloads geographically, improving latency for users in different regions.

HAProxy

HAProxy is used as a load balancer to distribute traffic across multiple Kubernetes clusters. It plays a crucial role in maintaining high availability by redirecting traffic to available clusters. In the provided setup, it acts as an entry point, routing requests to the appropriate Kubernetes cluster.

I have included the details for all five servers, so you get a high-level overview of each one before we start configuring.

We have to set up five servers: two Master Nodes, two Worker Nodes, and one HAProxy load balancer.

Create HAProxy Server (EC2 Instance)

Click on Launch instances

Enter the name of the instance and select Ubuntu 22.04 (this specific version is required).

The instance type will be t2.micro. Click on Create new key pair for this demo.

Enter the key pair name, keep the default settings, and click on Create key pair.

Select the default VPC, choose a subnet in the us-east-1a availability zone, and create a new security group that allows All Traffic with the Source type set to Anywhere.

Here, we have configured everything for the HAProxy server, so click on Launch instance.

Creating Master Nodes (EC2 Instances)

Here, we have to set up the two Master Nodes.

Enter the name of the instance, select Ubuntu 22.04 (required), and set the number of instances on the right to 2, which saves us some time.

The master nodes need 2 vCPUs, which the t2.medium instance type provides.

Provide the same key pair that we used for the HAProxy server.

Select the same VPC and Subnet that we used for the HAProxy server.

Select the same Security Group that we created for the HAProxy server.

Creating Worker Nodes (EC2 Instances)

Here, we have to set up the two Worker Nodes.

Enter the name of the instance, select Ubuntu 22.04 (required), and set the number of instances on the right to 2, which saves us some time.

The worker nodes don’t need 2 vCPUs, so the instance type can stay t2.micro.

Provide the same key pair that we used for the HAProxy server.

Select the same VPC and Subnet that we used for the HAProxy server.

Select the same Security Group that we created for the HAProxy server.

Right now, both master instances share the same name, as do both workers. So, rename each of them individually, for example masternode1 and masternode2, and do the same for the worker nodes.

These are the five servers that we have created in total.

Now, we have to configure all the servers. Let’s start with the HAProxy server.

On HAProxy Server

Before connecting over SSH, restrict the permissions of the PEM file that we will use:

sudo su
chmod 400 Demo-MultiCluster.pem

Now, SSH into the HAProxy server.
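Assuming the key pair we created earlier (Demo-MultiCluster.pem) and the default ubuntu user for Ubuntu AMIs, the command looks roughly like this; replace the placeholder with your HAProxy instance’s public IP:

ssh -i Demo-MultiCluster.pem ubuntu@<haproxy-public-ip>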

To become the root user, run the below command:

sudo su

Now, update the package index and install haproxy, which we will use for our Kubernetes multi-cluster setup:

apt update && apt install -y haproxy

Here, we have to configure the frontend and the backend for the Kubernetes multi-cluster setup.

Open the haproxy.cfg file and add the snippets below, adjusted to your own private IPs:

vim /etc/haproxy/haproxy.cfg

Remember, the frontend block must bind to the HAProxy server’s private IP.

In the backend block, both Master Nodes’ private IPs need to be present.

frontend kubernetes-frontend
    bind 172.31.22.132:6443
    mode tcp
    option tcplog
    default_backend kubernetes-backend

backend kubernetes-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server kmaster1 172.31.23.243:6443 check fall 3 rise 2
    server kmaster2 172.31.28.74:6443 check fall 3 rise 2

Once you add the frontend and backend, restart the haproxy service

systemctl restart haproxy

Now, check whether the haproxy service is running:

systemctl status haproxy

If you look at the last few lines of the output, the kmaster1 and kmaster2 servers are reported as DOWN, which is expected at this stage because the control planes are not running yet. It also confirms that the frontend and backend configuration has been picked up.

Now, add hostname entries for all five servers’ private IPs to the /etc/hosts file, like below:

vim /etc/hosts

172.31.23.243 k8master1.node.com node.com k8master1
172.31.28.74 k8master2.node.com node.com k8master2
172.31.31.111 k8worker1.node.com node.com k8worker1
172.31.22.133 k8worker2.node.com node.com k8worker2
172.31.22.132 lb.node.com node.com lb

Now, try to ping all four servers (masters and workers) from the HAProxy server. If the packets are coming back, we are good to go for the next step, which is configuring the Master Nodes.
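For example, using the hostnames we just added to /etc/hosts (the -c flag stops each ping after three packets):

ping -c 3 k8master1.node.com
ping -c 3 k8master2.node.com
ping -c 3 k8worker1.node.com
ping -c 3 k8worker2.node.com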

On Master Nodes

I have shown the snippets for one Master Node only, but I configured both Master Nodes. So, make sure to run each and every step on both Master Nodes.

Log in to both of your Master Nodes.

Once you are logged into both machines, run the commands below on both Master Nodes.

Now, add hostname entries for all five servers’ private IPs to the /etc/hosts file, like below:

vim /etc/hosts

172.31.23.243 k8master1.node.com node.com k8master1
172.31.28.74 k8master2.node.com node.com k8master2
172.31.31.111 k8worker1.node.com node.com k8worker1
172.31.22.133 k8worker2.node.com node.com k8worker2
172.31.22.132 lb.node.com node.com lb

After saving and closing the hosts file, run the below commands.

sudo su
ufw disable
reboot

Now, log in to both machines again after 2 to 3 minutes.

Run the below commands

sudo su
swapoff -a; sed -i '/swap/d' /etc/fstab

Run the below commands

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system

Install some dependency packages and add the Kubernetes apt repository:

sudo apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

As we have added the GPG key and the repository, run the update command:

apt update

Now, install Docker on both master nodes; this also installs containerd, which we configure next:

apt install docker.io -y

Do some configuration for the containerd service:

mkdir -p /etc/containerd
# Generate the default containerd configuration
sh -c "containerd config default > /etc/containerd/config.toml"
# Switch containerd to the systemd cgroup driver, which kubelet expects
sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd.service

Now, install kubelet, kubeadm, and kubectl on both Master Nodes:

apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Now, restart the kubelet service and don’t forget to enable it, so that if a master node reboots, kubelet starts automatically and we don’t need to start it by hand.

sudo systemctl restart kubelet.service
sudo systemctl enable kubelet.service

Only on Master Node1

The next command needs to be run on Master Node1 only.

We have to run kubeadm init, providing the HAProxy server’s private IP as the control-plane endpoint and, at the end, Master Node1’s own private IP as the advertise address.

kubeadm init --control-plane-endpoint="<haproxy-private-ip>:6443" --upload-certs --apiserver-advertise-address=<master1-private-ip>

kubeadm init --control-plane-endpoint="172.31.22.132:6443" --upload-certs --apiserver-advertise-address=172.31.23.243

Once you run the above command, scroll down.

Once you scroll down, you will see that the Kubernetes control plane has initialized successfully, which means Master Node1 is now serving behind the HAProxy endpoint. Now, we have to add Master Node2 to the control plane as well. Follow the steps below:

  • Copy the Red, Blue, and Green highlighted commands from the kubeadm init output and paste them into your notepad.
  • Now, run the Red highlighted commands on the Master node1 itself.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

On Master Node2

Now, we need to do one more thing so that Master Node2 joins the control plane through the HAProxy endpoint as well. Follow the steps:

  • We have to use the Blue highlighted command, but we need to append one more flag to it, as shown below (add --apiserver-advertise-address=<master2-private-ip> at the end).

kubeadm join 172.31.22.132:6443 --token 0vzbaf.slplmyokc1lqland \
    --discovery-token-ca-cert-hash sha256:75c9d830b358fd6d372e03af0e7965036bce657901757e8b0b789a2e82475223 \
    --control-plane --certificate-key 0a5bec27de3f27d623c6104a5e46a38484128cfabb57dbd506227037be6377b4 \
    --apiserver-advertise-address=172.31.28.74

Once you have followed the above steps, run the below commands on Master Node2:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Now, if you run the command ‘kubectl get nodes’ on either Master Node, you will see both master nodes listed, but they are not in Ready status because we have not configured the network yet. We will configure that once the Worker Nodes are set up.

Note: Copy the Green highlighted command, which we will use to connect the Worker Nodes.
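For reference, the Green highlighted worker join command generally has the shape below; the token and hash here are placeholders, and the real values come from your own kubeadm init output:

kubeadm join 172.31.22.132:6443 --token <your-token> \
    --discovery-token-ca-cert-hash sha256:<your-hash>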

On Both Worker Nodes

Now, Let’s configure our Worker Nodes.

I have shown the snippets for one Worker Node only, but I configured both Worker Nodes. So, make sure to run each and every step on both Worker Nodes.

Log in to both of your Worker Nodes.

Once you are logged into both machines, run the commands below on both Worker Nodes.

Now, add hostname entries for all five servers’ private IPs to the /etc/hosts file, like below:

vim /etc/hosts

172.31.23.243 k8master1.node.com node.com k8master1
172.31.28.74 k8master2.node.com node.com k8master2
172.31.31.111 k8worker1.node.com node.com k8worker1
172.31.22.133 k8worker2.node.com node.com k8worker2
172.31.22.132 lb.node.com node.com lb

After saving and closing the hosts file, run the below commands.

sudo su
ufw disable
reboot

Now, log in again to both machines after 2 to 3 minutes.

Run the below commands

sudo su
swapoff -a; sed -i '/swap/d' /etc/fstab

Run the below commands:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system

Install some dependency packages and add the Kubernetes apt repository:

sudo apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

As we have added the GPG key and the repository, run the update command:

apt update

Now, install Docker on both worker nodes; this also installs containerd, which we configure next:

apt install docker.io -y

Do some configuration for the containerd service:

mkdir -p /etc/containerd
# Generate the default containerd configuration
sh -c "containerd config default > /etc/containerd/config.toml"
# Switch containerd to the systemd cgroup driver, which kubelet expects
sed -i 's/ SystemdCgroup = false/ SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd.service

Now, install kubelet, kubeadm, and kubectl on both Worker Nodes:

apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Now, restart the kubelet service and don’t forget to enable it, so that if a worker node reboots, kubelet starts automatically and we don’t need to start it by hand.

sudo systemctl restart kubelet.service
sudo systemctl enable kubelet.service

If you remember, I told you to copy the Green highlighted command (the worker kubeadm join command).
Paste and run that command on both Worker Node1 and Worker Node2.

Once you do that, you will see the output like the below snippet.

Run on any Master Node

Let’s validate that both Worker Nodes have joined the Kubernetes cluster by running the below command.

kubectl get nodes

If you can see all four nodes, then congratulations, you have done 99% of the work.

As you know, all of our nodes are still not in Ready status because the networking components are missing.

Run the below command to add the Calico networking components to the Kubernetes cluster.

kubectl apply -f raw.githubusercontent.com/projectcalico/cal..

After 2 to 3 minutes, if you run the command ‘kubectl get nodes’, you will see that all nodes have moved to Ready status.
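If you would like to watch the nodes flip to Ready as the Calico pods come up, you can run:

kubectl get nodes --watch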

Let’s deploy the Nginx Container on Worker Node1 and the Apache Container on Worker Node2

To achieve this, you have to perform the commands on Master Nodes only.

Add a label to both worker nodes. Replace the node name placeholders below with the actual names shown by ‘kubectl get nodes’.

For WorkerNode1

kubectl label nodes <worker-node1-name> mynode=node1

For WorkerNode2

kubectl label nodes <worker-node2-name> mynode=node2

You can also validate whether the labels are added to both Worker Nodes or not by running the below command

kubectl get nodes --show-labels

Let’s create two containers on the two different Worker Nodes, one from each Master Node.

I am creating an Nginx Container on Worker Node1 from Master node1

Here is the deployment YML file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        mynode: node1 # This deploys the container on node1
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Here is the service YML file

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer # Use LoadBalancer for external access

Apply both files by the below commands

kubectl apply -f deployment.yml
kubectl apply -f service.yml

Validate whether the deployment is complete or not by running the below commands

kubectl get deploy
kubectl get pods
kubectl get svc
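To confirm that the nodeSelector actually placed the pod on the node labeled mynode=node1, you can also check which node the pod landed on:

kubectl get pods -o wide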

Now, to check whether the application is reachable from outside the cluster, copy Worker Node1’s public IP and use the NodePort shown by the ‘kubectl get svc’ command that I have highlighted in the snippet.
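If you prefer to check from a terminal, a quick curl works as well; both values below are placeholders for Worker Node1’s public IP and the NodePort from ‘kubectl get svc’:

curl http://<worker-node1-public-ip>:<node-port>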

Here, you can see our Nginx container is reachable from outside the cluster.

The second Container from the Second Master Node on the Second Worker Node

I am creating an Apache container on Worker Node2 from Master Node2.

Here is the deployment YML file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      nodeSelector:
        mynode: node2 # This deploys the container on node2
      containers:
      - name: apache
        image: httpd:latest
        ports:
        - containerPort: 80

Here is the service YML file

apiVersion: v1
kind: Service
metadata:
  name: apache-service
spec:
  selector:
    app: apache
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer # Use LoadBalancer for external access

Apply both files by the below commands

kubectl apply -f deployment.yml
kubectl apply -f service.yml

Validate whether the deployment is complete or not by running the below commands

kubectl get deploy
kubectl get pods
kubectl get svc

Now, to check whether the application is reachable from outside the cluster, copy Worker Node2’s public IP and use the NodePort that the ‘kubectl get svc’ command shows for port 80.
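The same quick check from a terminal, with placeholders for Worker Node2’s public IP and the Apache service’s NodePort:

curl http://<worker-node2-public-ip>:<node-port>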

Here, you can see our Apache container is reachable from outside the cluster.

Conclusion

In conclusion, Day 20 has taken us on an incredible journey through the intricacies of multi-cluster Kubernetes with HAProxy. We’ve set up a multi-cluster environment, configured HAProxy for load balancing, and even deployed applications across different clusters. This newfound knowledge is invaluable, as it equips us to build scalable, resilient, and highly available Kubernetes setups.

As we continue to explore the vast landscape of DevOps, Kubernetes, and more, remember that learning is a journey. Each day brings us closer to mastering the art of container orchestration and the world of cloud-native technologies. So, keep the curiosity alive, stay eager to learn, and watch as your expertise in these domains continues to grow.

Stay tuned for Day 21, where we’ll delve even deeper into the fascinating world of DevOps. Until then, keep practicing, keep exploring, and keep embracing the wonderful world of Kubernetes.

Want to Know About Challenge?

If you’re eager to learn more and join our challenge through the GitHub Repository, stay tuned for the upcoming posts. Follow for more exciting insights into the world of Kubernetes!

GitHub Repository: https://github.com/AmanPathak-DevOps/30DaysOfKubernetes

#Kubernetes #MultiCluster #HAProxy #ContainerOrchestration #DevOps #K8sLearning

See you on Day 21 as we unravel more Kubernetes mysteries!

Stay connected on LinkedIn: LinkedIn Profile

Stay up-to-date with GitHub: GitHub Profile

Feel free to reach out to me, if you have any other queries.

Happy Learning