In the realm of managing applications, Kubernetes reigns supreme. It’s like having a conductor for your software orchestra, making sure everything plays in harmony. This blog will guide you through setting up a Multi-Master Kubernetes cluster with External Etcd using Kubeadm.
Environment
1. HAProxy load balancer: at 20.20.20.100, HAProxy provides a single, highly available endpoint for the control plane
2. Kubernetes Master Nodes:
- 20.20.20.11
- 20.20.20.12
- 20.20.20.13
3. Kubernetes Worker Nodes:
- 20.20.20.21
- 20.20.20.22
- 20.20.20.23
4. Etcd Nodes: here we run etcd on the same machines as the masters, but as a separate, self-managed cluster rather than one managed by kubeadm (this is what kubeadm calls external etcd)
Setup Hosts
1. Exec on all nodes
vim /etc/hosts
20.20.20.100 k8s-lb
20.20.20.11 k8s-master-1
20.20.20.12 k8s-master-2
20.20.20.13 k8s-master-3
20.20.20.21 k8s-worker-1
20.20.20.22 k8s-worker-2
20.20.20.23 k8s-worker-3
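To confirm that every alias resolves and is reachable, a quick loop like the one below can be run from any node (a convenience check only; the hostnames are the ones defined above):
for h in k8s-lb k8s-master-1 k8s-master-2 k8s-master-3 k8s-worker-1 k8s-worker-2 k8s-worker-3; do
  # one ping with a 1-second timeout per host
  ping -c 1 -W 1 "$h" > /dev/null && echo "$h OK" || echo "$h UNREACHABLE"
done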
Installing the HAProxy load balancer
Exec on lb node.
1. Update apt repository.
apt update && apt upgrade -y
2. Install haproxy.
sudo apt-get install haproxy
3. Configure HAProxy to load balance the traffic between the three Kubernetes master nodes.
vim /etc/haproxy/haproxy.cfg
global
    ...

defaults
    ...

frontend kubernetes
    bind 20.20.20.100:6443
    option tcplog
    mode tcp
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-master-1 20.20.20.11:6443 check fall 3 rise 2
    server k8s-master-2 20.20.20.12:6443 check fall 3 rise 2
    server k8s-master-3 20.20.20.13:6443 check fall 3 rise 2
4. Restart haproxy.
sudo systemctl restart haproxy
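To verify that HAProxy came back up and is listening on the frontend port (a sanity check; the backends will report DOWN until the API servers actually exist):
sudo systemctl status haproxy --no-pager
sudo ss -tlnp | grep 6443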
Generating TLS Certificates for the Etcd Cluster
Exec on master-1
1. Download the cfssl binaries. (If pkg.cfssl.org is no longer serving them, the same binaries are published on the cloudflare/cfssl GitHub releases page.)
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
2. Add the execution permission to the binaries
chmod +x cfssl*
3. Move the binaries to /usr/local/bin
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
4. Verify
cfssl version
5. Create the certificate authority configuration file
vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
6. Create the certificate authority signing request configuration file.
vim ca-csr.json
{
  "CN": "Kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IE",
      "L": "Cork",
      "O": "Kubernetes",
      "OU": "CA",
      "ST": "Cork Co."
    }
  ]
}
7. Generate the certificate authority certificate and private key.
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
8. Verify that the ca-key.pem and the ca.pem were generated.
ls -la
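Optionally, inspect the generated CA certificate with openssl to confirm the subject and validity period:
openssl x509 -in ca.pem -noout -subject -issuer -dates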
9. Create the certificate signing request configuration file.
vim kubernetes-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "IE",
      "L": "Cork",
      "O": "Kubernetes",
      "OU": "Kubernetes",
      "ST": "Cork Co."
    }
  ]
}
10. Generate the certificate and private key.
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=20.20.20.11,20.20.20.12,20.20.20.13,20.20.20.100,127.0.0.1,kubernetes.default \
-profile=kubernetes kubernetes-csr.json | \
cfssljson -bare kubernetes
11. Verify that the kubernetes-key.pem and the kubernetes.pem file were generated.
ls -la
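Optionally, confirm that all the node and load balancer IPs made it into the certificate's SAN list:
openssl x509 -in kubernetes.pem -noout -text | grep -A 1 'Subject Alternative Name'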
Installing Docker (we actually only use containerd)
Exec on all master and worker nodes
1. Install docker
sudo -s
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
2. Enable the CRI plugin. The containerd build that ships with Docker disables CRI by default; the sed below comments out the disabled_plugins line.
sed -i 's/^disabled_plugins = \["cri"\]/# &/' /etc/containerd/config.toml
systemctl restart containerd
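To check that containerd restarted cleanly and the CRI plugin is now loaded (a quick sanity check):
systemctl is-active containerd
ctr plugins ls | grep cri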
Installing kubeadm, kubelet, and kubectl
Exec on all master and worker nodes
1. Add the Google repository key.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
2. Add the Google repository.
echo 'deb http://apt.kubernetes.io kubernetes-xenial main' > /etc/apt/sources.list.d/kubernetes.list
Note: the apt.kubernetes.io repository (and apt-key itself) has since been deprecated in favor of the community-owned pkgs.k8s.io; if these two steps fail, follow the current package installation instructions in the Kubernetes documentation.
3. Update the list of packages and install kubelet, kubeadm and kubectl.
apt update
apt-get install kubelet kubeadm kubectl -y
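It is also worth pinning the packages so an unattended apt upgrade can't skew the versions, and confirming what was installed (optional, but recommended by the kubeadm docs):
apt-mark hold kubelet kubeadm kubectl
kubeadm version -o short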
4. Disable swap (swapoff turns it off immediately; the sed comments out the fstab entry so it stays off after a reboot).
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
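Verify that no swap remains active (the command should print nothing):
swapon --show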
Installing and configuring Etcd (External etcd)
Exec on all master nodes
1. Create the configuration and data directories for etcd.
sudo mkdir /etc/etcd /var/lib/etcd
2. Restrict the permissions of the etcd data directory.
sudo chmod 700 /var/lib/etcd
3. Copy the certificates from master-1.
sudo scp -T k8s-master-1:"~/ca.pem ~/kubernetes.pem ~/kubernetes-key.pem" /etc/etcd/
4. Download etcd binaries.
wget https://github.com/etcd-io/etcd/releases/download/v3.4.26/etcd-v3.4.26-linux-amd64.tar.gz
5. Extract.
tar xvzf etcd-v3.4.26-linux-amd64.tar.gz
6. Move binaries to bin path.
sudo mv etcd-v3.4.26-linux-amd64/etcd* /usr/local/bin/
7. Create an etcd systemd unit file. On each node, adjust --name, the peer/client listen URLs, and the advertise URLs to that node's own IP; the --initial-cluster list is identical on all three nodes.
vim /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \
  --name 20.20.20.11 \
  --cert-file=/etc/etcd/kubernetes.pem \
  --key-file=/etc/etcd/kubernetes-key.pem \
  --peer-cert-file=/etc/etcd/kubernetes.pem \
  --peer-key-file=/etc/etcd/kubernetes-key.pem \
  --trusted-ca-file=/etc/etcd/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --initial-advertise-peer-urls https://20.20.20.11:2380 \
  --listen-peer-urls https://20.20.20.11:2380 \
  --listen-client-urls https://20.20.20.11:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://20.20.20.11:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster 20.20.20.11=https://20.20.20.11:2380,20.20.20.12=https://20.20.20.12:2380,20.20.20.13=https://20.20.20.13:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
8. Enable and start etcd on each master node in turn (the first node will not report healthy until a quorum of peers is up).
sudo systemctl daemon-reload
sudo systemctl enable --now etcd
9. Verify cluster membership.
ETCDCTL_API=3 etcdctl member list
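The bare member list above works over the plain-HTTP localhost listener. Since client-cert auth is enabled on the TLS listener, checking the health of all three endpoints requires passing the certificates (paths as configured above):
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://20.20.20.11:2379,https://20.20.20.12:2379,https://20.20.20.13:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  endpoint health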
Initialize K8S Cluster
Exec on master-1
1. Create the configuration file for kubeadm.
vim config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "20.20.20.100:6443"
etcd:
  external:
    endpoints:
      - https://20.20.20.11:2379
      - https://20.20.20.12:2379
      - https://20.20.20.13:2379
    caFile: /etc/etcd/ca.pem
    certFile: /etc/etcd/kubernetes.pem
    keyFile: /etc/etcd/kubernetes-key.pem
networking:
  podSubnet: 10.30.0.0/16
Note: the pod subnet must be larger than a /24, because the controller-manager hands out a /24 per node by default; a /16 leaves room for all six nodes.
2. Initialize k8s cluster.
sudo kubeadm init --config=config.yaml --upload-certs | tee -a k8s-init.log
Note: copy the kubeadm join commands printed at the end of the output; they are also saved in k8s-init.log.
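If the join commands are lost or the token expires (tokens are valid for 24 hours by default), they can be regenerated on master-1:
# Prints a fresh worker join command (token + CA cert hash)
kubeadm token create --print-join-command
# Prints a fresh certificate key for joining additional control-plane nodes
sudo kubeadm init phase upload-certs --upload-certs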
Join the Other Nodes
# Paste the join commands that were copied earlier.
# For a master / control-plane node:
kubeadm join 20.20.20.100:6443 --token ll35cx.zot5tprhmmio37ix \
--discovery-token-ca-cert-hash sha256:6452be374fa227444c63818229b58690d2d8099e98f75d96b45426556398c5d4 \
--control-plane --certificate-key af4a1a50c6c283c9eb1f77835479eef0ae8a69144ad20e7e908e87bd333c53b5
# For a worker node:
kubeadm join 20.20.20.100:6443 --token ll35cx.zot5tprhmmio37ix \
--discovery-token-ca-cert-hash sha256:6452be374fa227444c63818229b58690d2d8099e98f75d96b45426556398c5d4
Set kubeconfig and verify nodes
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes
Note: the nodes will report NotReady until the overlay network is deployed in the next section.
Deploying the overlay network
1. Deploy the overlay network pods.
kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
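Since this cluster uses podSubnet 10.30.0.0/16 rather than flannel's default 10.244.0.0/16, the manifest's Network value must be adjusted to match. A minimal sketch, to use in place of the plain kubectl apply above (the manifest has since moved to the flannel-io/flannel repository, though the old URL redirects):
curl -sL https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml \
  | sed 's|10.244.0.0/16|10.30.0.0/16|' \
  | kubectl apply -f -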
2. Check that the pods are deployed properly.
kubectl get pods -n kube-system
3. Check that the nodes are in Ready state.
kubectl get nodes
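As a final smoke test, a throwaway deployment confirms that scheduling and pod networking work end to end (nginx-test is an arbitrary name; this assumes the nodes can pull the public nginx image):
kubectl create deployment nginx-test --image=nginx --replicas=3
kubectl get pods -o wide
kubectl delete deployment nginx-test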