[K8S&K3S] Multi-node deployment for K8S
This experiment covers a multi-node K8S deployment on Ubuntu 18.04 with 2 CPU cores and 4 GB of memory per machine.
I will use three servers: one master node and two worker nodes.
Environment Introduction
Why use Ubuntu?
CentOS 7, Ubuntu 18.04, and Windows 10 are all popular operating systems that support kubeadm.
However, the next experiment will be run in the same environment as K3s,
and K3s does not run perfectly on CentOS and does not support Windows 10,
so Ubuntu 18.04 is used for the test.
Why use 2 cores and 4 GB of memory?
Because this is the minimum configuration required to run a single K8S node.
By contrast, K3s only needs 1 core and 512 MB of memory to run.
Environment construction
I use VMware on Windows 10.
Compared with a physical computer, a virtual machine makes configuration changes, cloning, and server rollback much more convenient, so it is better suited to rapid experiments.
Of course, if you have the resources, you can also run the experiment on physical machines.
The experimental steps
Start by connecting to each server over SSH.
1. Disable swap to improve performance
swapoff -a
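swapoff only disables swap until the next reboot. To keep it off permanently, you can also comment out the swap entry in /etc/fstab, for example with a one-liner like the sketch below (assuming a standard space-separated fstab entry):
sed -i '/ swap / s/^/#/' /etc/fstab   # comment out any line containing " swap "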
2. Configure the kernel network settings
modprobe br_netfilter
echo 1 > /proc/sys/net/ipv4/ip_forward
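modprobe only loads the module for the current session. If you want br_netfilter to load automatically after a reboot, one optional approach (a sketch using systemd's modules-load mechanism) is:
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF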
3. Install Docker
Because of China's special network environment, the Aliyun mirror is added here to speed up the download; users outside China can simply drop the --mirror Aliyun option.
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
After Docker is installed, it is recommended to take a snapshot of the virtual machine so that the current environment can be cloned quickly in later experiments.
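An optional quick check that Docker was installed and is running before taking the snapshot:
docker --version
systemctl is-active docker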
4. Because this is a multi-node installation, set a hostname on each server so they are easier to tell apart:
hostnamectl set-hostname master
Then modify the hosts file. 192.168.110.11, 192.168.110.22, and 192.168.110.33 are the server addresses I prepared:
[root@master ~]# cat >> /etc/hosts << EOF
192.168.110.11 master
192.168.110.22 node2
192.168.110.33 node3
EOF
Because I cloned the virtual machine environment, the MAC addresses of the machines need to be different.
After searching for information, I found a solution, but it applies only to VMware.
5. Modify the kernel parameters required by K8S:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
6. Modify the Docker cgroup driver to eliminate this warning:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
more /etc/docker/daemon.json
{
"registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
Reload for the change to take effect:
systemctl daemon-reload && systemctl restart docker
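To confirm that the new cgroup driver is actually in effect, an optional check:
docker info | grep -i cgroup   # should now report: Cgroup Driver: systemd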
7. Set the Kubernetes package source (configure the Aliyun mirror repository). Since this environment is Ubuntu 18.04, the apt source is used here:
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
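Optionally, you can list the versions available from the new source before installing (a quick check, not required):
apt-cache madison kubeadm | head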
8. Now install the Kubernetes components themselves: kubelet, kubeadm, and kubectl
apt-get install -y kubelet=1.14.2-00 kubeadm=1.14.2-00 kubectl=1.14.2-00
- kubelet runs on every node in the cluster and is responsible for starting objects such as Pods and containers
- kubeadm is the command-line tool used to initialize and bootstrap the cluster
- kubectl is the command-line client used to communicate with the cluster; with it you can deploy and manage applications, view resources, and create, delete, and update components
After installation, enable kubelet so that it starts automatically at boot:
systemctl enable kubelet && systemctl start kubelet
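Since the cluster is being pinned to version 1.14.2, it may also be worth holding the packages so that a routine apt upgrade does not bump them unexpectedly (an optional extra step):
apt-mark hold kubelet kubeadm kubectl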
9. Download the K8S Docker images
Almost all of the Kubernetes components and their Docker images are hosted on Google's own registry, which may be unreachable directly because of network problems. The workaround here is to pull the images from the Alicloud mirror registry and then re-tag them locally with the default k8s.gcr.io names. Run the following script on every node (a usage example follows the script).
more image.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.14.2
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
docker pull $url/$imagename
docker tag $url/$imagename k8s.gcr.io/$imagename
docker rmi -f $url/$imagename
done
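A minimal way to run the script and confirm the images have been re-tagged locally (the script is saved as image.sh, as shown above):
chmod +x image.sh
./image.sh
docker images | grep k8s.gcr.io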
10. Initialize the master node with kubeadm:
kubeadm init --apiserver-advertise-address 192.168.110.11 --pod-network-cidr=10.244.0.0/16
- --apiserver-advertise-address specifies the address the master's API server advertises to the rest of the cluster
- --pod-network-cidr specifies the address range used by the Pod network
After a successful initialization, it prints join information, including a token, to the terminal; it is very important to save it:
more join_cluster
kubeadm join 192.168.110.11:6443 --token z562f4.rs3yvzplnh3o80zn \
--discovery-token-ca-cert-hash sha256:1661c4532d4054a3f312a8f1e2232ef0b19033482aa52ca4e97f574b139e1cf7
Tokens were also covered in the previous K3s deployment blog post, which shows that the join mechanism of the two is similar.
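If the join command is ever lost, it can be regenerated on the master instead of re-initializing (tokens also expire after 24 hours by default):
kubeadm token create --print-join-command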
11. Configure the environment variables:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
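At this point kubectl can already reach the API server; a quick check (the master usually shows NotReady until the Pod network in the next step is installed):
kubectl get nodes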
12. Install the Pod network:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
By default, the cluster does not schedule Pods on the master. The command below removes that restriction, but leaving it in place does not affect a multi-node cluster:
kubectl taint nodes master node-role.kubernetes.io/master-
node/master untainted
By now the master node is installed, but it cannot really be used without worker nodes.
13. Deploy the worker nodes
Repeat the first 9 steps on each worker node, including installing kubelet, kubeadm, and kubectl and downloading the images.
Then use the join command we saved on the master node:
kubeadm join 192.168.110.11:6443 --token z562f4.rs3yvzplnh3o80zn \
--discovery-token-ca-cert-hash sha256:1661c4532d4054a3f312a8f1e2232ef0b19033482aa52ca4e97f574b139e1cf7
It only takes this one line of command to add a node.
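Back on the master, you can verify that both nodes have joined; it may take a little while for them to change from NotReady to Ready:
kubectl get nodes -o wide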
The experimental conclusion
If you have already done the previous single-node deployment exercise, multi-node deployment is really only a step or two more: the nodes join the cluster with a token, and K3s and K8S are the same in this respect.
When I first learned K8S, I ran into many problems during the installation. The biggest one has been the difficulty of downloading foreign images from China; it is a network problem, not really my problem. K3s officially provides a one-click deployment script, and kindly provides a Chinese version as well, so deployment is very easy for beginners.
The official K8S documentation is also comprehensive, but I did not find anything like the K3s one-click install script, which adds to the learning burden for a beginner. A month ago I did not even know terms like Pod and Node, so I had no idea how to solve the problems I ran into. Instead, I settled down to understand these concepts first, then deployed K3s with one click and did some experiments. With some understanding of the terminology, I learned a lot by coming back and installing K8S.
From this series of experiments, I think K3s is more convenient for studying. I have several cloud servers of the kind students typically use, and none of them meet the hardware requirements of K8S, but K3s, which is oriented toward edge computing, runs on my servers directly. So, purely for my own learning, I would recommend K3s more.