[K8S&K3S] Multi-node deployment for K3s
This experiment covers a multi-node K3s deployment on Ubuntu 18.04 with 2 CPU cores and 4 GB of RAM.
I will use three servers: one master node and two worker nodes.
Environment Introduction
Why use Ubuntu?
CentOS 7, Ubuntu 18.04, and Windows 10 are all popular operating systems that support kubeadm.
Our next experiment will test K8s in the same environment as K3s.
K3s does not run perfectly on CentOS and does not support Windows 10,
so we use Ubuntu 18.04 for the tests.
Why use 2 cores and 4 GB of RAM?
Because this is the minimum configuration required by a single-node K8s setup.
K3s itself only needs 1 core and 512 MB of RAM to run.
Environment construction
I use VMware on Windows 10.
Compared with a physical computer, a virtual machine makes configuration changes, cloning, and server rollback very convenient, which suits rapid experiments.
Of course, you can also run the experiment on a physical machine if you have one available.
The experimental steps
- Start by connecting to the server over SSH.
- Install Docker.
- Because of the restricted network environment in China, the Aliyun mirror is used here; users outside China can simply drop the --mirror Aliyun argument.
curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
It is recommended to take a snapshot of the virtual machine once Docker is installed; in later experiments we can then quickly clone the current environment.
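Before taking the snapshot, a quick sanity check confirms the install script worked. This is only a sketch; the exact `docker --version` output varies by version.

```shell
# Sketch: confirm Docker is present before snapshotting the VM.
if command -v docker >/dev/null 2>&1; then
    msg="docker installed: $(docker --version)"
else
    msg="docker not found - rerun the install script"
fi
echo "$msg"
```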
2. Install K3s with the official script
curl -sfL https://get.k3s.io | sh -
For users in China, K3s provides a customized installation script:
curl -sfL https://docs.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn sh -
- K3s uses containerd as its container runtime by default.
For comparison with K8s, we will use Docker instead.
After installation, we need to edit the K3s service file to switch the default container engine from containerd to Docker.
vim /etc/systemd/system/multi-user.target.wants/k3s.service
Here, we need to modify the value of ExecStart and change it to:
/usr/local/bin/k3s server --docker --no-deploy traefik
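If you prefer not to edit the file with vim, the same change can be scripted. The sketch below runs against a local stand-in file rather than the real /etc/systemd/system/multi-user.target.wants/k3s.service; the flag values come straight from the step above.

```shell
# Sketch: apply the ExecStart change non-interactively.
# Edits a stand-in copy; point UNIT at the real k3s.service to use it.
UNIT=k3s.service.sample
printf 'ExecStart=/usr/local/bin/k3s server\n' > "$UNIT"  # stand-in line

sed -i 's|^ExecStart=/usr/local/bin/k3s server.*|ExecStart=/usr/local/bin/k3s server --docker --no-deploy traefik|' "$UNIT"
cat "$UNIT"
```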
- Save and exit, then execute the command to reload the new service configuration:
systemctl daemon-reload
Restart k3s
service k3s restart
- Verify that the K3S cluster is ready
k3s kubectl get node
If it succeeds, the node will show as Ready:
NAME STATUS ROLES AGE VERSION
k3s01.ilemonrain.io Ready <none> 3m34s v1.14.1-k3s.4
The K3s cluster has started successfully.
- K3s can directly use the kubectl commands familiar from K8s:
kubectl get node
NAME STATUS ROLES AGE VERSION
k3s01.ilemonrain.io Ready <none> 3m34s v1.14.1-k3s.4
- Repeat the steps above to deploy all three servers.
We will treat the two servers deployed next as worker nodes.
Stop the current K3s process on them with:
service k3s stop
- The first node is the master node.
Execute the following command on it to get the token for registering cluster members:
cat /var/lib/rancher/k3s/server/node-token
This string lets us connect the worker nodes quickly.
We also need to know the IP address of the master node.
If you are installing on a physical machine, you can get the public IP directly:
curl ip.sb
But I am using a virtual machine, so I need the internal IP:
ifconfig -a
- Register the worker nodes with the cluster.
Run this command on the nodes where we just stopped the k3s service:
k3s agent --server https://[master IP]:6443 --token [Token]
- Replace [master IP] with the IP obtained in the previous step.
- Replace [Token] with the token obtained in the previous step.
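The two substitutions can also be scripted. In this sketch the token file and master IP are stand-ins; on a real cluster the token lives in /var/lib/rancher/k3s/server/node-token on the master.

```shell
# Sketch: build the worker registration command from the two values.
echo 'K10abc::node:xyz' > node-token.sample   # stand-in for the real token file
TOKEN=$(cat node-token.sample)
MASTER_IP="192.168.1.10"                      # hypothetical; use the master's internal IP

# The command the worker would run:
CMD="k3s agent --server https://${MASTER_IP}:6443 --token ${TOKEN}"
echo "$CMD"
```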
- Now let's go to the master node and see what happens:
k3s kubectl get nodes
The output is as follows
NAME STATUS ROLES AGE VERSION
server01 Ready master 18m v1.14.5-k3s.1
server02 Ready worker 60s v1.14.5-k3s.1
server03 Ready worker 60s v1.14.5-k3s.1
- The worker nodes are not fully set up yet; we still need to make some final changes.
The following operations are performed on the worker node.
Press Ctrl+C to terminate the foreground agent process, then edit the service file:
vim /etc/systemd/system/k3s.service
The default file looks like this:
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target
[Service]
Type=notify
EnvironmentFile=/etc/systemd/system/k3s.service.env
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
server \
KillMode=process
Delegate=yes
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
[Install]
WantedBy=multi-user.target
Let's modify these lines:
ExecStart=/usr/local/bin/k3s \
server \
Using the same substitution rules as in the previous step, change them to:
ExecStart=/usr/local/bin/k3s agent --server https://[Master IP]:6443 --token [Token]
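This edit can also be done non-interactively. The sketch below joins the two backslash-continued lines and swaps in the agent command; it runs against a stand-in file (the real file is /etc/systemd/system/k3s.service), and MASTER_IP and TOKEN are hypothetical values.

```shell
# Sketch: rewrite the two-line ExecStart stanza into agent mode.
UNIT=k3s.service.sample        # stand-in; real file is /etc/systemd/system/k3s.service
MASTER_IP="192.168.1.10"       # hypothetical master address
TOKEN="K10abc::node:xyz"       # hypothetical node token

# Reproduce the stock two-line layout shown above.
printf 'ExecStart=/usr/local/bin/k3s \\\n    server \\\n' > "$UNIT"

# Join the backslash-continued lines, then replace the joined command.
sed -i -e ':a;N;$!ba;s/\\\n[[:space:]]*//g' \
    -e "s|^ExecStart=/usr/local/bin/k3s server.*|ExecStart=/usr/local/bin/k3s agent --server https://${MASTER_IP}:6443 --token ${TOKEN}|" "$UNIT"
cat "$UNIT"
```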
Reload the configuration file after exiting:
systemctl daemon-reload
Restart K3S
service k3s restart
The worker node setup is now complete.
We check again on the master node:
k3s kubectl get nodes
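To avoid eyeballing the table, the STATUS column can be checked mechanically. In this sketch the sample output from above is pasted in via a heredoc so the logic is visible; on a live cluster you would pipe `k3s kubectl get nodes` in instead.

```shell
# Sketch: flag any node whose STATUS column is not "Ready".
not_ready=$(awk 'NR > 1 && $2 != "Ready" { print $1 }' <<'EOF'
NAME       STATUS   ROLES    AGE   VERSION
server01   Ready    master   18m   v1.14.5-k3s.1
server02   Ready    worker   60s   v1.14.5-k3s.1
server03   Ready    worker   60s   v1.14.5-k3s.1
EOF
)
if [ -z "$not_ready" ]; then
    echo "all nodes Ready"
else
    echo "not ready: $not_ready"
fi
```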
The experimental conclusion
Compared with single-node deployment, multi-node deployment of K3s is quite simple:
it only adds two steps, binding the token and changing the node's role.
For me, it is easy and quick to do.
The multi-node deployment of K8s will be described later.
It also uses a token for binding, but with a few different settings.
The only problem I found is that K8s offers nothing like the official K3s one-click install script.
I think that is what makes K8s deployment more complicated.