Setting Up a k8s Cluster with kubeadm

I. Preparing the k8s Lab Environment

| K8s cluster role | IP address | Hostname | Installed components |
| --- | --- | --- | --- |
| Control node | 192.168.88.180 | k8s-master1 | kube-apiserver, kube-controller-manager, kube-scheduler, docker, etcd, calico |
| Worker node | 192.168.88.181 | k8s-node1 | kubelet, kube-proxy, docker, calico, coreDNS |
| Worker node | 192.168.88.182 | k8s-node2 | kubelet, kube-proxy, docker, calico, coreDNS |

1. kubeadm vs. Binary Installation: Which to Use When

kubeadm is the official open-source tool for quickly standing up a Kubernetes cluster, and it is currently the most convenient and widely recommended approach.

The two commands kubeadm init and kubeadm join are enough to create a Kubernetes cluster. When kubeadm initializes k8s, all the control-plane components run as pods, so they recover from failures on their own.

kubeadm is a tool that stands a cluster up quickly; it is essentially an automated deployment script that simplifies the work. Certificates and component manifests are all generated automatically, which hides many details, so you get little visibility into the individual modules.

kubeadm suits teams that deploy k8s clusters frequently, or scenarios with high automation requirements.

Binary installation: download each component's binary package from the official site and install by hand. Doing so gives you a much clearer understanding of k8s and of how each cluster component is deployed.

Both kubeadm and binary installation are suitable for production and run stably there; evaluate which fits your actual project.

2. Create the VMs and Set Static IPs

Edit the NIC configuration as shown below. This is the control node's configuration; the worker nodes differ only in the IP address.

```bash
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=92c69021-2981-4c15-8c9a-2d1efb6bb011
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.88.180
NETMASK=255.255.255.0
GATEWAY=192.168.88.2
DNS=114.114.114.114

# Restart the network
[root@localhost ~]# systemctl restart network
```

3. Configure Hostnames and the hosts File

Set the hostname on each machine:

```bash
# Run on 192.168.88.180
[root@localhost ~]# hostnamectl set-hostname k8s-master1 && bash
# Run on 192.168.88.181
[root@localhost ~]# hostnamectl set-hostname k8s-node1 && bash
# Run on 192.168.88.182
[root@localhost ~]# hostnamectl set-hostname k8s-node2 && bash
```

Configure the hosts file on every machine so the hosts can reach each other by hostname:

```bash
cat >> /etc/hosts << EOF
192.168.88.180 k8s-master1
192.168.88.181 k8s-node1
192.168.88.182 k8s-node2
EOF
```

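A quick way to confirm the hosts file works is to loop over the names and ping each one; a minimal sketch to run on every machine:

```bash
# Verify that each hostname resolves and responds
for h in k8s-master1 k8s-node1 k8s-node2; do
  ping -c 1 -W 1 "$h" > /dev/null && echo "$h OK" || echo "$h FAILED"
done
```
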
4. Set Up Passwordless SSH Between Hosts

```bash
# 1. Generate an ssh key pair on all three machines; just press Enter at each prompt
[root@k8s-master1 ~]# ssh-keygen
[root@k8s-node1 ~]# ssh-keygen
[root@k8s-node2 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:56kFtc+z1a7/cWTg9Hlh7xLcf8PHLh/LSQF113kK8os root@k8s-master1
The key's randomart image is:
+---[RSA 2048]----+
|               .=|
|          . . ..=|
|          .o oo+.|
|         . ..++++|
|        S o. .+oB|
|         +E+. .B+|
|          + + o*O|
|         o   +++X|
|        .   . .OB|
+----[SHA256]-----+

# 2. On all three machines, copy the public key to every machine
[root@k8s-master1 ~]# ssh-copy-id k8s-master1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-master1 (192.168.88.180)' can't be established.
ECDSA key fingerprint is SHA256:u7nlxwc8AwCbT5fFs2kWLcQQJpZ4cTlLBhl8qWRZtis.
ECDSA key fingerprint is MD5:80:39:be:e6:78:40:82:6b:30:de:69:74:64:11:a0:7d.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-master1'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master1 ~]# ssh-copy-id k8s-node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node1 (192.168.88.181)' can't be established.
ECDSA key fingerprint is SHA256:u7nlxwc8AwCbT5fFs2kWLcQQJpZ4cTlLBhl8qWRZtis.
ECDSA key fingerprint is MD5:80:39:be:e6:78:40:82:6b:30:de:69:74:64:11:a0:7d.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node1'"
and check to make sure that only the key(s) you wanted were added.

[root@k8s-master1 ~]# ssh-copy-id k8s-node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node2 (192.168.88.182)' can't be established.
ECDSA key fingerprint is SHA256:u7nlxwc8AwCbT5fFs2kWLcQQJpZ4cTlLBhl8qWRZtis.
ECDSA key fingerprint is MD5:80:39:be:e6:78:40:82:6b:30:de:69:74:64:11:a0:7d.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node2'"
and check to make sure that only the key(s) you wanted were added.
```

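The three ssh-copy-id runs above must be repeated from every node. A compact alternative is to loop over the hostnames; a sketch (each run still prompts once for the target's root password):

```bash
# Distribute this machine's public key to all cluster nodes
for h in k8s-master1 k8s-node1 k8s-node2; do
  ssh-copy-id "root@$h"
done
```
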
5. Disable Swap

Swap is the swap partition: when a machine runs low on memory it falls back to swap, but swap performance is poor. For performance reasons, k8s is designed to refuse to run with swap enabled by default.

kubeadm checks whether swap is off during initialization and fails if it is not. If you would rather keep swap enabled, pass --ignore-preflight-errors=Swap when installing k8s.

```bash
# Disable swap for the current boot
[root@k8s-master1 ~]# swapoff -a
[root@k8s-node1 ~]# swapoff -a
[root@k8s-node2 ~]# swapoff -a
[root@k8s-master1 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3934         151        3621          11         160        3556
Swap:             0           0           0

# Permanently disable swap
# Edit /etc/fstab and comment out the swap line
[root@k8s-master1 ~]# vim /etc/fstab
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=1f767b34-97e1-4fd0-a40e-f078e9535bfa /boot                   xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

# On cloned machines, the uuid entry also needs to be removed
[root@k8s-node1 ~]# vim /etc/fstab
/dev/mapper/centos-root /                       xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

# On cloned machines, the uuid entry also needs to be removed
[root@k8s-node2 ~]# vim /etc/fstab
/dev/mapper/centos-root /                       xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
```

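Instead of editing /etc/fstab by hand on every node, a sed one-liner can comment out the swap entry; a sketch of that shortcut:

```bash
# Comment out any active swap entry in /etc/fstab
sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab
```
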
6. Configure Kernel Parameters

```bash
[root@k8s-master1 ~]# modprobe br_netfilter
[root@k8s-master1 ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@k8s-master1 ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@k8s-master1 ~]# sysctl -p /etc/sysctl.d/k8s.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
```

(1) The sysctl command

sysctl configures kernel parameters at runtime.

-p loads parameters from the specified file; with no file given, it loads /etc/sysctl.conf.

(2) modprobe br_netfilter

The br_netfilter module is a dependency of the docker network plugin; if it is not loaded, the docker network plugin cannot work.
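
Appending modprobe to /etc/profile, as above, only reloads the module when someone logs in. A more robust alternative on systemd-based systems such as CentOS 7 is a modules-load.d drop-in, which loads the module at every boot; a minimal sketch:

```bash
# Have systemd-modules-load load br_netfilter at boot
cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF
```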

(3) net.bridge.bridge-nf-call-iptables

After installing docker on CentOS, running docker info prints warnings like these:

```bash
Warning: bridge-nf-call-iptables is disabled
Warning: bridge-nf-call-ip6tables is disabled
```

The warnings mean that net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables must be set to 1 for the docker network plugin to work properly.

(4) net.ipv4.ip_forward

By default the kernel does not forward IP packets; net.ipv4.ip_forward must be set to 1 for the docker network plugin to work.

If it is not configured, kubeadm init reports an error like this:

```bash
ERROR FileContent--proc-sys-net-ipv4-ip_forward: /proc/sys/net/ipv4/ip_forward: contents are not set to 1
```

For security, Linux disables packet forwarding by default. Forwarding means that when a host has more than one NIC and one of them receives a packet, the host sends the packet out another NIC according to the packet's destination IP, and that NIC passes it on per the routing table. This is normally a router's job.

To give a Linux system this forwarding capability, set the kernel parameter net.ipv4.ip_forward. It reflects the system's current support for forwarding: 0 means IP forwarding is disabled, 1 means it is enabled.
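
To confirm the setting took effect, you can read the parameter back at runtime; a quick check:

```bash
# 0 = forwarding disabled, 1 = enabled
cat /proc/sys/net/ipv4/ip_forward

# Enable it immediately for the running kernel (the sysctl.d file keeps it across reboots)
sysctl -w net.ipv4.ip_forward=1
```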

7. Disable the Firewall and SELinux

```bash
# Stop and disable the firewall
[root@k8s-master1 ~]# systemctl stop firewalld; systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@k8s-node1 ~]# systemctl stop firewalld; systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

[root@k8s-node2 ~]# systemctl stop firewalld; systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

# Disable selinux
[root@k8s-master1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@k8s-node1 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

[root@k8s-node2 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# SELinux config changes require a reboot to take effect
# "Disabled" means SELinux is off
[root@k8s-master1 ~]# getenforce 
Disabled
```

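If you want SELinux out of the way immediately without a reboot, you can also switch it to permissive mode for the current boot; the config file edit above still governs the next boot:

```bash
# Put SELinux into permissive mode right now (lasts until reboot)
setenforce 0
getenforce    # prints "Permissive"
```
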
8. Configure Repo Sources

```bash
# Back up the stock repos
[root@k8s-master1 ~]# mkdir /etc/yum.repos.d/repo.bak
[root@k8s-master1 ~]# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/repo.bak/

[root@k8s-node1 ~]# mkdir /etc/yum.repos.d/repo.bak
[root@k8s-node1 ~]# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/repo.bak/

[root@k8s-node2 ~]#  mkdir /etc/yum.repos.d/repo.bak
[root@k8s-node2 ~]# mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/repo.bak/

# Add the Aliyun base repo
[root@k8s-master1 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2523  100  2523    0     0  34062      0 --:--:-- --:--:-- --:--:-- 34561
[root@k8s-master1 ~]# scp /etc/yum.repos.d/CentOS-Base.repo k8s-node1:/etc/yum.repos.d/
CentOS-Base.repo                                                        100% 2523     5.7MB/s   00:00    
[root@k8s-master1 ~]# scp /etc/yum.repos.d/CentOS-Base.repo k8s-node2:/etc/yum.repos.d/
CentOS-Base.repo

# Configure the domestic (Aliyun) docker repo
[root@k8s-master1 ~]# yum install yum-utils -y
[root@k8s-master1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Loaded plugins: fastestmirror
adding repo from: http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
grabbing file http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
[root@k8s-node1 ~]# yum install yum-utils -y
[root@k8s-node1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-node2 ~]# yum install yum-utils -y
[root@k8s-node2 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Configure the epel repo
[root@k8s-master1 ~]# vi /etc/yum.repos.d/epel.repo 
[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7

[epel-debuginfo]
name=Extra Packages for Enterprise Linux 7 - $basearch - Debug
#baseurl=http://download.fedoraproject.org/pub/epel/7/$basearch/debug
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-debug-7&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[epel-source]
name=Extra Packages for Enterprise Linux 7 - $basearch - Source
#baseurl=http://download.fedoraproject.org/pub/epel/7/SRPMS
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-source-7&arch=$basearch&infra=$infra&content=$contentdir
failovermethod=priority
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
gpgcheck=1

[root@k8s-master1 ~]# scp /etc/yum.repos.d/epel.repo k8s-node1:/etc/yum.repos.d/
epel.repo                                                               100% 1050     2.0MB/s   00:00    
[root@k8s-master1 ~]# scp /etc/yum.repos.d/epel.repo k8s-node2:/etc/yum.repos.d/
epel.repo                                                               100% 1050     2.1MB/s   00:00 


# Configure the Aliyun repo used to install the kubernetes components
[root@k8s-master1 ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0

[root@k8s-master1 ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node1:/etc/yum.repos.d/
kubernetes.repo                                                         100%  129   177.7KB/s   00:00    
[root@k8s-master1 ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node2:/etc/yum.repos.d/
kubernetes.repo                                                         100%  129    22.0KB/s   00:00
```

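After swapping repos, it is worth rebuilding the yum metadata cache on every node so the new sources take effect; a routine step:

```bash
# Drop stale metadata and pre-fetch metadata for the new repos
yum clean all && yum makecache fast
```
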
9. Configure the Time Zone and Time Synchronization

```bash
# 1. Configure the control node as the NTP time server
[root@k8s-master1 ~]# yum install chrony -y
[root@k8s-master1 ~]# systemctl start chronyd && systemctl enable chronyd
[root@k8s-master1 ~]# vi /etc/chrony.conf
###### Comment out the default servers; add the Aliyun time source #########
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp.aliyun.com iburst

###### Allow hosts on the local subnet to use this machine's NTP service ###########
# Allow NTP client access from local network.
allow 192.168.88.0/24


# 2. Configure time sync on the worker nodes
[root@k8s-node1 ~]# yum install chrony -y
[root@k8s-node2 ~]# yum install chrony -y

[root@k8s-node1 ~]# systemctl start chronyd && systemctl enable chronyd
[root@k8s-node2 ~]# systemctl start chronyd && systemctl enable chronyd

[root@k8s-node1 ~]# vi /etc/chrony.conf
[root@k8s-node2 ~]# vi /etc/chrony.conf 
###### Comment out the default servers; sync from the control node #########
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server k8s-master1 iburst

# 3. Restart chronyd so the configuration takes effect
[root@k8s-master1 ~]# systemctl restart chronyd
[root@k8s-node1 ~]# systemctl restart chronyd
[root@k8s-node2 ~]# systemctl restart chronyd

# 4. Check time sync status
[root@k8s-node1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* k8s-master1                   3   6    17    34  +4386ns[  +40us] +/-   26ms

[root@k8s-master1 ~]# date
Tue Dec 24 22:32:16 CST 2024
[root@k8s-node1 ~]# date
Tue Dec 24 22:32:03 CST 2024
[root@k8s-node2 ~]# date
Tue Dec 24 22:32:39 CST 2024
```

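The heading mentions the time zone, but the steps above only cover synchronization. If a node is not already on the expected zone, timedatectl sets it; a sketch assuming Asia/Shanghai (CST), which matches the date output above:

```bash
# Set the system time zone on every node
timedatectl set-timezone Asia/Shanghai
timedatectl status | grep "Time zone"
```
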
10. Enable ipvs

ipvs (IP Virtual Server) implements transport-layer load balancing, often called layer-4 LAN switching, as part of the Linux kernel. ipvs runs on a host and acts as a load balancer in front of a cluster of real servers: it forwards TCP- and UDP-based service requests to the real servers, and makes the real servers' services appear as a virtual service on a single IP address.

```bash
[root@k8s-master1 ~]# vi /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  # Load the module only if it exists in the running kernel
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done

[root@k8s-master1 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26787  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 141432  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          133053  2 ip_vs,nf_nat
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

[root@k8s-master1 ~]# scp /etc/sysconfig/modules/ipvs.modules k8s-node1:/etc/sysconfig/modules/
ipvs.modules                                                            100%  320   148.7KB/s   00:00    
[root@k8s-master1 ~]# scp /etc/sysconfig/modules/ipvs.modules k8s-node2:/etc/sysconfig/modules/
ipvs.modules                                                            100%  320   433.1KB/s   00:00

[root@k8s-node1 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26787  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 141432  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          133053  2 ip_vs,nf_nat
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
[root@k8s-node2 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26787  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_sh               12688  0 
ip_vs_dh               12688  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_wlc              12519  0 
ip_vs_lc               12516  0 
ip_vs                 141432  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          133053  2 ip_vs,nf_nat
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
```

(1) ipvs vs. iptables

kube-proxy supports two modes, iptables and ipvs. ipvs mode was introduced in kubernetes v1.8, went beta in v1.9, and became generally available in v1.11. iptables mode was added back in v1.1 and has been kube-proxy's default since v1.2. Both are built on netfilter, but ipvs uses a hash table, so once the number of services reaches a certain scale, the speed of hash lookups shows its advantage and improves service performance.

So how do ipvs mode and iptables mode differ?

  1. ipvs gives large clusters better scalability and performance
  2. ipvs supports more sophisticated load-balancing algorithms than iptables (least load, least connections, weighted, and so on)
  3. ipvs supports server health checks, connection retries, and similar features (a sketch for switching kube-proxy to ipvs appears below)
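
kube-proxy reads its mode from a configmap. Once the cluster is up (section III), switching from the iptables default to ipvs can be done roughly as follows; a sketch, not performed in this walkthrough:

```bash
# Set mode: "ipvs" in the kube-proxy configuration
kubectl edit configmap kube-proxy -n kube-system

# Recreate the kube-proxy pods so they pick up the new mode
kubectl rollout restart daemonset kube-proxy -n kube-system
```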

11. Install Base Packages

```bash
[root@k8s-master1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm
[root@k8s-node1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm
[root@k8s-node2 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet ipvsadm
```

If you are not comfortable with firewalld, you can install iptables instead.

```bash
# Install
yum install -y iptables-services
# Stop and disable it
service iptables stop && systemctl disable iptables
# Flush the default rules
iptables -F
```

II. Installing the Docker Service

1. Install docker-ce

```bash
[root@k8s-master1 ~]# yum install -y docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io
[root@k8s-master1 ~]# systemctl start docker && systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

[root@k8s-node1 ~]# yum install -y docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io
[root@k8s-node1 ~]# systemctl start docker && systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

[root@k8s-node2 ~]# yum install -y docker-ce-20.10.6 docker-ce-cli-20.10.6 containerd.io
[root@k8s-node2 ~]# systemctl start docker && systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
```

2. Configure Registry Mirrors

```bash
[root@k8s-master1 ~]# vi /etc/docker/daemon.json
{
 "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.dockercn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hubmirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
 "exec-opts": ["native.cgroupdriver=systemd"]
}
# Set docker's cgroup driver to systemd (default is cgroupfs); kubelet uses systemd by default and the two must match
[root@k8s-master1 ~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2024-12-24 23:15:48 CST; 5ms ago

[root@k8s-master1 ~]# scp /etc/docker/daemon.json k8s-node1:/etc/docker/
daemon.json                                                             100%  320    55.8KB/s   00:00    
[root@k8s-master1 ~]# scp /etc/docker/daemon.json k8s-node2:/etc/docker/
daemon.json                                                             100%  320   646.2KB/s   00:00 
[root@k8s-node1 ~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
[root@k8s-node2 ~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
```

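To confirm the cgroup driver switch took effect on each node, docker info reports the active driver; a quick check:

```bash
# Should print: Cgroup Driver: systemd
docker info 2>/dev/null | grep -i "cgroup driver"
```
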
III. Installing k8s

1. Install the Packages Needed to Initialize k8s

kubeadm: the tool used to initialize the k8s cluster.

kubelet: installed on every node in the cluster; it is what starts the Pods.

kubectl: used to deploy and manage applications, inspect all kinds of resources, and create, delete, and update components.

```bash
[root@k8s-master1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-node1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-node2 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6

[root@k8s-master1 ~]# systemctl enable kubelet && systemctl status kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)

[root@k8s-node1 ~]# systemctl enable kubelet && systemctl status kubelet
[root@k8s-node2 ~]# systemctl enable kubelet && systemctl status kubelet
```

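A quick way to confirm all three tools landed at the expected version; a sketch:

```bash
kubeadm version -o short            # v1.20.6
kubelet --version                   # Kubernetes v1.20.6
kubectl version --client --short    # Client Version: v1.20.6
```
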
2. Initialize the k8s Cluster

Upload the offline image bundle needed for initialization to master1, node1, and node2, then load it manually:

```bash
# Upload the offline image bundle
[root@k8s-master1 ~]# yum install -y lrzsz
[root@k8s-master1 ~]# rz

[root@k8s-master1 ~]# du -sh *
4.0K	anaconda-ks.cfg
1.1G	k8simage-1-20-6.tar.gz

[root@k8s-master1 ~]# scp k8simage-1-20-6.tar.gz k8s-node1:/root
k8simage-1-20-6.tar.gz                                                  100% 1033MB  77.4MB/s   00:13    
[root@k8s-master1 ~]# scp k8simage-1-20-6.tar.gz k8s-node2:/root
k8simage-1-20-6.tar.gz                                                  100% 1033MB  79.3MB/s   00:13 

# Load the image bundle
[root@k8s-master1 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@k8s-node1 ~]# docker load -i k8simage-1-20-6.tar.gz
[root@k8s-node2 ~]# docker load -i k8simage-1-20-6.tar.gz
# List the loaded images
[root@k8s-master1 ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED       SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.20.6    9a1ebfd8124d   3 years ago   118MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.20.6    560dd11d4550   3 years ago   116MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.20.6    b93ab2ec4475   3 years ago   47.3MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.20.6    b05d611c1af9   3 years ago   122MB
calico/pod2daemon-flexvol                                         v3.18.0    2a22066e9588   3 years ago   21.7MB
calico/node                                                       v3.18.0    5a7c4970fbc2   3 years ago   172MB
calico/cni                                                        v3.18.0    727de170e4ce   3 years ago   131MB
calico/kube-controllers                                           v3.18.0    9a154323fbf7   3 years ago   53.4MB
registry.aliyuncs.com/google_containers/etcd                      3.4.13-0   0369cf4303ff   4 years ago   253MB
registry.aliyuncs.com/google_containers/coredns                   1.7.0      bfe3a36ebd25   4 years ago   45.2MB
registry.aliyuncs.com/google_containers/pause                     3.2        80d28bedfe5d   4 years ago   683kB

# Initialize the k8s cluster
# Note: --image-repository registry.aliyuncs.com/google_containers overrides the image registry. kubeadm pulls from k8s.gcr.io by default, which is unreachable from here, so the images come from registry.aliyuncs.com/google_containers instead.
[root@k8s-master1 ~]# kubeadm init --kubernetes-version v1.20.6 --apiserver-advertise-address=192.168.88.180 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification

# Output like the following means initialization succeeded
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.88.180:6443 --token ka1nw2.belxjk1hqbycje2r \
    --discovery-token-ca-cert-hash sha256:47b8eb2f92736a73f4358e6e7f286c88f80db91fcae29e81c6055e1fc0b37c71


# The kubeadm join command above adds worker nodes to the cluster; save it, since the token and hash differ for every installation
```

Set up kubectl's config file, which effectively authorizes kubectl: with this credential, kubectl can manage the k8s cluster.

```bash
[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master1 ~]# kubectl get node
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   2m51s   v1.20.6 

# The cluster is still NotReady because no network plugin has been installed yet.
```

3. Scale Out the Cluster

(1) Add the first worker node
```bash
# Print the join command
[root@k8s-master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.88.180:6443 --token dgzhsb.ngeaupe77fd1lr2y     --discovery-token-ca-cert-hash sha256:47b8eb2f92736a73f4358e6e7f286c88f80db91fcae29e81c6055e1fc0b37c71

# Join node1 to the cluster:
[root@k8s-node1 ~]# kubeadm join 192.168.88.180:6443 --token dgzhsb.ngeaupe77fd1lr2y     --discovery-token-ca-cert-hash sha256:47b8eb2f92736a73f4358e6e7f286c88f80db91fcae29e81c6055e1fc0b37c71  --ignore-preflight-errors=SystemVerification

# Check cluster status
[root@k8s-master1 ~]# kubectl get node
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   3m48s   v1.20.6
k8s-node1     NotReady   <none>                 14s     v1.20.6
```

(2) Add the second worker node
```bash
# Join node2 to the cluster:
[root@k8s-node2 ~]# kubeadm join 192.168.88.180:6443 --token dgzhsb.ngeaupe77fd1lr2y     --discovery-token-ca-cert-hash sha256:47b8eb2f92736a73f4358e6e7f286c88f80db91fcae29e81c6055e1fc0b37c71  --ignore-preflight-errors=SystemVerification

# Check cluster status
[root@k8s-master1 ~]# kubectl get node
NAME          STATUS     ROLES                  AGE    VERSION
k8s-master1   NotReady   control-plane,master   5m1s   v1.20.6
k8s-node1     NotReady   <none>                 87s    v1.20.6
k8s-node2     NotReady   <none>                 12s    v1.20.6
```

The output above shows that node1 and node2 have joined the cluster as worker nodes.

(3) Set the worker nodes' ROLES to worker
```bash
[root@k8s-master1 ~]# kubectl label node k8s-node1 node-role.kubernetes.io/worker=worker
node/k8s-node1 labeled
[root@k8s-master1 ~]# kubectl label node k8s-node2 node-role.kubernetes.io/worker=worker
node/k8s-node2 labeled
[root@k8s-master1 ~]# kubectl get node
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   8m20s   v1.20.6
k8s-node1     NotReady   worker                 4m46s   v1.20.6
k8s-node2     NotReady   worker                 3m31s   v1.20.6
```

Note: all nodes are still NotReady, which means the network plugin has not been installed yet.
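
Per the component table at the top, the natural next step is to install Calico so the nodes go Ready. A sketch, assuming a calico.yaml manifest matching the v3.18 images loaded earlier is already on the control node:

```bash
# Apply the Calico manifest (the v3.18 images were pre-loaded above)
[root@k8s-master1 ~]# kubectl apply -f calico.yaml

# Once the calico pods are running, the nodes should report Ready
[root@k8s-master1 ~]# kubectl get node
```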
