Official documentation:
Installing kubeadm | Kubernetes
Remember to disable the firewall and SELinux and to turn off swap.
Each machine needs at least 2 CPU cores and 4 GB of RAM.
Official documentation for deploying a highly available Kubernetes cluster with kubeadm:
Creating Highly Available Clusters with kubeadm | Kubernetes
You need to install the following packages on every machine:
- kubeadm: the command used to bootstrap the cluster (every machine); on the master it initializes the cluster, on the worker nodes it joins them to the cluster.
- kubelet: runs on every node in the cluster and starts Pods and containers (every node).
- kubectl: the command-line tool used to talk to the cluster (master).
I. Full installation procedure
Prepare three machines:
192.168.199.200 k8s-master
192.168.199.201 k8s-node1
192.168.199.202 k8s-node2
Set the hostname on each machine and add local name resolution so that all machines can resolve each other:
vim /etc/hosts
192.168.199.200 k8s-master
192.168.199.201 k8s-node1
192.168.199.202 k8s-node2
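A quick way to verify the mutual resolution (using the hostnames defined above; an optional check, not part of the original steps):
# run on k8s-master; each hostname should resolve and answer
ping -c 1 k8s-node1
ping -c 1 k8s-node2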
System configuration on all machines
1. Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
2. Disable SELinux for the running system:
setenforce 0
3. Edit /etc/selinux/config (also reachable as /etc/sysconfig/selinux) and set SELINUX=disabled so the change survives a reboot:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux
4. Comment out the swap entry in /etc/fstab:
sed -i 's/.*swap.*/#&/' /etc/fstab
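The fstab change only takes effect after a reboot; to turn swap off immediately as well, you can also run:
# disable all currently active swap devices right now
swapoff -a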
II. Deploying Kubernetes with kubeadm
1. Install kubeadm, kubelet and kubectl on all nodes
First configure the yum repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
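To confirm the repository works and that the 1.19.1 packages are visible (an optional check, not part of the original steps):
yum makecache
yum list kubeadm --showduplicates | grep 1.19.1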
Reminder: if the download fails, disable GPG checking (gpgcheck=0 and repo_gpgcheck=0, as above) and try again.
- kubelet: the Kubernetes node agent; it runs on every node and makes sure containers are running in Pods.
- kubeadm: a command-line tool for bootstrapping and managing Kubernetes clusters.
- kubectl: the Kubernetes command-line tool for interacting with the cluster.
- ipvsadm: a user-space utility for setting up, maintaining and inspecting the IP Virtual Server (IPVS) tables in the Linux kernel; IPVS can provide service load balancing for Kubernetes.
2. Install the kubelet, kubeadm, kubectl and ipvsadm packages:
yum install -y kubelet-1.19.1-0.x86_64 kubeadm-1.19.1-0.x86_64 kubectl-1.19.1-0.x86_64 ipvsadm
Load the IPVS-related kernel modules:
modprobe ip_vs && modprobe ip_vs_rr && modprobe ip_vs_wrr && modprobe ip_vs_sh && modprobe nf_conntrack_ipv4
3. Edit /etc/rc.local and add the module loads so they are re-applied at boot:
vim /etc/rc.local
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
4. Make rc.local executable:
chmod +x /etc/rc.local
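On systemd-based systems a modules-load.d drop-in is an alternative to rc.local; a minimal sketch (the file name ipvs.conf is just an example):
cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemd-modules-load.service reads this file and loads the listed modules at every boot.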
5. Configure the bridge and forwarding kernel parameters, otherwise errors may occur later:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
Apply the configuration:
sysctl --system
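To confirm the values were applied (an optional check; if the two bridge keys report an error, see step 6 below):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables vm.swappiness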
6. If sysctl complains about net.bridge.bridge-nf-call-iptables, load the br_netfilter module and re-apply:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
7. Check that the IPVS modules loaded successfully:
lsmod | grep ip_vs
8. Configure the kubelet cgroup driver and pause image:
cat >/etc/sysconfig/kubelet<<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs --pod-infra-container-image=k8s.gcr.io/pause:3.2"
EOF
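The kubelet cgroup driver must match the one used by the container runtime. Assuming Docker is the runtime here, you can check its driver before settling on --cgroup-driver=cgroupfs:
docker info 2>/dev/null | grep -i 'cgroup driver'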
9. Enable and start kubelet:
systemctl daemon-reload
systemctl enable kubelet && systemctl restart kubelet
systemctl status kubelet
You will see errors in the status output; this is expected, because the kube-apiserver has not yet been initialized and started on the master.
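To see the actual errors, the kubelet logs are in the systemd journal (an optional check):
journalctl -u kubelet --no-pager | tail -n 20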
III. Initializing the master node
kubeadm init --kubernetes-version=v1.19.1 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.199.200 --ignore-preflight-errors=Swap
Explanation of the flags:
--apiserver-advertise-address=192.168.199.200   the master's IP address.
--kubernetes-version=v1.19.1   adjust this to the version you actually installed.
--pod-network-cidr=10.244.0.0/16   the network range we assign to Pods.
--ignore-preflight-errors=Swap   ignore swap-related preflight errors (we already disabled swap, so the flag is optional either way).
Note: double-check that swap really is disabled before running the command.
1. Re-run the initialization if necessary until the master completes it successfully.
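Once kubeadm init succeeds, its output also asks you to configure kubectl for your user on the master; without this step kubectl cannot talk to the cluster:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config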
2. Configure the network plug-in (flannel). Save the manifest below as kube-flannelv1.19.1.yaml, and note that --iface=ens33 in the DaemonSet must match the name of your host's network interface.
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.25.6
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.25.6
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --iface=ens33
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
3. Create the flannel network:
kubectl apply -f kube-flannelv1.19.1.yaml
Check which pod has been scheduled to which node:
[root@k8s-master ~]# kubectl get pod -n kube-system -o wide
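With this manifest the flannel pods themselves run in the kube-flannel namespace, so to watch them specifically:
kubectl get pod -n kube-flannel -o wide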
List the nodes:
[root@master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 94m v1.19.1
IV. Joining all worker nodes to the cluster
1. Enable IP forwarding:
sysctl -w net.ipv4.ip_forward=1
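sysctl -w only changes the running kernel; to keep IP forwarding enabled across reboots you could also persist it in the k8s.conf file created earlier (an optional step, not in the original):
echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.d/k8s.conf
sysctl --system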
2. Run the following on every worker node; this is the join command returned when the master was initialized successfully:
kubeadm join 192.168.199.200:6443 --token 2eo635.zefoh7sqrndzdju6 --discovery-token-ca-cert-hash sha256:20fe16459d5d0f79025be51f7a800af01f7aa1fb5bd3e33b4eb37328facaff07
Notes:
If the join fails because ports are already in use, reset the node first:
[root@k8s-node2 ~]# kubeadm reset
then run the join command again.
If a node keeps showing NotReady on the master after joining, restart kubelet on that node:
systemctl restart kubelet
V. Operations on the master
List all tokens: kubeadm token list
Generate a new token: kubeadm token create
Compute the discovery-token CA cert hash from /etc/kubernetes/pki/ca.crt:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
List pods: kubectl get pods -n kube-system
Inspect an abnormal pod: kubectl describe pod <pod-name> -n kube-system
Delete an abnormal pod: kubectl delete pod <pod-name> -n kube-system
List pods again: kubectl get pods -n kube-system
List nodes: kubectl get nodes
1. Drain the pods off k8s-node1 (run on the master):
[root@k8s-master ~]# kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
After draining, scheduling is disabled on that node. To allow scheduling on it again:
[root@k8s-master ~]# kubectl uncordon k8s-node1
2. Delete the node (run on the master):
[root@k8s-master ~]# kubectl delete node k8s-node1
3. Reset the node (run on the node itself, i.e. the one that was just deleted):
[root@k8s-node1 ~]# kubeadm reset
Note 1: the master also needs to be drained, deleted and reset. This tripped me up badly: the first time I skipped draining and deleting the master, everything looked normal afterwards but CoreDNS simply refused to work, and it cost me a whole day. Don't try it.
Note 2: after kubeadm reset on the master, the following files also need to be removed:
rm -rf /var/lib/cni/ $HOME/.kube/config
Joining the cluster in one step:
Method 1:
[root@k8s-master ~]# kubeadm token create --print-join-command
This automatically generates a join command; run it on whichever node you want to add:
kubeadm join 192.168.229.11:6443 --token plw7zn.f7pnvhok89fc0og5 --discovery-token-ca-cert-hash sha256:20fe16459d5d0f79025be51f7a800af01f7aa1fb5bd3e33b4eb37328facaff07
Method 2:
token=$(kubeadm token generate)
kubeadm token create $token --print-join-command --ttl=0
--ttl=0 means the token never expires,
so this method generates a join command that stays valid indefinitely.