Steps to Set Up a Kubernetes Cluster on CentOS

Table of Contents

[1. Modify hosts](#1-modify-hosts)

[2. Time synchronization](#2-time-synchronization)

[3. Disable firewalld](#3-disable-firewalld)

[4. Disable SELinux](#4-disable-selinux)

[5. Disable swap](#5-disable-swap)

[6. Bridge network settings](#6-bridge-network-settings)

[7. Install Docker](#7-install-docker)

[8. Install Kubernetes](#8-install-kubernetes)

[9. Node-specific operations](#9-node-specific-operations)

[10. Configure Flannel](#10-configure-flannel)

1. Modify hosts

```bash
cat >> /etc/hosts << EOF
172.16.188.175 master
172.16.188.176 node1
172.16.188.177 node2
EOF

cat /etc/hosts
```
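To confirm the entries took effect, a quick connectivity check can help (a minimal sketch; it assumes the three hosts above are up and reachable from the current node):

```bash
# Ping each name from /etc/hosts once, with a 2-second timeout
for h in master node1 node2; do
  ping -c 1 -W 2 "$h" > /dev/null && echo "$h OK" || echo "$h unreachable"
done
```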

2. Time synchronization

```bash
systemctl start chronyd && systemctl enable chronyd
date
```
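Optionally, verify that chronyd is actually syncing against a time source (chronyc ships with the chrony package):

```bash
# Show the current reference clock and offset
chronyc tracking

# List configured NTP sources and their reachability
chronyc sources
```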

3. Disable firewalld

```bash
systemctl stop firewalld && systemctl disable firewalld
```
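A quick way to double-check the firewall is really off:

```bash
# Should print "inactive" ("unknown" means the unit is not installed)
systemctl is-active firewalld
```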

4. Disable SELinux

```bash
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
```
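Note that setenforce 0 only switches SELinux to permissive mode for the current boot; the sed edit makes the change permanent from the next reboot. To verify both:

```bash
# Expect "Permissive" now (or "Disabled" after a reboot)
getenforce

# Expect SELINUX=disabled
grep '^SELINUX=' /etc/selinux/config
```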

5. 禁用swap

  • 临时关闭,重启失效
bash 复制代码
swapoff -a
  • 永久关闭
bash 复制代码
swapoff -a && sed -i '/ swap / s/^\(.*\)$/# \1/g' /etc/fstab

By default, Kubernetes requires swap to be disabled on every node in the interest of predictable performance; if swap is left on, the kubelet on that node will refuse to run.
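A quick check that swap is really gone:

```bash
# The Swap line should show 0 everywhere
free -h

# No active swap devices should be listed
swapon --show

# The fstab swap entry should now be commented out
grep swap /etc/fstab
```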

6. Bridge network settings

```bash
cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system
```
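The two bridge-nf settings only take effect when the br_netfilter kernel module is loaded. If `sysctl --system` complains about unknown keys, load the module first (a common extra step, not part of the original write-up):

```bash
# Load br_netfilter now, and on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf

# Verify the values stuck
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
```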

7. Install Docker

```bash
# yum-config-manager is provided by the yum-utils package (yum -y install yum-utils if missing)
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum makecache fast

yum -y install docker-ce

systemctl enable docker && systemctl start docker

docker -v
```

Configure daemon.json:

```bash
mkdir -p /etc/docker

tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://ne62ahv7.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com"
  ]
}
EOF

systemctl daemon-reload && systemctl restart docker
```
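Setting the cgroup driver to systemd matters because kubeadm's kubelet defaults expect it. You can confirm Docker picked up the change:

```bash
# Expect "Cgroup Driver: systemd"
docker info | grep -i 'cgroup driver'
```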

8. Install Kubernetes

  • Repository (Aliyun mirror)

```bash
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

cat /etc/yum.repos.d/kubernetes.repo
```
  • Install

```bash
yum -y install kubelet-1.23.17 kubeadm-1.23.17 kubectl-1.23.17

systemctl enable kubelet

# Follow kubelet logs when troubleshooting
journalctl -xefu kubelet
```
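Don't worry if the kubelet restarts in a loop at this point; it has no configuration until `kubeadm init` (or `kubeadm join`) runs. A quick sanity check that all three tools installed at the pinned version:

```bash
kubelet --version
kubeadm version -o short
kubectl version --client --short
```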

9. Node-specific operations (different commands on different nodes)

  • Set the hostname

```bash
# Run the matching line on each host
hostnamectl set-hostname master   # on the master only
hostnamectl set-hostname node1    # on node1 only
hostnamectl set-hostname node2    # on node2 only
```

A reboot is recommended after changing the hostname:

```bash
reboot
```
  • Initialize Kubernetes

```bash
# Run on the master node only
kubeadm init \
--apiserver-advertise-address=172.16.188.175 \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.17 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all
```

Keep the output of this command; the kubeadm join line at the end is needed later:

```
[init] Using Kubernetes version: v1.23.17
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 20.10
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 172.16.188.175]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [172.16.188.175 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [172.16.188.175 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.504881 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: bh0p19.4bdguik2fp3y81mv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.188.175:6443 --token bh0p19.4bdguik2fp3y81mv \
    --discovery-token-ca-cert-hash sha256:2327aff6edc65a0ccf11d09ffed3890cf560b56ddde13c47a88026c6e525a0c9
```

  • Environment configuration

For a regular user:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

For the root user:

```bash
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
```
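kubectl should now be able to reach the API server. Until a pod network add-on is installed (next section), the master will report NotReady; that is expected:

```bash
# Expect the master in NotReady state at this stage
kubectl get nodes
```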

10. Configure Flannel

  • Master node only

Flannel (/ˈflænl/) and Calico (/ˈkælɪkoʊ/) are both Kubernetes networking components that handle container communication between nodes. Flannel assigns each node its own pod subnet, so containers on different nodes can reach one another, giving the cluster a flat, cluster-wide network.

Download:

```bash
cd /opt
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```
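Before applying the manifest, it's worth confirming that its Network value matches the --pod-network-cidr passed to kubeadm init (10.244.0.0/16 here, which is also Flannel's default):

```bash
# Expect "Network": "10.244.0.0/16"
grep '"Network"' /opt/kube-flannel.yml
```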

Apply:

```bash
kubectl apply -f /opt/kube-flannel.yml
```
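After a minute or two the Flannel DaemonSet pods should be Running and the master should flip to Ready (the namespace varies by manifest version, hence the --all-namespaces below):

```bash
kubectl get pods --all-namespaces | grep flannel
kubectl get nodes
```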
  • Other nodes

Set the hostname:

```bash
# Already done earlier; skip if so
hostnamectl set-hostname node1
```

Run the kubeadm join command that the master printed during initialization:

```bash
kubeadm join 172.16.188.175:6443 --token bh0p19.4bdguik2fp3y81mv \
--discovery-token-ca-cert-hash sha256:2327aff6edc65a0ccf11d09ffed3890cf560b56ddde13c47a88026c6e525a0c9
```

The token above is valid for 24 hours by default. Once it expires, run kubeadm token create --print-join-command on the master to generate a new one.
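For example, on the master:

```bash
# List existing bootstrap tokens and their expiry
kubeadm token list

# Mint a fresh token and print the full join command
kubeadm token create --print-join-command
```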

Environment configuration:

```bash
echo "export KUBECONFIG=/etc/kubernetes/kubelet.conf" >> /etc/profile
source /etc/profile
```

Verify:

```bash
kubectl get no
```
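Once all nodes have joined and Flannel is running, the output should look roughly like this (an illustration only; names, ages, and exact versions will differ):

```
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   10m   v1.23.17
node1    Ready    <none>                 2m    v1.23.17
node2    Ready    <none>                 2m    v1.23.17
```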

The material above is summarized from: 如何在Centos7中安装Kubernetes (Bilibili).

