Installing a Kubernetes Cluster

Host environment:

CentOS 7

192.168.44.161 k8s-master

192.168.44.154 k8s-node01

192.168.44.155 k8s-node02

Minimum specs:

master: 2 CPUs / 2 GB RAM

一、 Host environment setup

  1. Configure name resolution

     cat >> /etc/hosts  << EOF
     192.168.44.161 k8s-master
     192.168.44.154 k8s-node01
     192.168.44.155 k8s-node02
     EOF
    
  2. Set the hostnames

     hostnamectl set-hostname k8s-master   # on 192.168.44.161
     hostnamectl set-hostname k8s-node01   # on 192.168.44.154
     hostnamectl set-hostname k8s-node02   # on 192.168.44.155
    

    After setting the hostnames, it is best to reboot each machine.

  3. Configure time synchronization

     yum -y install chrony 
     systemctl enable chronyd --now
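
Once chronyd is running, synchronization can be verified, for example:

```shell
# List the configured NTP sources and the current offset from the reference clock
chronyc sources
chronyc tracking
```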
    
  4. Disable the firewall and SELinux

     systemctl disable firewalld --now
     setenforce 0
     sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config 
    
  5. Disable the swap partition

     swapoff -a
     # also comment out the swap entry in /etc/fstab so it stays disabled after reboot
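
The fstab edit can be done non-interactively; a minimal sketch (it comments every uncommented line mentioning swap, so review the file afterwards):

```shell
# Comment out any uncommented fstab line that references swap
sed -ri 's/^([^#].*\bswap\b.*)/#\1/' /etc/fstab
grep -n swap /etc/fstab || echo "no swap entries found"
```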
    

二、 Install a container runtime

  1. Install Docker Engine

    Kubernetes 1.24+ removed the dockershim, and Docker Engine itself does not implement the CRI, so cri-dockerd must be installed to provide CRI support: https://github.com/Mirantis/cri-dockerd

    1.1. Prerequisites

    Enable IPv4 forwarding and let iptables see bridged traffic.

    Run the following commands:

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    overlay
    br_netfilter
    EOF
    sudo modprobe overlay
    sudo modprobe br_netfilter
    # Set the required sysctl parameters; they persist across reboots
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables  = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.ipv4.ip_forward                 = 1
    EOF
    # Apply the sysctl parameters without rebooting
    sudo sysctl --system
    # Confirm that the br_netfilter and overlay modules are loaded:
    
    lsmod | grep br_netfilter
    lsmod | grep overlay
    # Confirm that net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward are set to 1 in your sysctl config:
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

    1.2. Install Docker Engine

    https://docs.docker.com/engine/install/centos/

     curl -fsSL https://get.docker.com -o get-docker.sh
     sh get-docker.sh
    
  2. Enable Docker at boot

     systemctl enable docker --now
    
  3. Change the cgroup driver

    The kubelet and the container runtime must use the same cgroup driver. The kubelet uses systemd, so Docker Engine's cgroup driver needs to be switched to systemd:

    cat > /etc/docker/daemon.json << EOF
    {
    "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
     systemctl daemon-reload
     systemctl restart docker 
    

    Check the cgroup driver:

     docker info | grep -i cgroup
    
    # docker info | grep -i cgroup
    Cgroup Driver: systemd
    Cgroup Version: 1

三、 Install cri-dockerd, the shim between Docker Engine and the CRI

https://github.com/Mirantis/cri-dockerd

https://github.com/Mirantis/cri-dockerd/releases

  1. Install cri-dockerd

     wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.11/cri-dockerd-0.3.11-3.el7.x86_64.rpm
     rpm -ivh cri-dockerd-0.3.11-3.el7.x86_64.rpm
     systemctl enable cri-docker --now
    

    cri-dockerd's default socket file is /run/cri-dockerd.sock; this path will be needed later.
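
A quick way to confirm the shim is up and the socket exists before moving on:

```shell
systemctl is-active cri-docker      # should print "active"
ls -l /run/cri-dockerd.sock         # the socket kubeadm will be pointed at
```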

  2. Configure cri-dockerd
    Only the ExecStart line needs to change, to: ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9

    --network-plugin: the type of network plugin spec to use; CNI here
    --pod-infra-container-image: the image used for the pause container in each Pod. It defaults to the pause image on registry.k8s.io; since this install pulls from Alibaba Cloud's registry, the pause image is specified explicitly.

     vi /usr/lib/systemd/system/cri-docker.service
    
    [Unit]
    Description=CRI Interface for Docker Application Container Engine
    Documentation=https://docs.mirantis.com
    After=network-online.target firewalld.service docker.service
    Wants=network-online.target
    Requires=cri-docker.socket
    
    [Service]
    Type=notify
    ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
    ExecReload=/bin/kill -s HUP $MAINPID
    TimeoutSec=0
    RestartSec=2
    Restart=always
    
    # Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
    # Both the old, and new location are accepted by systemd 229 and up, so using the old location
    # to make them work for either version of systemd.
    StartLimitBurst=3
    
    # Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
    # Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
    # this option work for either version of systemd.
    StartLimitInterval=60s
    
    # Having non-zero Limit*s causes performance problems due to accounting overhead
    # in the kernel. We recommend using cgroups to do container-local accounting.
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    
    # Comment TasksMax if your systemd version does not support it.
    # Only systemd 226 and above support this option.
    TasksMax=infinity
    Delegate=yes
    KillMode=process
  3. Reload and restart cri-docker

     systemctl daemon-reload
     systemctl restart cri-docker
    

    You can check the startup arguments with systemctl status cri-docker:

     systemctl status cri-docker 
    
    [root@k8s-master ~]# systemctl daemon-reload
    [root@k8s-master ~]# systemctl restart cri-docker
    [root@k8s-master ~]# systemctl status cri-docker
    ● cri-docker.service - CRI Interface for Docker Application Container Engine
       Loaded: loaded (/usr/lib/systemd/system/cri-docker.service; enabled; vendor preset: disabled)
       Active: active (running) since 五 2024-03-15 14:01:40 CST; 16s ago
         Docs: https://docs.mirantis.com
     Main PID: 2569 (cri-dockerd)
        Tasks: 7
       Memory: 10.6M
       CGroup: /system.slice/cri-docker.service
               └─2569 /usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
    
    3月 15 14:01:40 k8s-master cri-dockerd[2569]: time="2024-03-15T14:01:40+08:00" level=info msg="Hairpin mode is set to none"
    3月 15 14:01:40 k8s-master cri-dockerd[2569]: time="2024-03-15T14:01:40+08:00" level=info msg="The binary conntrack is not installed, this can cause failures in network connection cleanup."
    3月 15 14:01:40 k8s-master cri-dockerd[2569]: time="2024-03-15T14:01:40+08:00" level=info msg="The binary conntrack is not installed, this can cause failures in network connection cleanup."
    3月 15 14:01:40 k8s-master cri-dockerd[2569]: time="2024-03-15T14:01:40+08:00" level=info msg="Loaded network plugin cni"
    3月 15 14:01:40 k8s-master cri-dockerd[2569]: time="2024-03-15T14:01:40+08:00" level=info msg="Docker cri networking managed by network plugin cni"
    3月 15 14:01:40 k8s-master systemd[1]: Started CRI Interface for Docker Application Container Engine.
    3月 15 14:01:40 k8s-master cri-dockerd[2569]: time="2024-03-15T14:01:40+08:00" level=info msg="Setting cgroupDriver systemd"
    3月 15 14:01:40 k8s-master cri-dockerd[2569]: time="2024-03-15T14:01:40+08:00" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
    3月 15 14:01:40 k8s-master cri-dockerd[2569]: time="2024-03-15T14:01:40+08:00" level=info msg="Starting the GRPC backend for the Docker CRI interface."
    3月 15 14:01:40 k8s-master cri-dockerd[2569]: time="2024-03-15T14:01:40+08:00" level=info msg="Start cri-dockerd grpc backend

四、 Deploy the Kubernetes cluster

  1. Configure the yum repository (using Alibaba Cloud's mirror)

    cat >  /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
            http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
  2. Install the Kubernetes components

     yum -y install kubeadm kubectl kubelet --disableexcludes=kubernetes
    
  • kubeadm: the tool that bootstraps the cluster
  • kubectl: the Kubernetes command-line client
  • kubelet: the node agent that manages the pod lifecycle
  • --disableexcludes=kubernetes: ignore package excludes for every repo except kubernetes
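
After installation, it is worth confirming the three components are present and report matching versions, for example:

```shell
kubeadm version -o short
kubelet --version
kubectl version --client
```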
  3. Enable and start the kubelet

     systemctl enable kubelet --now
    
  4. Create the cluster with kubeadm

    4.1. Generate and edit the default init configuration (the following commands are run on the master node only)

     kubeadm config print init-defaults > init-defaults.yaml
     vim init-defaults.yaml
    
    apiVersion: kubeadm.k8s.io/v1beta3
    bootstrapTokens:
    - groups:
      - system:bootstrappers:kubeadm:default-node-token
      token: abcdef.0123456789abcdef
      ttl: 24h0m0s
      usages:
      - signing
      - authentication
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.44.161 
      bindPort: 6443
    nodeRegistration:
      criSocket: unix:///run/cri-dockerd.sock
      imagePullPolicy: IfNotPresent
      name: node
      taints: null
    ---
    apiServer:
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    kubernetesVersion: 1.28.0
    networking:
      dnsDomain: cluster.local
      serviceSubnet: 10.96.0.0/12
      podSubnet: 10.244.0.0/16
    scheduler: {}
  • advertiseAddress: the address the cluster advertises (the master's address)

  • criSocket: the path to the cri-dockerd socket file

  • imageRepository: the registry to pull images from (Alibaba Cloud here)

  • podSubnet: the pod network CIDR; the network plugin installed later must use the same range
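
Before running init, the edited file can be sanity-checked. Recent kubeadm releases (v1.26+) provide a validate subcommand; any version can do a dry run instead:

```shell
kubeadm config validate --config=init-defaults.yaml
# or, without touching the host:
kubeadm init --config=init-defaults.yaml --dry-run
```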

    4.2. Pull the images using the init configuration file

      kubeadm config images list --config=init-defaults.yaml    # list the required images
      kubeadm config images pull --config=init-defaults.yaml    # pull the images
    

    After the pull completes, check the images with docker images:

      # docker images
    
    [root@k8s-master ~]# docker images
    REPOSITORY                                                        TAG       IMAGE ID       CREATED         SIZE
    registry.aliyuncs.com/google_containers/kube-apiserver            v1.28.0   bb5e0dde9054   7 months ago    126MB
    registry.aliyuncs.com/google_containers/kube-controller-manager   v1.28.0   4be79c38a4ba   7 months ago    122MB
    registry.aliyuncs.com/google_containers/kube-scheduler            v1.28.0   f6f496300a2a   7 months ago    60.1MB
    registry.aliyuncs.com/google_containers/kube-proxy                v1.28.0   ea1030da44aa   7 months ago    73.1MB
    registry.aliyuncs.com/google_containers/etcd                      3.5.9-0   73deb9a3f702   10 months ago   294MB
    registry.aliyuncs.com/google_containers/coredns                   v1.10.1   ead0a4a53df8   13 months ago   53.6MB
    registry.aliyuncs.com/google_containers/pause                     3.9       e6f181688397   17 months ago   744kB

4.3. Initialize the cluster

kubeadm init --config=init-defaults.yaml
# Output like the following means the cluster initialized successfully
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.44.161:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:f3a93042eb8511e27348f2d6c5cde0bb417966542358e81b56b2ade4a8e8b711

To let a non-root user run kubectl, run the following commands (they are also part of the kubeadm init output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
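
Note that the bootstrap token in the join command expires after 24 hours (the ttl in init-defaults.yaml). If it has expired by the time a node joins, print a fresh join command on the master:

```shell
kubeadm token create --print-join-command
```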

4.4. Join the worker nodes to the cluster (use the actual join command printed at the end of your own kubeadm init output)
Because cri-dockerd with Docker is used as the container runtime, the join command needs the extra flag --cri-socket unix:///run/cri-dockerd.sock:

kubeadm join 192.168.44.161:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:f3a93042eb8511e27348f2d6c5cde0bb417966542358e81b56b2ade4a8e8b711 --cri-socket unix:///run/cri-dockerd.sock

4.5. Check the nodes

kubectl get nodes
# kubectl get nodes -o wide
NAME         STATUS     ROLES           AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-node01   NotReady   <none>          5m45s   v1.28.2   192.168.44.154   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://25.0.4
k8s-node02   NotReady   <none>          5m6s    v1.28.2   192.168.44.155   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://25.0.4
node         NotReady   control-plane   19m     v1.28.2   192.168.44.161   <none>        CentOS Linux 7 (Core)   3.10.0-1160.el7.x86_64   docker://25.0.4

You can see that the node named node is the control plane (it shows up as node because the name field in init-defaults.yaml was left at its default), and every node is NotReady. That is because no network plugin has been installed yet, and the coredns pods also depend on one. Use kubectl get pod -A to list all pods.

五、 Install a network plugin

Network plugins are cluster add-ons.

See the full list of add-ons: https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/addons/#networking-and-network-policy

5.1. Install the Flannel network plugin

https://github.com/flannel-io/flannel#deploying-flannel-manually

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

5.2. Check the cluster status

# kubectl get nodes
# kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
k8s-node01   Ready    <none>          12m   v1.28.2
k8s-node02   Ready    <none>          12m   v1.28.2
node         Ready    control-plane   26m   v1.28.2
kubectl get pod -A
# kubectl get pod -A
NAMESPACE      NAME                           READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-4hrnb          1/1     Running   0          108s
kube-flannel   kube-flannel-ds-6zrb9          1/1     Running   0          108s
kube-flannel   kube-flannel-ds-lrt9k          1/1     Running   0          108s
kube-system    coredns-66f779496c-m4dsz       1/1     Running   0          26m
kube-system    coredns-66f779496c-xttg7       1/1     Running   0          26m
kube-system    etcd-node                      1/1     Running   0          26m
kube-system    kube-apiserver-node            1/1     Running   0          27m
kube-system    kube-controller-manager-node   1/1     Running   0          26m
kube-system    kube-proxy-dv8rx               1/1     Running   0          12m
kube-system    kube-proxy-fj26l               1/1     Running   0          26m
kube-system    kube-proxy-mszmh               1/1     Running   0          13m
kube-system    kube-scheduler-node            1/1     Running   0          26m
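
As a final smoke test, you can schedule a throwaway pod and confirm it reaches Running (the nginx image here is just an example):

```shell
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide          # wait for STATUS Running
kubectl delete deployment nginx   # clean up
```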