Installing Kubernetes 1.24+ with kubeadm and cri-dockerd

Last edited: 2024/3/26

Applies to Kubernetes versions after 1.24.

Single-node setup

  1. Check whether kubectl, kubelet, and kubeadm are already installed by running each command directly; a "command not found" error means nothing is installed, which is what you want here.

    ```bash
    kubectl
    kubelet
    kubeadm
    ```

    If they were installed before, run kubeadm reset first, then remove them with apt remove and snap remove:

    ```bash
    sudo kubeadm reset
    sudo apt remove kubectl kubelet kubeadm
    sudo snap remove kubectl kubelet kubeadm
    ```
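
    A compact way to run the same check (my own addition, not from the original post):

    ```bash
    # Prints a path for each tool that is still installed; no output means nothing is left
    command -v kubectl kubelet kubeadm
    ```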
  2. Disable the firewall

    Check the firewall status; "inactive" means it is not enabled:

    ```bash
    sudo ufw status
    ```

    Keep the firewall from starting at boot; the change takes effect after a reboot:

    ```bash
    sudo ufw disable
    ```
  3. Make sure Docker is installed and the cgroup driver is set correctly, for example:

    Configure Docker:

    ```bash
    sudo mkdir -p /etc/docker
    sudo vi /etc/docker/daemon.json
    ```

    JSON does not allow comments or trailing commas, so keep daemon.json strictly valid. The registry-mirrors and exec-opts entries are the required ones here; insecure-registries is only needed for a local registry, and data-root just relocates Docker's storage directory:

    ```json
    {
      "registry-mirrors": ["https://2m9jza5s.mirror.aliyuncs.com"],
      "insecure-registries": ["localhost:32000"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "data-root": "/data/wzh/docker/image"
    }
    ```

    On a GPU node my fuller config also sets "default-runtime": "nvidia" plus a "runtimes" entry pointing at /usr/bin/nvidia-container-runtime; add those keys to the same file if you need them.

    Replace the mirror URL ("https://???.mirror.aliyuncs.com") with your own accelerator address (see the link).

    ```bash
    sudo systemctl restart docker
    ```
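
    To confirm Docker picked up the systemd cgroup driver after the restart (my own check, not part of the original steps):

    ```bash
    # Should print: Cgroup Driver: systemd
    docker info | grep -i "cgroup driver"
    ```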
  4. Install cri-dockerd

    The following applies to versions after 1.24.

    Go to https://github.com/Mirantis/cri-dockerd/releases and download the cri-dockerd package that matches your system.

    My machine runs Ubuntu 20.04 (focal), so I downloaded cri-dockerd_0.3.12.3-0.ubuntu-focal_amd64.deb.

    Then install it with apt, pointing at the local file with ./ :

    ```bash
    sudo apt install ./cri-dockerd_0.3.12.3-0.ubuntu-focal_amd64.deb
    ```

    Then enable and start cri-dockerd:

    ```bash
    sudo systemctl daemon-reload
    sudo systemctl enable cri-docker.socket
    sudo systemctl start cri-docker.socket cri-docker
    cri-dockerd --version
    ls -al /var/run/cri-dockerd.sock
    ```
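
    As an extra sanity check (my own addition), confirm both units report active before moving on:

    ```bash
    # Both lines should print "active"
    systemctl is-active cri-docker.socket cri-docker.service
    ```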
  5. Install kubectl, kubelet, and kubeadm

    ```bash
    # check the kubernetes-cni package as well
    sudo apt install -y kubelet=1.28.2-00 kubectl=1.28.2-00 kubeadm=1.28.2-00
    # apt list kubernetes-cni -a       # list the available versions
    # sudo journalctl -u kubelet       # view kubelet logs
    # systemctl status kubelet         # check kubelet status
    ```
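
    The install above assumes an apt repository carrying the 1.28.2-00 packages is already configured. If it is not, one commonly used setup (my assumption, using the Aliyun mirror of the legacy Kubernetes repo; substitute your own) looks like this:

    ```bash
    # Add the Aliyun Kubernetes apt mirror (assumed; adjust to your preferred repository)
    curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
    echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | \
      sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt update
    # Pin the versions so a routine apt upgrade does not move them
    sudo apt-mark hold kubelet kubeadm kubectl
    ```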
  6. Disable swap

    ```bash
    sudo vi /etc/default/kubelet
    # add the following line
    KUBELET_EXTRA_ARGS="--fail-swap-on=false"
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet
    ```

    ```bash
    sudo vi /etc/fstab
    ```

    Comment out the line that contains `/swap.img`.
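
    Editing /etc/fstab only stops swap from coming back at the next boot; to turn it off in the running session as well (my own addition), run:

    ```bash
    sudo swapoff -a
    free -h   # the Swap line should now show 0B
    ```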
  7. If anything goes wrong, reset first:

    ```bash
    sudo kubeadm reset
    rm -rf ~/.kube
    sudo rm -rf /etc/cni/net.d
    ```
  8. Configure containerd

    ```bash
    sudo vi /etc/containerd/config.toml
    ```

    If you see this line:

    ```toml
    disabled_plugins = ["cri"]
    ```

    either comment it out with # or remove "cri" so that it reads:

    ```toml
    disabled_plugins = []
    ```

    Then restart the container runtime:

    ```bash
    sudo systemctl restart containerd
    ```
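
    If you prefer a non-interactive edit, a sketch of my own (assuming the default config layout):

    ```bash
    # Blank out disabled_plugins so the CRI plugin is no longer disabled
    sudo sed -i 's/^disabled_plugins.*/disabled_plugins = []/' /etc/containerd/config.toml
    sudo systemctl restart containerd
    ```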
  9. Configure the pause image location

    Stop the cri-docker service: sudo systemctl stop cri-docker

    Edit the unit file: vi /usr/lib/systemd/system/cri-docker.service

    Find the ExecStart line and append --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 at the end:

    ```bash
    ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
    ```

    Reload the unit files: sudo systemctl daemon-reload

    Start the cri-docker service: sudo systemctl start cri-docker
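
    To confirm the new flag was picked up (my own check), inspect the running unit and optionally pre-pull the pause image:

    ```bash
    # The ExecStart line should now include the aliyuncs pause image
    systemctl cat cri-docker.service | grep ExecStart
    docker pull registry.aliyuncs.com/google_containers/pause:3.9
    ```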

  10. Initialize with kubeadm

```bash
sudo kubeadm init --kubernetes-version=v1.28.2 --apiserver-advertise-address=0.0.0.0 --image-repository registry.aliyuncs.com/google_containers --ignore-preflight-errors=Swap --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///var/run/cri-dockerd.sock
```
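
Pulling the control-plane images ahead of time makes init failures easier to diagnose; a sketch (my addition) using the same repository and CRI socket as above:

```bash
sudo kubeadm config images pull \
  --kubernetes-version=v1.28.2 \
  --image-repository registry.aliyuncs.com/google_containers \
  --cri-socket unix:///var/run/cri-dockerd.sock
```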
  1. If init fails, debug with:

    ```bash
    sudo journalctl -xeu kubelet
    ```
  2. When init succeeds, output like the following indicates success:

    ```bash
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 10.181.8.94:6443 --token 0desqq.a4oq0rwqyursqah9 \
            --discovery-token-ca-cert-hash sha256:7e181cd0f0a435adf7746b17b09b10dba5c9d83936e92fffdc1e67cbf4a9cc06
    ```

    Configure kubectl access for your user:

    ```bash
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    ```
  3. After init succeeds, check the pods with kubectl:

```bash
kubectl get pod -A
```

At this point two of the pods (the coredns pods) will still not be running, because no pod network has been deployed yet.

  1. Deploy the pod network

Create a file named flannel.yaml with the following content:

```yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
       #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
```

After creating the file, run kubectl apply -f flannel.yaml. The apply itself returns quickly, but the pods take a little while to start; after a bit you will see something like:

```bash
wzh@chen:~$ kubectl get pod -A
NAMESPACE      NAME                           READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-xqpqb          1/1     Running   0          11h
kube-system    coredns-7f6cbbb7b8-w5lp8       1/1     Running   0          12h
kube-system    coredns-7f6cbbb7b8-xmps6       1/1     Running   0          12h
kube-system    etcd-chen                      1/1     Running   0          12h
kube-system    kube-apiserver-chen            1/1     Running   0          12h
kube-system    kube-controller-manager-chen   1/1     Running   0          12h
kube-system    kube-proxy-c5tks               1/1     Running   0          12h
kube-system    kube-scheduler-chen            1/1     Running   0          12h
wzh@chen:~$ kubectl get nodes
NAME   STATUS   ROLES                  AGE   VERSION
chen   Ready    control-plane,master   13h   v1.28.2
```
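
To watch the pods come up instead of re-running get pod by hand (my own addition):

```bash
# Follow the flannel rollout, then check that the coredns pods are Running
kubectl -n kube-flannel get pods -w
kubectl -n kube-system get pods -l k8s-app=kube-dns
```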

Now the master can also schedule ordinary workloads once all of its taints are removed (to remove a taint, replace its trailing ":<effect>" suffix with "-"). The commands below remove the taints; use kubectl describe to check whether any remain:

```bash
# on 1.24+ the control-plane taint key is node-role.kubernetes.io/control-plane
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl taint nodes --all node-role.kubernetes.io/master-
# remove any other taint key (e.g. foo) the same way
kubectl taint nodes --all foo-
```
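
To confirm no NoSchedule taint is left (a quick check I added, using the node name from the output above):

```bash
kubectl describe node chen | grep -i taints
# or list the taints of every node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
```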