k8s single-machine installation with kubeadm

Last edited: 2024/3/26

Single-node setup

  1. Check whether kubectl, kubelet, and kubeadm are already installed by typing the commands directly; if the shell reports that the command is not found, you are fine.

    bash
    kubectl
    kubelet
    kubeadm

    If they are installed, remove them with apt remove and snap remove:

    bash
    sudo apt remove kubectl kubelet kubeadm
    sudo snap remove kubectl kubelet kubeadm
  2. Turn off the firewall

    Check the firewall status; inactive means it is not enabled.

    bash
    sudo ufw status

    Keep the firewall from starting at boot; the change takes effect after a reboot.

    bash
    sudo ufw disable
  3. Make sure Docker is installed and that its cgroup driver is configured correctly, for example:

    Configure Docker:

    bash
    sudo mkdir -p /etc/docker
    sudo vi /etc/docker/daemon.json
    json
    #{
    #  "registry-mirrors": ["https://2m9jza5s.mirror.aliyuncs.com"],
    #  "insecure-registries": ["localhost:32000"],
    #  "exec-opts": [ "native.cgroupdriver=systemd" ],
    #  "data-root": "/data/wzh/docker/image",
    #  "default-runtime": "nvidia",
    #    "runtimes": {
    #        "nvidia": {
    #            "path": "/usr/bin/nvidia-container-runtime",
    #            "runtimeArgs": []
    #        }
    #    }
    #}
    {
      "registry-mirrors": ["https://2m9jza5s.mirror.aliyuncs.com"],
      "insecure-registries": ["localhost:32000"],
      "exec-opts": [ "native.cgroupdriver=systemd" ],
      "data-root": "/data/wzh/docker/image"
    }

    The registry-mirrors entry and the systemd cgroup driver under exec-opts are required; data-root sets the directory where images are stored. Note that daemon.json must be valid JSON, so do not copy comments or trailing commas into it. Replace "https://???.mirror.aliyuncs.com" with your own mirror address (see the link).

    bash
    sudo systemctl restart docker
    sudo systemctl restart kubelet   # only matters once kubelet is installed (step 4)
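
    To confirm the cgroup driver change took effect, one quick check (docker info should report "Cgroup Driver: systemd"):

    bash
    sudo docker info | grep -i "cgroup driver"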
  4. Install kubectl, kubelet, and kubeadm

    bash
    # check the kubernetes-cni package as well
    sudo apt install -y kubelet=1.22.7-00 kubectl=1.22.7-00 kubeadm=1.22.7-00
    # apt list kubernetes-cni -a   # lists the available versions
    # sudo journalctl -u kubelet   # check kubelet logs
    # systemctl status kubelet     # check kubelet status
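
    To keep a later apt upgrade from silently bumping these packages (see the warning in step 6), the versions can be pinned; a minimal sketch:

    bash
    sudo apt-mark hold kubelet kubeadm kubectl kubernetes-cni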
  5. Disable swap

    bash
    vim /etc/default/kubelet
    # add the following line
    KUBELET_EXTRA_ARGS="--fail-swap-on=false"
    systemctl daemon-reload && systemctl restart kubelet
    bash
    vi /etc/fstab

    Comment out the line containing /swap.img.
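
    Editing /etc/fstab only keeps swap off across reboots; to turn it off in the current session as well (a common companion step, not strictly required here since --fail-swap-on=false is set):

    bash
    sudo swapoff -a
    free -h   # the Swap line should now show 0B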

  6. The kubeadm/Kubernetes version should stay below 1.24: starting with Kubernetes 1.24 the built-in Docker support (dockershim) was removed in favor of containerd, so be careful never to run apt upgrade later, otherwise all the steps above have to be redone.

  7. If anything goes wrong, reset first:

    bash
    sudo kubeadm reset
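
    kubeadm reset does not clean up everything; if a re-init still misbehaves, the leftover CNI configuration and the old kubeconfig may need to be removed by hand (a hedged sketch, adjust to your setup):

    bash
    sudo rm -rf /etc/cni/net.d
    rm -f $HOME/.kube/config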
  8. init

    bash
    # --pod-network-cidr should match the "Network" in the flannel config below (10.244.0.0/16)
    sudo kubeadm init --kubernetes-version=v1.22.7 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap --apiserver-advertise-address=0.0.0.0
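
    If init hangs while pulling images, pre-pulling them with the same mirror and version may help narrow the problem down:

    bash
    sudo kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.22.7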
  9. When init succeeds it prints output like the following; if it hangs, go back and check whether every step above was followed:

    bash
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 10.181.8.94:6443 --token 0desqq.a4oq0rwqyursqah9 \
            --discovery-token-ca-cert-hash sha256:7e181cd0f0a435adf7746b17b09b10dba5c9d83936e92fffdc1e67cbf4a9cc06

    Configure kubectl access:

    bash
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
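
    A quick sanity check at this point; the node is expected to report NotReady until the network plugin from the next section is applied:

    bash
    kubectl get nodes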
  10. After init succeeds, check with kubectl:

bash
$ kubectl get pod -A

At this point two of the pods (the two coredns pods) are still not Running; they stay Pending until a pod network is deployed.
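
To see why a pod is stuck, kubectl describe is the usual tool (substitute one of the coredns pod names from your own kubectl get pod -A output for the hypothetical name below):

bash
kubectl describe pod -n kube-system coredns-xxxx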

  1. Configure the pod network

Create a file flannel.yaml with the following content:

yaml
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.0
       #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: lizhenliang/flannel:v0.11.0-amd64
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: lizhenliang/flannel:v0.11.0-amd64
       #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.5
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

After the file is created, apply it with kubectl apply -f flannel.yaml; the commands are sketched just below. The apply itself returns quickly, but the pods take a little while to start.
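
A minimal apply-and-watch sequence (-w simply watches pod status until the flannel and coredns pods go Running):

bash
kubectl apply -f flannel.yaml
kubectl get pods -n kube-flannel -w

Once everything settles, kubectl get pod -A should look like this: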

bash
wzh@chen:~$ kubectl get pod -A
NAMESPACE      NAME                           READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-xqpqb          1/1     Running   0          11h
kube-system    coredns-7f6cbbb7b8-w5lp8       1/1     Running   0          12h
kube-system    coredns-7f6cbbb7b8-xmps6       1/1     Running   0          12h
kube-system    etcd-chen                      1/1     Running   0          12h
kube-system    kube-apiserver-chen            1/1     Running   0          12h
kube-system    kube-controller-manager-chen   1/1     Running   0          12h
kube-system    kube-proxy-c5tks               1/1     Running   0          12h
kube-system    kube-scheduler-chen            1/1     Running   0          12h
wzh@chen:~$ kubectl get nodes
NAME   STATUS   ROLES                  AGE   VERSION
chen   Ready    control-plane,master   13h   v1.22.7

Now that the node is Ready, the master can schedule ordinary pods once all of its taints are removed (a taint is removed by replacing the ":<effect>" suffix of its key with "-"). The commands below remove the taints; kubectl describe node can be used to check whether any taints remain:

bash
$ kubectl taint nodes --all node-role.kubernetes.io/master-
$ kubectl taint nodes --all foo-
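
To double-check, listing the Taints field of the node(s) shows whether anything is left:

bash
kubectl describe nodes | grep -i taints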