Automated Kubernetes Cluster Deployment with Ansible

1. Machine Environment

1.1. Machine Information

| IP              | hostname     | application                                                                                     | CPU | Memory |
|-----------------|--------------|-------------------------------------------------------------------------------------------------|-----|--------|
| 192.168.204.129 | k8s-master01 | etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, containerd  | 2C  | 4G     |
| 192.168.204.130 | k8s-worker01 | etcd, kubelet, kube-proxy, containerd                                                             | 2C  | 4G     |
| 192.168.204.131 | k8s-worker02 | etcd, kubelet, kube-proxy, containerd                                                             | 2C  | 4G     |

1.2. IP Address Planning

Calico is used as the CNI network plugin in Kubernetes. Besides the node network shown above, the following Pod and Service segments are planned:

| Network segment | CIDR          |
|-----------------|---------------|
| Pod network     | 172.16.0.0/16 |
| Service network | 10.96.0.0/16  |

The Kubernetes version installed is 1.28.5, the Calico version is 3.26.4, and the container runtime is containerd.

If you need a different Kubernetes version, the script below has to be modified (see the example after the following list):

  • Change the version in the Kubernetes apt repository definition
  • Change the version variable values defined in the master and worker installation plays
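
For example, a sketch of what moving to a hypothetical 1.29.x release could look like, based on the repository URL and version variables used later in install-kubernetes.yml:

    # Point the apt repository at the v1.29 package channel instead of v1.28
    echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/deb/ /" \
      | tee /etc/apt/sources.list.d/kubernetes.list

    # Then bump the version variables in the playbook accordingly, e.g.
    #   kubernetes_version: "1.29.0"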

If you need a different CNI plugin or a different Calico version, the network-plugin part of the script also has to be modified.

2. Installing and Configuring Ansible

2.1. Installing Ansible

  • Install the Ansible package

    apt update && apt install ansible -y

  • Create the Ansible configuration directory and inventory file

    mkdir -p /etc/ansible && touch /etc/ansible/hosts
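
Optionally, a minimal ansible.cfg can pin the inventory path and disable interactive SSH host-key prompts on first contact (an optional sketch; the defaults also work, since /etc/ansible/hosts is Ansible's default inventory):

    cat > /etc/ansible/ansible.cfg <<EOF
    [defaults]
    inventory = /etc/ansible/hosts
    host_key_checking = False
    EOF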

  • Configure the /etc/ansible/hosts file

    [master]
    192.168.204.129

    [worker]
    192.168.204.130
    192.168.204.131

  • Generate an SSH key for passwordless login; do not enter a passphrase during this process

    ssh-keygen -t rsa
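
The key can also be generated non-interactively to skip the prompts (optional; standard ssh-keygen flags):

    ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa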

  • Distribute the public key to all nodes

    ssh-copy-id root@192.168.204.129
    ssh-copy-id root@192.168.204.130
    ssh-copy-id root@192.168.204.131
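
A quick check that passwordless login works (optional; the hostname comes from the table in section 1.1):

    ssh root@192.168.204.130 hostname    # should print k8s-worker01 without prompting for a password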

  • Configure /etc/hosts

    cat >> /etc/hosts <<EOF
    192.168.204.129 k8s-master01
    192.168.204.130 k8s-worker01
    192.168.204.131 k8s-worker02
    EOF

2.2. Testing Ansible Connectivity

  • Write a test playbook

    cat > test_nodes.yml <<EOF
    - name: test nodes
      hosts:
        - master
        - worker
      tasks:
        - name: Ping nodes
          ping:
    EOF
  • Run the test playbook

    ansible-playbook test_nodes.yml
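
Connectivity can also be checked without a playbook, using Ansible's ad-hoc ping module; every host should answer with "pong":

    ansible all -m ping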

3. Writing the Kubernetes Deployment Script

3.1. The install-kubernetes.yml Playbook

  • The contents of install-kubernetes.yml are as follows:
---
- name: Perform Basic Config
  hosts:
    - master
    - worker
  become: yes
  tasks:
    - name: Check if fstab contains swap
      shell: grep -q "swap" /etc/fstab
      register: fstab_contains_swap
      failed_when: false

    - name: Temp Disable swap
      command: swapoff -a
      when: fstab_contains_swap.rc == 0

    - name: Permanent Disable swap
      shell: sed -i 's/.*swap.*/#&/g' /etc/fstab
      when: fstab_contains_swap.rc == 0

    - name: Disable Swap unit-files
      shell: |
        swap_units=$(systemctl list-unit-files | grep swap | awk '{print $1}')
        for unit in $swap_units; do
          systemctl mask $unit
        done

    - name: Stop UFW service
      service:
        name: ufw
        state: stopped

    - name: Disable UFW at boot
      service:
        name: ufw
        enabled: no

    - name: Set timezone
      shell: timedatectl set-timezone Asia/Shanghai

    - name: Set timezone permanently
      shell: |
        cat >> /etc/profile << EOF
        TZ='Asia/Shanghai'; export TZ
        EOF

    - name: Create .hushlogin file in $HOME
      file:
        path: "{{ ansible_env.HOME }}/.hushlogin"
        state: touch

    - name: Install required packages
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg
          - lsb-release

    - name: Add Aliyun Docker GPG key
      shell: curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -

    - name: Add Aliyun Docker repository
      shell: echo "deb [arch=amd64 signed-by=/etc/apt/trusted.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker-ce.list

    - name: Add Aliyun Kubernetes GPG key
      shell: mkdir -p /etc/apt/keyrings && curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

    - name: Add Aliyun Kubernetes repository
      shell: echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list

    - name: Set apt sources to use Aliyun mirrors
      shell: sed -i 's#cn.archive.ubuntu.com#mirrors.aliyun.com#g' /etc/apt/sources.list

    - name: Update apt cache
      apt:
        update_cache: yes

    - name: Load br_netfilter on start
      shell: echo "modprobe br_netfilter" >> /etc/profile

    - name: Load br_netfilter
      shell: modprobe br_netfilter

    - name: Update sysctl settings
      sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        reload: yes
      with_items:
        - { name: "net.bridge.bridge-nf-call-iptables", value: "1" }
        - { name: "net.bridge.bridge-nf-call-ip6tables", value: "1" }
        - { name: "net.ipv4.ip_forward", value: "1" }

    - name: Install IPVS
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - ipset
          - ipvsadm

    - name: Write ipvs.modules file
      copy:
        dest: /etc/modules-load.d/ipvs.modules
        mode: 0755
        content: |
          #!/bin/bash
          modprobe -- ip_vs
          modprobe -- ip_vs_rr
          modprobe -- ip_vs_wrr
          modprobe -- ip_vs_sh
          modprobe -- nf_conntrack
          modprobe -- overlay
          modprobe -- br_netfilter

    - name: Execute ipvs.modules script
      shell: sh /etc/modules-load.d/ipvs.modules

    - name: Install Containerd
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - containerd.io

    - name: Generate default containerd file
      shell: containerd config default > /etc/containerd/config.toml

    - name: Config sandbox image
      shell: sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#g' /etc/containerd/config.toml

    - name: Modify Systemd Cgroup
      shell: sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
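    # kubeadm defaults the kubelet to the systemd cgroup driver, so containerd
    # must be switched to SystemdCgroup = true to match.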

    - name: Restart and enable Containerd
      systemd:
        name: containerd
        state: restarted
        enabled: yes

- name: Install Kubernetes Master
  hosts: master
  become: yes
  vars:
    kubernetes_version: "1.28.5"
    pod_network_cidr: "172.16.0.0/16"
    service_cidr: "10.96.0.0/16"
    image_repository: "registry.aliyuncs.com/google_containers"
    calico_version: "v3.26.4"
  tasks:
    - name: Install Master kubernetes packages
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - kubelet={{ kubernetes_version }}-1.1
          - kubeadm={{ kubernetes_version }}-1.1
          - kubectl={{ kubernetes_version }}-1.1

    - name: Initialize Kubernetes Master
      command: kubeadm init --kubernetes-version={{ kubernetes_version }} --pod-network-cidr={{ pod_network_cidr }} --service-cidr={{ service_cidr }} --image-repository={{ image_repository }}
      register: kubeadm_output
      changed_when: "'kubeadm join' in kubeadm_output.stdout"

    - name: Save join command
      copy:
        content: |
          {{ kubeadm_output.stdout_lines[-2] }}
          {{ kubeadm_output.stdout_lines[-1] }}
        dest: /root/kubeadm_join_master.sh
      when: kubeadm_output.changed

    - name: clean up join master script
      shell: sed -i 's/"//g' /root/kubeadm_join_master.sh

    - name: copy kubernetes config
      shell: mkdir -p {{ ansible_env.HOME }}/.kube && cp -i /etc/kubernetes/admin.conf {{ ansible_env.HOME }}/.kube/config

    - name: enable kubelet
      command: systemctl enable kubelet

    - name: Create calico directory
      file:
        path: "{{ ansible_env.HOME }}/calico/{{ calico_version }}"
        state: directory

    - name: download calico tigera-operator.yaml
      command: wget https://ghproxy.net/https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/tigera-operator.yaml -O {{ ansible_env.HOME }}/calico/{{ calico_version }}/tigera-operator.yaml

    - name: download calico custom-resources.yaml
      command: wget https://ghproxy.net/https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/custom-resources.yaml -O {{ ansible_env.HOME }}/calico/{{ calico_version }}/custom-resources.yaml

    - name: set calico network block size
      replace:
        path: "{{ ansible_env.HOME }}/calico/{{ calico_version }}/custom-resources.yaml"
        regexp: "blockSize: 26"
        replace: "blockSize: 24"

    - name: set calico ip pools
      replace:
        path: "{{ ansible_env.HOME }}/calico/{{ calico_version }}/custom-resources.yaml"
        regexp: "cidr: 192.168.0.0/16"
        replace: "cidr: {{ pod_network_cidr }}"

    - name: apply calico tigera-operator.yaml
      command: kubectl create -f {{ ansible_env.HOME }}/calico/{{ calico_version }}/tigera-operator.yaml

    - name: apply calico custom-resources.yaml
      command: kubectl create -f {{ ansible_env.HOME }}/calico/{{ calico_version }}/custom-resources.yaml

    - name: set crictl config
      command: crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock

- name: Install Kubernetes worker
  hosts: worker
  become: yes
  vars:
    kubernetes_version: "1.28.5"
  tasks:
    - name: Install worker kubernetes packages
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - kubelet={{ kubernetes_version }}-1.1
          - kubeadm={{ kubernetes_version }}-1.1

    - name: copy kubeadm join script to workers
      copy:
        src: /root/kubeadm_join_master.sh
        dest: /root/kubeadm_join_master.sh
        mode: 0755
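    # Note: the copy module reads src from the Ansible control node, so this step
    # assumes the playbook is run from the master node where /root/kubeadm_join_master.sh
    # was generated; otherwise fetch the script from the master first.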

    - name: worker join to cluster
      command: sh /root/kubeadm_join_master.sh

    - name: set crictl config
      command: crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock

    - name: enable kubelet
      command: systemctl enable kubelet

4. Running the Kubernetes Playbook

    ansible-playbook install-kubernetes.yml

  • Cluster node status

    kubectl get node -o wide

  • Cluster pod status

    kubectl get pod -A
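
Because Calico is installed through the tigera-operator, its pods come up in the calico-system namespace; an optional check while waiting for the nodes to become Ready:

    kubectl get pods -n calico-system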
