One-Click Deployment of a Kubernetes 1.28.2 Cluster with Ansible

Environment:

OS: Ubuntu 24.04, with passwordless SSH and sudo privileges for the deployment account configured in advance

Servers: 5

master01: 192.168.1.80

master02: 192.168.1.81

master03: 192.168.1.82

node01: 192.168.1.83

node02: 192.168.1.84

Control-plane load balancing: HAProxy + Keepalived

Cluster network (CNI): Calico

Cluster load balancer: MetalLB

Before deploying, update and upgrade all packages so that automatic updates cannot grab the apt/dpkg lock mid-run and break the installation. The following Ansible play can serve as a reference for that update step:

- name: Kubernetes node apt baseline initialization
  hosts: all
  become: yes
  gather_facts: yes

  vars:
    ubuntu_mirror: "https://mirrors.tuna.tsinghua.edu.cn/ubuntu"
    release: "{{ ansible_distribution_release }}"
    sources_backup: "/etc/apt/sources.list.bak.{{ ansible_date_time.date }}"

  pre_tasks:
    - name: Stop and disable unattended-upgrades (avoid dpkg lock for k8s install)
      systemd:
        name: unattended-upgrades
        state: stopped
        enabled: no
      ignore_errors: yes

    - name: Wait for any apt or dpkg process to finish
      shell: |
        while pgrep -x apt >/dev/null || pgrep -x apt-get >/dev/null || pgrep -x dpkg >/dev/null; do
          sleep 5
        done
      changed_when: false

  tasks:
    - name: Backup original sources.list if exists
      copy:
        src: /etc/apt/sources.list
        dest: "{{ sources_backup }}"
        remote_src: yes
      ignore_errors: yes

    - name: Disable Ubuntu deb822 source (ubuntu.sources)
      file:
        path: /etc/apt/sources.list.d/ubuntu.sources
        state: absent

    - name: Write clean sources.list for Kubernetes nodes
      copy:
        dest: /etc/apt/sources.list
        mode: '0644'
        content: |
          deb {{ ubuntu_mirror }} {{ release }} main restricted universe multiverse
          deb {{ ubuntu_mirror }} {{ release }}-updates main restricted universe multiverse
          deb {{ ubuntu_mirror }} {{ release }}-security main restricted universe multiverse

    - name: Force apt to use IPv4
      copy:
        dest: /etc/apt/apt.conf.d/99force-ipv4
        mode: '0644'
        content: |
          Acquire::ForceIPv4 "true";

    - name: Apt update (Kubernetes node baseline)
      apt:
        update_cache: yes
        cache_valid_time: 3600

    - name: Dist upgrade system packages
      apt:
        upgrade: dist
        autoremove: yes
        autoclean: yes
      environment:
        DEBIAN_FRONTEND: noninteractive
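A minimal way to run this baseline play, assuming it is saved as playbooks/apt-baseline.yml (the filename is illustrative), followed by a spot check that the mirror landed and unattended-upgrades is off:

~/ansible/k8s-cluster$ ansible-playbook playbooks/apt-baseline.yml
~/ansible/k8s-cluster$ ansible k8s-master01 -m shell -a "grep '^deb' /etc/apt/sources.list"
~/ansible/k8s-cluster$ ansible k8s_cluster -m shell -a "systemctl is-enabled unattended-upgrades || true"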

To keep image pulls from failing due to network issues, download all required images ahead of time, as sketched below.
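For example, on any machine with containerd and Internet access, the tarballs under roles/*/files/ can be produced with ctr. The image references below are the upstream images matching the filenames in the directory tree that follows; this is a sketch, so adjust versions or registries if you mirror elsewhere:

# Calico images for roles/common/files/
ctr images pull docker.io/calico/cni:v3.27.4
ctr images export cni-v3.27.4.tar docker.io/calico/cni:v3.27.4
ctr images pull docker.io/calico/node:v3.27.4
ctr images export node-v3.27.4.tar docker.io/calico/node:v3.27.4
ctr images pull docker.io/calico/kube-controllers:v3.27.4
ctr images export controllers-v3.27.4.tar docker.io/calico/kube-controllers:v3.27.4

# MetalLB images for roles/metallb/files/
ctr images pull quay.io/metallb/controller:v0.15.0
ctr images export controller-v0.15.0.tar quay.io/metallb/controller:v0.15.0
ctr images pull quay.io/metallb/speaker:v0.15.0
ctr images export speaker-v0.15.0.tar quay.io/metallb/speaker:v0.15.0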

Directory structure:

~/ansible/k8s-cluster$ cat ansible.cfg
[defaults]
inventory = ./inventory/cluster.ini
remote_user = k8s
private_key_file = ~/.ssh/id_rsa
host_key_checking = False
roles_path = ./roles

[privilege_escalation]
become = True
become_method = sudo

~/ansible/k8s-cluster$ tree
.
├── ansible.cfg
├── inventory
│   ├── cluster.ini
│   └── group_vars
│       └── k8s_cluster.yml
├── playbooks
│   └── deploy-k8s.yml
└── roles
    ├── common
    │   ├── files
    │   │   ├── cni-plugins-linux-amd64-v1.8.0.tgz
    │   │   ├── cni-v3.27.4.tar
    │   │   ├── containerd-1.7.29-linux-amd64.tar.gz
    │   │   ├── controllers-v3.27.4.tar
    │   │   ├── cri-containerd-1.7.29-linux-amd64.tar.gz
    │   │   ├── libseccomp-2.5.5.tar.gz
    │   │   ├── node-v3.27.4.tar
    │   │   └── runc.amd64
    │   ├── handlers
    │   │   └── main.yml
    │   └── tasks
    │       └── main.yml
    ├── ha
    │   ├── handlers
    │   │   └── main.yml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       ├── haproxy.cfg
    │       └── keepalived.conf
    ├── master
    │   ├── files
    │   │   └── calico.yaml
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── kube-proxy-config.yaml
    ├── metallb
    │   ├── files
    │   │   ├── controller-v0.15.0.tar
    │   │   ├── metallb-native.yaml
    │   │   └── speaker-v0.15.0.tar
    │   ├── tasks
    │   │   └── main.yml
    │   └── templates
    │       └── metallb-ip-pool.yaml
    └── node
        └── tasks
            └── main.yml

Cluster inventory and configuration parameters:

~/ansible/k8s-cluster$ cat inventory/group_vars/k8s_cluster.yml
k8s_version: "1.28"   # minor version; apt pins packages to 1.28.*, the kubeadm tasks append the .2 patch release
pod_network_cidr: "10.244.0.0/16"
service_cidr: "10.96.0.0/12"
kube_user: "k8s"

container_registry: "registry.aliyuncs.com/google_containers"

control_plane_vip: "192.168.1.85"
keepalived_auth_pass: "auth_025"


skip_swap_check: false

calico_manifest_url: "https://raw.githubusercontent.com/projectcalico/calico/v3.27.4/manifests/calico.yaml"


# --- MetalLB configuration ---
metallb_enabled: true
metallb_version: "v0.15.0"
metallb_ip_pool_start: "192.168.1.70"
metallb_ip_pool_end: "192.168.1.79"
metallb_ip_pool_name: "default-pool"



~/ansible/k8s-cluster$ cat inventory/cluster.ini
[master]
k8s-master01 ansible_host=192.168.1.80
k8s-master02 ansible_host=192.168.1.81
k8s-master03 ansible_host=192.168.1.82

[node]
k8s-node01 ansible_host=192.168.1.83
k8s-node02 ansible_host=192.168.1.84

[k8s_cluster:children]
master
node
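Before the full run, it is worth a quick ad-hoc check that the inventory, SSH keys, and sudo escalation work on all five hosts:

~/ansible/k8s-cluster$ ansible k8s_cluster -m ping
~/ansible/k8s-cluster$ ansible k8s_cluster -m command -a "whoami"   # become=True in ansible.cfg, so this should print root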

Common role handlers:

~/ansible/k8s-cluster$ cat roles/common/handlers/main.yml
---
- name: reload sysctl
  command: sysctl --system

- name: restart kubelet
  systemd:
    name: kubelet
    state: restarted
    enabled: yes

Common role tasks (roles/common/tasks/main.yml):

---
# ==============================
# 1. Disable swap and enable IPVS kernel support
# ==============================
- name: Disable swap and comment in /etc/fstab
  shell: |
    swapoff -a
    sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
  become: yes

- name: Install ipvsadm for IPVS support
  apt:
    name: ipvsadm
    state: present
    update_cache: yes
  become: yes

- name: Define all required kernel modules for Kubernetes
  set_fact:
    k8s_kernel_modules:
      - overlay
      - br_netfilter
      - ip_vs
      - ip_vs_rr
      - ip_vs_wrr
      - ip_vs_sh
      - nf_conntrack

- name: Load required kernel modules
  modprobe:
    name: "{{ item }}"
    state: present
  loop: "{{ k8s_kernel_modules }}"
  become: yes

- name: Persist kernel modules on boot
  copy:
    content: |
      {% for mod in k8s_kernel_modules %}
      {{ mod }}
      {% endfor %}
    dest: /etc/modules-load.d/k8s-modules.conf
    mode: '0644'
  become: yes

- name: Configure sysctl settings for Kubernetes
  copy:
    content: |
      net.bridge.bridge-nf-call-iptables = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-arptables = 1
      net.ipv4.ip_forward = 1
      net.ipv6.conf.all.forwarding = 1
    dest: /etc/sysctl.d/99-k8s.conf
    mode: '0644'
  become: yes

- name: Apply sysctl settings
  command: sysctl --system
  become: yes


# ==============================
# 2. Install base system dependencies
# ==============================
- name: Disable unattended-upgrades to avoid apt lock
  systemd:
    name: unattended-upgrades
    state: stopped
    enabled: false
  ignore_errors: yes

- name: Install required system packages
  apt:
    name:
      - apt-transport-https
      - ca-certificates
      - curl
      - gnupg
      - lsb-release
      - software-properties-common
    update_cache: yes
    cache_valid_time: 3600
  become: yes

# ==============================
# 3. Configure the Kubernetes APT repository
# ==============================

- name: Ensure /etc/apt/keyrings directory exists
  file:
    path: /etc/apt/keyrings
    state: directory
    mode: '0755'
  become: yes

- name: Download Kubernetes GPG key from Aliyun
  get_url:
    url: https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg
    dest: /tmp/kubernetes-apt-key.gpg
    mode: '0644'
  become: yes

- name: Convert GPG key to keyring format
  command: gpg --batch --yes --dearmor -o /etc/apt/keyrings/kubernetes-aliyun.gpg /tmp/kubernetes-apt-key.gpg
  args:
    creates: /etc/apt/keyrings/kubernetes-aliyun.gpg
  become: yes

- name: Add Kubernetes APT repository (Aliyun)
  apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/kubernetes-aliyun.gpg] https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main"
    filename: kubernetes-aliyun
    state: present
  become: yes

# ==============================
# 4. Install the containerd runtime (offline)
# ==============================
- name: Install libseccomp2 (required by containerd)
  apt:
    name: libseccomp2
    state: present
    update_cache: yes
  become: yes

- name: Copy cri-containerd archive
  copy:
    src: cri-containerd-1.7.29-linux-amd64.tar.gz
    dest: /tmp/cri-containerd.tar.gz
  become: yes

- name: Extract cri-containerd to /
  unarchive:
    src: /tmp/cri-containerd.tar.gz
    dest: /
    remote_src: yes
  become: yes

- name: Copy runc binary
  copy:
    src: runc.amd64
    dest: /usr/local/bin/runc
    mode: '0755'
  become: yes

- name: Create CNI bin directory
  file:
    path: /opt/cni/bin
    state: directory
    mode: '0755'
  become: yes

- name: Copy CNI plugins
  copy:
    src: cni-plugins-linux-amd64-v1.8.0.tgz
    dest: /tmp/cni-plugins.tgz
  become: yes

- name: Extract CNI plugins
  unarchive:
    src: /tmp/cni-plugins.tgz
    dest: /opt/cni/bin
    remote_src: yes
  become: yes

# ==============================
# 5. Configure containerd (important: stop, reconfigure, then start)
# ==============================

- name: Ensure /etc/containerd directory exists
  file:
    path: /etc/containerd
    state: directory
    owner: root
    group: root
    mode: '0755'
  become: yes

- name: Stop containerd before config change
  systemd:
    name: containerd
    state: stopped
  become: yes

- name: Generate default containerd config
  shell: /usr/local/bin/containerd config default > /etc/containerd/config.toml
  args:
    creates: /etc/containerd/config.toml
  become: yes

- name: Set SystemdCgroup = true (fix TOML boolean)
  lineinfile:
    path: /etc/containerd/config.toml
    regexp: '^(\s*)SystemdCgroup\s*=\s*\w+'
    line: '\1SystemdCgroup = true'
    backrefs: yes
  become: yes

- name: Set sandbox_image to Alibaba registry
  lineinfile:
    path: /etc/containerd/config.toml
    regexp: '^(\s*)sandbox_image\s*=\s*".*"'
    line: '\1sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"'
    backrefs: yes
  become: yes

- name: Start containerd with new config
  systemd:
    name: containerd
    enabled: yes
    state: started
    daemon_reload: yes
  become: yes

- name: Wait for containerd socket
  wait_for:
    path: /run/containerd/containerd.sock
    timeout: 30
  become: yes

# ==============================
# 6. Copy the offline Calico images and import them into containerd
# ==============================

- name: Copy Calico image tarballs to all nodes
  copy:
    src: "{{ item }}"
    dest: "/tmp/{{ item }}"
    mode: '0644'
  loop:
    - cni-v3.27.4.tar
    - node-v3.27.4.tar
    - controllers-v3.27.4.tar
  become: yes

- name: Import Calico images via ctr (marker file keeps this idempotent)
  shell: "ctr -n k8s.io images import /tmp/{{ item }} && touch /tmp/.{{ item }}-imported"
  loop:
    - cni-v3.27.4.tar
    - node-v3.27.4.tar
    - controllers-v3.27.4.tar
  args:
    creates: "/tmp/.{{ item }}-imported"
  become: yes


# ==============================
# 7. Install kubelet, kubeadm, kubectl
# ==============================
- name: Install Kubernetes components
  apt:
    name:
      - "kubelet={{ k8s_version }}.*"
      - "kubeadm={{ k8s_version }}.*"
      - "kubectl={{ k8s_version }}.*"
    state: present
    update_cache: yes
  become: yes

- name: Hold Kubernetes packages
  dpkg_selections:
    name: "{{ item }}"
    selection: hold
  loop:
    - kubelet
    - kubeadm
    - kubectl
  become: yes

- name: Ensure kubelet is stopped and disabled before kubeadm
  systemd:
    name: kubelet
    enabled: no
    state: stopped
  become: yes
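After the common role has run, each node can be spot-checked with a few commands (run on any node; expected results in the comments):

swapon --show                              # no output: swap is off
lsmod | grep -E 'br_netfilter|ip_vs'       # required kernel modules loaded
sysctl net.ipv4.ip_forward                 # expect 1
ctr version                                # containerd client and server respond
ctr -n k8s.io images ls | grep calico      # offline Calico images imported
kubeadm version -o short                   # expect v1.28.x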

Control-plane task list (roles/master/tasks/main.yml):

---
# ==============================
# Determine whether the current host is the first master (gates run-once tasks)
# ==============================
- name: Check if first master
  set_fact:
    is_first_master: "{{ inventory_hostname == groups['master'][0] }}"

# ==============================
# All masters: ensure HAProxy and Keepalived are running and enabled at boot (for the HA VIP)
# ==============================
- name: Ensure HAProxy and Keepalived are running and enabled
  systemd:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop:
    - keepalived
    - haproxy
  become: yes

# ==============================
# First master only: pre-pull the images Kubernetes needs
# ==============================
- name: Pre-pull images (first master)
  command: kubeadm config images pull
           --image-repository={{ container_registry }}
           --kubernetes-version=v{{ k8s_version }}.2
  when: is_first_master
  become: yes
  register: prepull_result
  retries: 3
  delay: 10
  until: prepull_result is succeeded

# ==============================
# Initialize the control plane (runs once, on the first master only)
# ==============================
- name: Initialize control plane
  command: >
    kubeadm init
    --image-repository={{ container_registry }}
    --control-plane-endpoint={{ control_plane_vip }}:16443
    --pod-network-cidr={{ pod_network_cidr }}
    --service-cidr={{ service_cidr }}
    --kubernetes-version=v{{ k8s_version }}.2
    --upload-certs
  args:
    creates: /etc/kubernetes/admin.conf
  register: kubeadm_init_output
  run_once: true
  delegate_to: "{{ groups['master'][0] }}"
  become: yes

- name: Ensure kubelet is started and enabled (first master)
  systemd:
    name: kubelet
    state: started
    enabled: yes
  when: is_first_master
  become: yes

# ==============================
# First master only: generate the certificate key and join command (used by the other masters/workers)
# ==============================
- name: Generate certificate key
  command: kubeadm init phase upload-certs --upload-certs
  register: cert_key_cmd
  run_once: true
  delegate_to: "{{ groups['master'][0] }}"
  become: yes

- name: Set certificate key fact (global)
  set_fact:
    cert_key: "{{ cert_key_cmd.stdout | regex_search('([a-f0-9]{64})') }}"
  run_once: true

- name: Generate join command
  command: kubeadm token create --print-join-command
  register: join_cmd
  run_once: true
  delegate_to: "{{ groups['master'][0] }}"
  become: yes
  changed_when: false

- name: Set join command fact (global)
  set_fact:
    join_command: "{{ join_cmd.stdout }}"
  run_once: true

# ==============================
# Remaining masters: join the existing control plane (the certificate key secures the control-plane join)
# ==============================
- name: Join additional masters
  command: >
    {{ join_command }}
    --control-plane
    --certificate-key {{ cert_key }}
  args:
    creates: /etc/kubernetes/kubelet.conf
  when: inventory_hostname != groups['master'][0]
  become: yes

# ==============================
# All masters: set up the local kubeconfig
# ==============================
- name: Create .kube dir
  file:
    path: "/home/{{ kube_user }}/.kube"
    state: directory
    owner: "{{ kube_user }}"
    group: "{{ kube_user }}"
    mode: '0700'
  become: yes

- name: Copy kubeconfig (first master)
  copy:
    src: /etc/kubernetes/admin.conf
    dest: "/home/{{ kube_user }}/.kube/config"
    owner: "{{ kube_user }}"
    group: "{{ kube_user }}"
    mode: '0600'
    remote_src: yes
  when: is_first_master
  become: yes

# ==============================
# First master only: deploy cluster add-ons (kube-proxy + Calico CNI)
# The Calico images have already been imported offline on every node
# ==============================
- name: Deploy kube-proxy addon
  command: kubeadm init phase addon kube-proxy
  environment:
    KUBECONFIG: "/home/{{ kube_user }}/.kube/config"
  become: yes
  when: is_first_master

- name: Patch kube-proxy image
  command: >
    kubectl -n kube-system set image daemonset/kube-proxy
    kube-proxy={{ container_registry }}/kube-proxy:v{{ k8s_version }}.2
  environment:
    KUBECONFIG: "/home/{{ kube_user }}/.kube/config"
  become: yes
  when: is_first_master

- name: Copy Calico manifest (pre-downloaded from calico_manifest_url)
  copy:
    src: calico.yaml
    dest: /tmp/calico.yaml
    mode: '0644'
  when: is_first_master

- name: Apply Calico
  command: kubectl apply -f /tmp/calico.yaml
  environment:
    KUBECONFIG: "/home/{{ kube_user }}/.kube/config"
  become: yes
  when: is_first_master

- name: Wait for all nodes to be Ready
  command: kubectl wait --for=condition=Ready node --all --timeout=300s
  environment:
    KUBECONFIG: "/home/{{ kube_user }}/.kube/config"
  become: yes
  when: is_first_master
  register: nodes_ready
  retries: 3
  delay: 30
  until: nodes_ready is succeeded

# ==============================
# Optional: save the worker join command to a file (for later manual or automated joins)
# ==============================
- name: Save join command
  copy:
    content: "{{ join_command }}"
    dest: /tmp/k8s_join_cmd.sh
    mode: '0644'
  when: is_first_master
  run_once: true
  delegate_to: "{{ groups['master'][0] }}"
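Once this task file has finished on all three masters, the control plane can be sanity-checked from the first master as the k8s user (whose kubeconfig was installed above):

kubectl get nodes                          # three control-plane nodes listed
kubectl -n kube-system get pods -o wide    # etcd/apiserver/calico pods Running
kubectl cluster-info                       # API served via https://192.168.1.85:16443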

~/ansible/k8s-cluster$ cat roles/master/templates/kube-proxy-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
  labels:
    app: kube-proxy
data:
  config.conf: |
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      syncPeriod: 30s
      scheduler: "rr"
      strictARP: true
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    metricsBindAddress: "0.0.0.0:10249"
    bindAddress: 0.0.0.0
    healthzBindAddress: 0.0.0.0:10256
    clientConnection:
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
    clusterCIDR: "{{ pod_network_cidr }}"
    hostnameOverride: ""
  kubeconfig.conf: |
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://{{ control_plane_vip }}:16443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
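Note that none of the tasks above render this template; it is kept as a reference for switching kube-proxy to IPVS mode. If you do apply it (for example by replacing the kube-proxy ConfigMap and restarting the daemonset), IPVS operation can be verified like this:

kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50 | grep -i ipvs
ipvsadm -Ln | head                         # IPVS virtual server table should be populated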

Control-plane HA configuration:

~/ansible/k8s-cluster$ cat roles/ha/handlers/main.yml
---
- name: restart haproxy
  systemd:
    name: haproxy
    state: restarted
  become: yes

- name: restart keepalived
  systemd:
    name: keepalived
    state: restarted
  become: yes

~/ansible/k8s-cluster$ cat roles/ha/templates/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode tcp
    option tcplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend k8s-api
    bind *:16443
    mode tcp
    default_backend k8s-api-backend

backend k8s-api-backend
    mode tcp
    balance roundrobin
    option tcp-check
{% for host in groups['master'] %}
    server {{ host }} {{ hostvars[host]['ansible_host'] }}:6443 check
{% endfor %}

~/ansible/k8s-cluster$ cat roles/ha/templates/keepalived.conf
global_defs {
    router_id {{ inventory_hostname }}
}

vrrp_instance VI_1 {
    state BACKUP
#    state {{ 'MASTER' if inventory_hostname == groups['master'][0] else 'BACKUP' }}
    interface {{ ansible_default_ipv4.interface }}
    virtual_router_id 99
    priority {{ keepalived_priority }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass {{ keepalived_auth_pass }}
    }
    virtual_ipaddress {
        {{ control_plane_vip }}/24
    }
}


~/ansible/k8s-cluster$ cat roles/ha/tasks/main.yml
---
- name: Install HAProxy and Keepalived
  apt:
    name:
      - haproxy
      - keepalived
    state: present
    update_cache: yes
  become: yes

- name: Set Keepalived priority (first master = 101, then 100, 99...)
  set_fact:
    keepalived_priority: "{{ 101 - (groups['master'].index(inventory_hostname) | int) }}"

- name: Ensure config directories exist
  file:
    path: "{{ item }}"
    state: directory
    mode: '0755'
  loop:
    - /etc/haproxy
    - /etc/keepalived
  become: yes

- name: Deploy HAProxy config
  template:
    src: haproxy.cfg
    dest: /etc/haproxy/haproxy.cfg
    mode: '0644'
  notify: restart haproxy

- name: Enable HAProxy service
  lineinfile:
    path: /etc/default/haproxy
    regexp: '^ENABLED='
    line: 'ENABLED=1'
  become: yes

- name: Deploy Keepalived config
  template:
    src: keepalived.conf
    dest: /etc/keepalived/keepalived.conf
    mode: '0600'
  notify: restart keepalived

- name: Ensure HAProxy and Keepalived are started and enabled
  systemd:
    name: "{{ item }}"
    state: started
    enabled: yes
  loop:
    - haproxy
    - keepalived
  become: yes
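With the configs deployed, the VIP and the HAProxy frontend can be checked from the masters (the VIP should sit on exactly one node at a time):

ip addr show | grep 192.168.1.85           # present on one master only
nc -zv 192.168.1.85 16443                  # HAProxy frontend reachable
systemctl status haproxy keepalived --no-pager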

Worker node task list:

~/ansible/k8s-cluster$ cat roles/node/tasks/main.yml
---
- name: Wait for API server
  wait_for:
    host: "{{ control_plane_vip }}"
    port: 16443
    delay: 10
    timeout: 300

- name: Fetch join command
  delegate_to: "{{ groups['master'][0] }}"
  slurp:
    src: /tmp/k8s_join_cmd.sh
  register: join_script

- name: Join node to cluster and ensure kubelet is running/enabled
  shell: |
    set -e
    {{ join_script.content | b64decode }}
    systemctl enable --now kubelet
  args:
    executable: /bin/bash
    creates: /etc/kubernetes/kubelet.conf
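After the node play, both workers should show up in the cluster; from the first master:

kubectl get nodes -o wide                                        # k8s-node01/k8s-node02 Ready
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide   # one calico-node pod per node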

MetalLB load balancer task list (roles/metallb/tasks/main.yml):

# -------------------------------
# Deploy MetalLB (backs Kubernetes LoadBalancer-type Services)
# using the locally pre-downloaded YAML manifest and images
# -------------------------------
- name: Copy MetalLB manifest to the first master
  copy:
    src: metallb-native.yaml
    dest: /tmp/metallb-native.yaml
    mode: '0644'
  run_once: true
  delegate_to: "{{ groups['master'][0] }}"
  become: yes

- name: Copy MetalLB controller image tarball to remote node
  copy:
    src: controller-v0.15.0.tar
    dest: /tmp/controller-v0.15.0.tar
  become: yes

- name: Copy MetalLB speaker image tarball to remote node
  copy:
    src: speaker-v0.15.0.tar
    dest: /tmp/speaker-v0.15.0.tar
  become: yes

- name: Import MetalLB images via ctr (marker file keeps this idempotent)
  shell: "ctr -n k8s.io images import --all-platforms /tmp/{{ item }}.tar && touch /tmp/.{{ item }}-imported"
  loop:
    - controller-v0.15.0
    - speaker-v0.15.0
  args:
    creates: "/tmp/.{{ item }}-imported"
  become: yes

- name: Enable strictARP for kube-proxy
  shell: |
    kubectl get configmap kube-proxy -n kube-system -o yaml | \
    sed -e "s/strictARP: false/strictARP: true/" | \
    kubectl apply -f -
  delegate_to: "{{ groups['master'][0] }}"
  run_once: true
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf

- name: Restart kube-proxy so the strictARP change takes effect
  command: kubectl -n kube-system rollout restart daemonset kube-proxy
  delegate_to: "{{ groups['master'][0] }}"
  run_once: true
  environment:
    KUBECONFIG: /etc/kubernetes/admin.conf
  become: yes

# -------------------------------
# Apply the MetalLB core component manifest
# -------------------------------
- name: Apply MetalLB manifests
  command: kubectl apply -f /tmp/metallb-native.yaml
  environment:
    KUBECONFIG: "/home/{{ kube_user }}/.kube/config"
  become: yes
  delegate_to: "{{ groups['master'][0] }}"
  run_once: true
  register: metallb_apply
  retries: 3
  delay: 5
  until: metallb_apply is succeeded

# -------------------------------
# Wait for all MetalLB pods to become ready
# -------------------------------
- name: Wait for MetalLB pods
  command: kubectl -n metallb-system wait --for=condition=ready pod --all --timeout=300s
  environment:
    KUBECONFIG: "/home/{{ kube_user }}/.kube/config"
  become: yes
  delegate_to: "{{ groups['master'][0] }}"
  run_once: true
  register: metallb_wait
  retries: 5
  delay: 15
  until: metallb_wait is succeeded

# -------------------------------
# Render the MetalLB IP address pool configuration
# -------------------------------
- name: Render IP pool config
  template:
    src: metallb-ip-pool.yaml
    dest: /tmp/metallb-ip-pool.yaml
    mode: '0644'
  run_once: true
  delegate_to: "{{ groups['master'][0] }}"

# -------------------------------
# Apply the MetalLB IP address pool configuration
# -------------------------------
- name: Apply MetalLB IP pool
  command: kubectl apply -f /tmp/metallb-ip-pool.yaml
  environment:
    KUBECONFIG: "/home/{{ kube_user }}/.kube/config"
  become: yes
  delegate_to: "{{ groups['master'][0] }}"
  run_once: true
  register: metallb_ip_pool_apply
  retries: 3
  delay: 5
  until: metallb_ip_pool_apply is succeeded

IP pool template:

~/ansible/k8s-cluster$ cat roles/metallb/templates/metallb-ip-pool.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: {{ metallb_ip_pool_name }}
  namespace: metallb-system
spec:
  addresses:
  - {{ metallb_ip_pool_start }}-{{ metallb_ip_pool_end }}
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - {{ metallb_ip_pool_name }}
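A quick end-to-end test of the pool once the cluster is up (the deployment name is illustrative, and it assumes the node can pull nginx or has a local copy):

kubectl create deployment lb-test --image=nginx
kubectl expose deployment lb-test --port=80 --type=LoadBalancer
kubectl get svc lb-test                    # EXTERNAL-IP assigned from 192.168.1.70-79
curl -I http://<EXTERNAL-IP>               # replace with the IP shown above
kubectl delete svc,deployment lb-test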

Playbook:

~/ansible/k8s-cluster$ cat playbooks/deploy-k8s.yml
---
- name: 🐧 Prepare all Ubuntu nodes (common setup)
  hosts: k8s_cluster
  roles:
    - common

- name: ⚖️ Deploy HAProxy + Keepalived for HA control plane
  hosts: master
  gather_facts: yes
  roles:
    - ha

- name: 👑 Deploy Kubernetes Master (with Aliyun container images)
  hosts: master
  roles:
    - master

- name: 🖥️ Join Worker Nodes to Cluster
  hosts: node
  roles:
    - node

- name: 🌐 Deploy MetalLB for LoadBalancer Services
  hosts: k8s_cluster
  roles:
    - metallb

With all manifests and configuration in place, deploy the whole cluster with a single command:

ansible-playbook playbooks/deploy-k8s.yml -v
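When the playbook completes, a final sanity pass from the control node:

~/ansible/k8s-cluster$ ansible k8s-master01 -m shell -a "kubectl --kubeconfig=/home/k8s/.kube/config get nodes -o wide"
~/ansible/k8s-cluster$ ansible k8s-master01 -m shell -a "kubectl --kubeconfig=/home/k8s/.kube/config get pods -A"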

File link: https://pan.baidu.com/s/1_0G04UkytYLVxy0F0IXa3g?pwd=844p

Extraction code: 844p
