k8s的CICD流水线环境搭建实验(containerd版)

k8s环境规划:

podSubnet(pod 网段): 10.20.0.0/16

serviceSubnet(service 网段): 10.10.0.0/16

实验环境规划:

操作系统:Ubuntu 20.04.3

配置: 4G 内存/2核CPU/120G 硬盘

网络: NAT

K8s集群角色 ip 主机名 部署组件
控制节点(master) 192.168.121.100 master apiserver、controller-manager、scheduler、etcd、containerd、node-exporter:v1.8.1
工作节点(node1) 192.168.121.101 node1 kubelet、kube-proxy、containerd、calico、coredns、node-exporter:v1.8.1、nginx:latest
工作节点(node2) 192.168.121.102 node2 kubelet、kube-proxy、containerd、calico、coredns、node-exporter:v1.8.1、nginx:latest
工作节点(node3) 192.168.121.103 node3 kubelet、kube-proxy、containerd、calico、coredns、node-exporter:v1.8.1、nginx:latest
负载均衡(lb)主 192.168.121.104 lb1 keepalived:v2.0.19、nginx:1.28.0、node-exporter:v1.8.1
负载均衡(lb)备 192.168.121.105 lb2 keepalived:v2.0.19、nginx:1.28.0、node-exporter:v1.8.1
镜像仓库(harbor+NFS) 192.168.121.106 harbor harbor:2.4.2、node-exporter:v1.8.1、nfs-kernel-server:1:1.3.4-2.5ubuntu3.7
统一访问入口(VIP) 192.168.121.188 - https://www.test.com

1 部署containerd: 1.7.18(k8s集群)

1.1 前置准备

bash 复制代码
# 添加docker官方 GPG密钥
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

# 更新系统并安装基础依赖
apt update && apt upgrade -y
apt install -y ca-certificates curl gnupg lsb-release apt-transport-https software-properties-common

# 加载必需内核模块(overlay/br_netfilter,容器存储/网络依赖)
modprobe overlay
modprobe br_netfilter
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

# 配置内核网络参数(确保容器网络转发/端口映射正常)
cat > /etc/sysctl.d/99-containerd.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sysctl --system
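
A quick sanity check for the block above; a minimal sketch using only lsmod/sysctl, nothing new is assumed:

bash 复制代码
# Both modules should be listed and all three sysctls should report 1
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward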

1.2 添加docker官方软件源

bash 复制代码
# 查看系统发行版本代号
lsb_release -cs

# 添加软件源(使用系统对应的发行版代号,Ubuntu 20.04 为 focal)
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null

1.3 更新源安装指定版本

bash 复制代码
apt update
apt install -y containerd.io=1.7.18-1

1.4 适配 systemd

容器运行时 | Kubernetes

bash 复制代码
# 生成默认配置文件(containerd 无默认配置,需手动生成)
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

# 配置 systemd cgroup 驱动
# 结合 runc 使用 systemd cgroup 驱动,在 /etc/containerd/config.toml 中设置:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
    
# 一键修改 SystemdCgroup = true(适配 Ubuntu 的 systemd 管理)
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
# 替换 pause 镜像仓库为国内源
sudo sed -i 's/registry.k8s.io\/pause/registry.aliyuncs.com\/google_containers\/pause/g' /etc/containerd/config.toml


# 重启服务使配置生效
systemctl restart containerd && systemctl enable containerd
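
Before moving on it is worth confirming that the cgroup-driver and sandbox-image edits really landed in the effective configuration; a minimal check sketch:

bash 复制代码
# The merged/effective config should show SystemdCgroup = true and the aliyun pause image
containerd config dump | grep -E 'SystemdCgroup|sandbox_image'
systemctl is-active containerd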

containerd 遇到了无法拉取镜像的问题,解决:

bash 复制代码
root@master:/# crictl pull docker.io/library/busybox:alpine

WARN[0000] Config "/etc/crictl.yaml" does not exist, trying next: "/usr/bin/crictl.yaml" 
WARN[0000] Image connect using default endpoints: [unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead. 
E1202 00:06:16.457412   16804 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"docker.io/library/busybox:alpine\": failed to resolve reference \"docker.io/library/busybox:alpine\": failed to do request: Head \"https://registry-1.docker.io/v2/library/busybox/manifests/alpine\": dial tcp 54.89.135.129:443: connect: connection refused" image="docker.io/library/busybox:alpine"
FATA[0020] pulling image: failed to pull and unpack image "docker.io/library/busybox:alpine": failed to resolve reference "docker.io/library/busybox:alpine": failed to do request: Head "https://registry-1.docker.io/v2/library/busybox/manifests/alpine": dial tcp 54.89.135.129:443: connect: connection refused 

# 配置 crictl 指定容器运行时端点解决警告
vim /etc/crictl.yaml
--------------------------
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
pull-image-on-create: false
---------------------------
crictl info  # 无报错则配置生效

# 如果没有/etc/containerd/config.toml,先生成默认配置:
containerd config default > /etc/containerd/config.toml

 # 网络代理
 # 创建代理配置文件
mkdir -p /etc/systemd/system/containerd.service.d
cat > /etc/systemd/system/containerd.service.d/proxy.conf << EOF
[Service]
Environment="HTTP_PROXY=http://192.168.121.1:7890"
Environment="HTTPS_PROXY=http://192.168.121.1:7890"
Environment="NO_PROXY=harbor.test.com,www.test.com,localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,172.16.0.0/12,*.local,kubernetes.default,service,*.cluster.local,192.168.121.100,192.168.121.*"
EOF


# 重新加载配置并重启
systemctl daemon-reload
systemctl restart containerd

root@master1:~# crictl pull docker.io/library/nginx:alpine
Image is up to date for sha256:d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9

2 部署k8s集群

官网链接安装 kubeadm | Kubernetes

2.1 安装 kubelet kubeadm kubectl 1.32.10 每台服务器执行

bash 复制代码
# 配置内核参数(开启 IPVS/IP 转发)
# 加载内核模块
sudo tee /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

sudo modprobe overlay && sudo modprobe br_netfilter && sudo modprobe ip_vs

# 配置sysctl参数
sudo tee /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system  # 生效配置

# 更新apt包索引
apt-get update

# 安装k8s apt仓库需要的包
apt-get install -y apt-transport-https ca-certificates curl gpg


# 下载用于k8s软件包仓库的公共签名密钥
# Ubuntu 20.04 默认没有 /etc/apt/keyrings 目录,先创建
sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# 添加 K8s apt 仓库
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

# 更新 apt 包索引,安装 kubelet、kubeadm 和 kubectl,并锁定版本
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
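
One prerequisite the steps above skip: kubelet will not start while swap is enabled. A minimal sketch to disable it on every node, assuming the standard /etc/fstab swap entry:

bash 复制代码
# Turn swap off now and keep it off after reboot (kubeadm preflight fails otherwise)
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^#*/#/' /etc/fstab
free -h    # the Swap line should read 0B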

2.2 kubeadm初始化k8s集群

初始化过程中遇到了报错 pause版本不一致

bash 复制代码
sed -i 's#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.8"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10"#g' /etc/containerd/config.toml
systemctl restart containerd

初始化 k8s 集群:

bash 复制代码
# kubeadm 初始化集群
root@master:~# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: v1.32.10
imageRepository: registry.aliyuncs.com/google_containers  # 阿里云镜像源(解决拉取镜像慢)
networking:
  podSubnet: 10.20.0.0/16  # 需和后续部署的网络插件网段匹配(如Calico适配此网段)
controlPlaneEndpoint: "192.168.121.100:6443"  # apiserver对外地址
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
nodeRegistration:
  ignorePreflightErrors:
    - SystemVerification  # 忽略cgroups v1警告
  criSocket: unix:///run/containerd/containerd.sock  # containerd套接字(正确格式)
  kubeletExtraArgs:
  - name: cgroup-driver
    value: "systemd"  # 需和containerd的cgroup驱动一致(默认就是systemd)
localAPIEndpoint:
  advertiseAddress: 192.168.121.100  # master节点IP(和controlPlaneEndpoint保持一致)
  bindPort: 6443

root@master:~# kubeadm init --config=kubeadm-config.yaml 

# 初始化成功后会有添加节点命令
kubeadm join 192.168.121.100:6443 --token vzkqeo.ag5p7buea1r03g3x \
	--discovery-token-ca-cert-hash sha256:d2c3e4330239f19f0d9402bbe457482ef5e2f78dbaf32ff6ddcc781d979179d9

2.3 配置管理权限

bash 复制代码
#配置 kubectl 的配置文件 config,相当于对 kubectl 进行授权,这样 kubectl 命令可以使用这个证书对 k8s 集群进行管理
root@master:~# mkdir -p $HOME/.kube
root@master:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@master:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
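
A short verification that the kubeconfig works; the master will stay NotReady until the CNI plugin from section 2.6 is installed:

bash 复制代码
kubectl cluster-info
kubectl get nodes -o wide
kubectl get pods -n kube-system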

2.4 扩容工作节点

bash 复制代码
# 在每一个工作节点输入
kubeadm join 192.168.121.100:6443 --token e6p5bq.bqju9z9dqwj2ydvy \
    --discovery-token-ca-cert-hash sha256:9b3750aedaed5c1c3f95f689ce41d7da1951f2bebba6e7974a53e0b20754a09d 
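
The bootstrap token embedded in the join command expires after 24 hours by default. If a node is added later, a fresh command can be printed on the master; a minimal sketch:

bash 复制代码
# Print a new, ready-to-paste join command
kubeadm token create --print-join-command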

2.5 给工作节点打上 work 角色标签

bash 复制代码
root@master:~# kubectl label node node1 node-role.kubernetes.io/work=work
root@master:~# kubectl label node node2 node-role.kubernetes.io/work=work
root@master:~# kubectl label node node3 node-role.kubernetes.io/work=work

2.6 安装kubernetes 网络组件-Calico

bash 复制代码
# 下载Calico配置文件(适配k8s 1.32)
root@master:~# curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.30.0/manifests/calico.yaml
# 部署calico
root@master:~# kubectl apply -f  calico.yaml

# 验证网络插件状态(等待所有Pod Running)
kubectl get pods -n kube-system -w
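
A more targeted check than watching the whole namespace; assumes the k8s-app=calico-node label used by the stock calico.yaml manifest:

bash 复制代码
# Every node should run one calico-node pod, and nodes should turn Ready
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
kubectl get nodes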

2.7 安装 k8s 可视化 UI 界面 dashboard

部署和访问 Kubernetes 仪表板(Dashboard) | Kubernetes

Kubernetes Dashboard 目前仅支持基于 Helm 的安装,因为它速度更快, 并且可以让我们更好地控制 Dashboard 运行所需的所有依赖项。

2.7.1 安装dashboard

bash 复制代码
# 一键安装 Helm,访问国外需要代理
# 临时启用代理
root@master1:~# export http_proxy="http://192.168.121.1:7890"
root@master1:~# export https_proxy="http://192.168.121.1:7890"
root@master1:~# export no_proxy="localhost,127.0.0.1,192.168.121.100" # 一定要拒绝本机,否则Error: Kubernetes cluster unreachable: Get "https://192.168.121.100:6443/version": EOF报错
root@master1:~# echo $http_proxy
http://192.168.121.1:7890

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
# 验证安装
root@master1:~# helm version
version.BuildInfo{Version:"v3.19.2", GitCommit:"8766e718a0119851f10ddbe4577593a45fadf544", GitTreeState:"clean", GoVersion:"go1.24.9"}

# 添加 kubernetes-dashboard 仓库
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
# 使用 kubernetes-dashboard Chart 部署名为 `kubernetes-dashboard` 的 Helm Release
root@master1:~# helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
Release "kubernetes-dashboard" does not exist. Installing it now.
NAME: kubernetes-dashboard
LAST DEPLOYED: Tue Dec  2 13:52:02 2025
NAMESPACE: kubernetes-dashboard
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
*************************************************************************************************
*** PLEASE BE PATIENT: Kubernetes Dashboard may need a few minutes to get up and become ready ***
*************************************************************************************************

Congratulations! You have just installed Kubernetes Dashboard in your cluster.

To access Dashboard run:
  kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443

NOTE: In case port-forward command does not work, make sure that kong service name is correct.
      Check the services in Kubernetes Dashboard namespace using:
        kubectl -n kubernetes-dashboard get svc

Dashboard will be available at:
  https://localhost:8443
# 默认只能本机ip访问
root@master1:~# kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443
^Croot@master1:~# kubectl -n kubernetes-dashboard get svc
NAME                                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes-dashboard-api               ClusterIP   10.102.99.173    <none>        8000/TCP   3m30s
kubernetes-dashboard-auth              ClusterIP   10.111.62.117    <none>        8000/TCP   3m30s
kubernetes-dashboard-kong-proxy        ClusterIP   10.108.191.15    <none>        443/TCP    3m30s
kubernetes-dashboard-metrics-scraper   ClusterIP   10.102.152.162   <none>        8000/TCP   3m30s
kubernetes-dashboard-web               ClusterIP   10.96.189.177    <none>        8000/TCP   3m30s

2.7.2 修改svc成NodePort对外暴露端口

bash 复制代码
#修改 service type 类型变成 NodePort
kubectl edit svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard
root@master1:~# kubectl -n kubernetes-dashboard get svc
NAME                                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard-api               ClusterIP   10.102.99.173    <none>        8000/TCP        4m56s
kubernetes-dashboard-auth              ClusterIP   10.111.62.117    <none>        8000/TCP        4m56s
kubernetes-dashboard-kong-proxy        NodePort    10.108.191.15    <none>        443:32052/TCP   4m56s
kubernetes-dashboard-metrics-scraper   ClusterIP   10.102.152.162   <none>        8000/TCP        4m56s
kubernetes-dashboard-web               ClusterIP   10.96.189.177    <none>        8000/TCP        4m56s

浏览器访问地址https://192.168.121.102:32052
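
Instead of editing the Service interactively as above, the same change can be scripted; a minimal sketch (the nodePort number is then auto-assigned by the cluster):

bash 复制代码
# Switch kong-proxy to NodePort without opening an editor
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard-kong-proxy \
  -p '{"spec":{"type":"NodePort"}}'
kubectl -n kubernetes-dashboard get svc kubernetes-dashboard-kong-proxy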

2.7.3 通过 token 令牌访问 dashboard

创建示例用户参考:dashboard/docs/user/access-control/creating-sample-user.md at master · kubernetes/dashboard

bash 复制代码
root@master1:~# vim admin.yaml
-----------------------
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
  ---------------------
root@master1:~# kubectl apply -f admin.yaml 
serviceaccount/admin-user created
root@master1:~# vim clusterRole.yaml
------------------------
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
  -------------------------
root@master1:~# kubectl apply -f clusterRole.yaml 
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
root@master1:~# kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6ImRKb3A0UUdCVGFPeXc1UUtYZVIxV3h1YmJRRFNkNzdrT2NBU0otVjRUckUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY0NjU4OTI3LCJpYXQiOjE3NjQ2NTUzMjcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMzM3ZTM1NmUtN2VkYS00MTI2LWEyNDAtMDYxYzc2MWJkYmYzIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMzFjZjBiMzctNmU5Yy00YjYzLTg1ZjktNzJmMjFlNmRkYmNjIn19LCJuYmYiOjE3NjQ2NTUzMjcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.j4bpcnO8rj2xfd1d4OcaY7G8j2UfQ1qavArieQEj1xxvx-nA-gHLLjL7U0fBevS-hslJSaSz7mOibhqksdXDz3jd_RtdGjjn-pfkx3AjaDZcD8inQuYMwXVgFVymEqye6E8IIFr4r7yIN53VRzFHTXvr4K0oDhQxM6K2Cr-Xo_WiGgTa6MGks766jZxljZDsM4OQcLUJ-fDeg0zetPeCEk8HwD9YHN9BoHnk2fQxarws4iu7AV2nZgjP57CwGNix-XzT563jazH50A4AHIZzNAN9SfAR8ouau3tayc98AZf3ehi624epmKsJFe8Xup9qR04e1DPpkw7sljsasuwpgQ

# 复制到浏览器token登录处即可登录
 

3 部署keepalived+nginx(无变化)

3.1 安装keepalived+nginx(lb1/lb2节点)

bash 复制代码
root@lb1:~# apt install -y keepalived nginx

3.2 配置nginx(lb1/lb2节点)

bash 复制代码
root@lb1:~# vim /etc/nginx/nginx.conf
-----------------------------------------------
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events { 
    worker_connections 1024; 
}
stream {
    log_format main '$remote_addr [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.121.100:6443;  # master的APIServer
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
        proxy_timeout 300s;
        proxy_connect_timeout 10s;
    }
    # 代理K8s Ingress 443端口
    upstream k8s_ingress_https {
        server 192.168.121.101:443;  # K8s节点1
        server 192.168.121.102:443;  # K8s节点2
        server 192.168.121.103:443;  # K8s节点3
        least_conn;  # 最少连接负载均衡
    }
    server {
        listen 443;  
        proxy_pass k8s_ingress_https;
        proxy_timeout 300s;
        proxy_connect_timeout 10s;
    }
}

# 处理80端口HTTP请求,重定向test.com到HTTPS
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    # test.com 80端口重定向到HTTPS
    server {
        listen 80;
        server_name www.test.com;  # 仅匹配test.com域名
        return 308 https://$host$request_uri;  # 保留完整请求路径
    }
    server {
        listen 80 default_server;
        server_name _;
        return 404;
    }
}
-----------------------------------------------------------------
# 检查nginx配置文件
root@lb1:~# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

# 启动nginx设置开机自启动并检查状态
root@lb1:~# systemctl restart nginx && systemctl enable nginx && systemctl status nginx
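
A quick check that the stream proxy is actually forwarding; a minimal sketch run on lb1 (the /version endpoint of the apiserver is readable anonymously on a default kubeadm cluster):

bash 复制代码
# 6443 (apiserver) and 443 (ingress) should be listening
ss -lntp | grep -E ':(443|6443)\s'
# Should return the Kubernetes version JSON, proxied to the master apiserver
curl -k https://127.0.0.1:6443/version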

3.3 配置keepalived(lb1/lb2节点)

lb1 节点

bash 复制代码
root@lb1:~# vim /etc/keepalived/keepalived.conf
------------------------------------------------
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_nginx {
    script "/usr/bin/pgrep nginx"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.121.188/24    # VIP
    }
    track_script {
        check_nginx
    }
}
------------------------------------------------

lb2 节点

bash 复制代码
root@lb2:~# vim /etc/keepalived/keepalived.conf
------------------------------------
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_nginx {
    script "/usr/bin/pgrep nginx"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.121.188/24
    }
    track_script {
        check_nginx
    }
}
--------------------------------------------
# 启动服务设置开机自启动并检查启动状态
root@lb2:~# systemctl restart keepalived && systemctl enable keepalived && systemctl status keepalived

3.4 测试VIP是否正常漂移

关闭lb1的keepalived查看lb2的网卡信息

可以看到lb1模拟宕机后,VIP正常漂移到了lb2节点对外提供服务
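
A concrete version of that failover test; assumes the NIC is eth0 as configured in keepalived.conf:

bash 复制代码
# On lb1: VIP should be present, then simulate a failure
ip addr show eth0 | grep 192.168.121.188
systemctl stop keepalived

# On lb2: the VIP should show up within a few advert intervals
ip addr show eth0 | grep 192.168.121.188

# Restore lb1 when done
systemctl start keepalived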

4 部署harbor(重新生成证书)

上传安装包至harbor服务器

解压安装包

bash 复制代码
root@harbor:/opt# tar -xvf harbor-offline-installer-v2.4.2.tgz
root@harbor:/opt# cd harbor
root@harbor:/opt/harbor# cp harbor.yml.tmpl harbor.yml    # 复制模板文件
# 生成ssl证书


root@harbor:/opt/harbor# mkdir certs
root@harbor:/opt/harbor# cd certs/
------------------ 已废弃:下面这种只含 CN 的自签证书,后面会因缺少 SAN 字段报错,需按第 9 节的方式重新生成 ------------------
#root@harbor:/opt/harbor/certs# openssl genrsa -out ./harbor-ca.key
#root@harbor:/opt/harbor/certs# openssl req -x509 -new -nodes -key ./harbor-ca.key -subj "/CN=harbor.test.com" -days 7120 -out ./harbor-ca.crt
#root@harbor:/opt/harbor/certs# ls
#harbor-ca.crt  harbor-ca.key
------------------------------

root@harbor:/opt/harbor/certs# cd ..
root@harbor:/opt/harbor# vim harbor.yml
----------------------------------------
# 修改主机域名
hostname: harbor.test.com
# 修改私钥公钥path路径
https:
  port: 443
  certificate: /opt/harbor/certs/harbor-ca.crt
  private_key: /opt/harbor/certs/harbor-ca.key
# 修改harbor密码
harbor_admin_password: 123456

安装harbor

bash 复制代码
root@harbor:/opt/harbor# ./install.sh --with-trivy --with-chartmuseum

安装完成之后可以在浏览器输入之前设置的域名harbor.test.com访问web页面
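
Two quick checks before relying on the registry; assumes harbor.test.com resolves to 192.168.121.106 (hosts entry or DNS) and uses Harbor's v2.0 health API:

bash 复制代码
# All Harbor containers should be healthy/running
cd /opt/harbor && docker-compose ps

# The API should answer over HTTPS (-k because the certificate is self-signed)
curl -k -u admin:123456 https://harbor.test.com/api/v2.0/health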

5 部署ingress-nginx 控制器

bash 复制代码
root@master:~# mkdir -p yaml/ingress
root@master:~# cd yaml/ingress/
# 下载官方ingress-nginx:v1.2.0
root@master:~/yaml/ingress# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/baremetal/deploy.yaml
root@master:~/yaml/ingress# ls
deploy.yaml
root@master:~/yaml/ingress# mv deploy.yaml ingress-nginx-controller-daemonset.yaml
# 修改yaml文件
root@master:~/yaml/ingress# vim ingress-nginx-controller-daemonset.yaml
-------------------------------
# 注释service
#apiVersion: v1
#kind: Service
#metadata:
#  labels:
#    app.kubernetes.io/component: controller
#    app.kubernetes.io/instance: ingress-nginx
#    app.kubernetes.io/name: ingress-nginx
#    app.kubernetes.io/part-of: ingress-nginx
#    app.kubernetes.io/version: 1.2.0
#  name: ingress-nginx-controller
#  namespace: ingress-nginx
#spec:
#  ipFamilies:
#  - IPv4
#  ipFamilyPolicy: SingleStack
#  ports:
#  - appProtocol: http
#    name: http
#    port: 80
#    protocol: TCP
#    targetPort: http
#  - appProtocol: https
#    name: https
#    port: 443
#    protocol: TCP
#    targetPort: https
#  selector:
#    app.kubernetes.io/component: controller
#    app.kubernetes.io/instance: ingress-nginx
#    app.kubernetes.io/name: ingress-nginx
#  type: NodePort
# 修改资源类型为Daemonset所有节点部署,使用宿主机网络,可以通过node节点主机名+端口号访问
apiVersion: apps/v1
#kind: Deployment
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true #使用宿主机网络
      hostPID: true #使用宿主机Pid
------------------------------------
# 更新资源配置文件
root@master:~/yaml/ingress# kubectl apply -f ingress-nginx-controller-daemonset.yaml
# 查看pod启动状态
root@master1:~/yaml/ingress# kubectl get pod -n ingress-nginx -o wide -w
NAME                                   READY   STATUS              RESTARTS   AGE    IP                NODE    NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-6h4mn   0/1     Completed           0          108s   10.20.104.2       node2   <none>           <none>
ingress-nginx-admission-patch-lhpjj    0/1     Completed           1          108s   10.20.166.131     node1   <none>           <none>
ingress-nginx-controller-7xrlr         0/1     ContainerCreating   0          108s   192.168.121.103   node3   <none>           <none>
ingress-nginx-controller-dbvft         0/1     ContainerCreating   0          108s   192.168.121.102   node2   <none>           <none>
ingress-nginx-controller-vwzg9         0/1     ContainerCreating   0          108s   192.168.121.101   node1   <none>           <none>
ingress-nginx-controller-vwzg9         0/1     Running             0          2m49s   192.168.121.101   node1   <none>           <none>
ingress-nginx-controller-7xrlr         0/1     Running             0          2m49s   192.168.121.103   node3   <none>           <none>
ingress-nginx-controller-dbvft         0/1     Running             0          2m49s   192.168.121.102   node2   <none>           <none>
ingress-nginx-controller-dbvft         1/1     Running             0          3m      192.168.121.102   node2   <none>           <none>
ingress-nginx-controller-vwzg9         1/1     Running             0          3m1s    192.168.121.101   node1   <none>           <none>
ingress-nginx-controller-7xrlr         1/1     Running             0          3m1s    192.168.121.103   node3   <none>           <none>
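
Because the controller pods use hostNetwork, each worker node should now listen on 80/443 itself; a minimal check sketch run on any worker node (10254 is the controller's default healthz port):

bash 复制代码
ss -lntp | grep -E ':(80|443)\s'
curl -s http://127.0.0.1:10254/healthz && echo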

6 SSL 证书配置(自签名)

bash 复制代码
# 创建证书目录
root@master:~# mkdir -p /tmp/ssl && cd /tmp/ssl
# 生成自签证书
root@master:~# openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key \
  -out tls.crt \
  -subj "/CN=www.test.com/O=test"
# 存入K8s Secret
root@master:~# kubectl create secret tls test-tls --key tls.key --cert tls.crt
# 检查secret内容
root@master:~# kubectl describe secret test-tls
Name:         test-tls
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1164 bytes
tls.key:  1704 bytes
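
Optionally, inspect what was actually generated before wiring it into the Ingress; a minimal sketch (the modulus comparison just confirms that key and certificate belong together):

bash 复制代码
openssl x509 -in tls.crt -noout -subject -dates
diff <(openssl x509 -in tls.crt -noout -modulus) \
     <(openssl rsa  -in tls.key -noout -modulus) && echo "key matches cert"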

7 ConfigMap 配置(欢迎页 + Harbor 认证)

  1. 欢迎页cm

    bash 复制代码
    # 创建目录
    root@master:~/yaml# mkdir configmap
    root@master:~/yaml/configmap# cd configmap
    # 创建cm文件
    root@master:~/yaml/configmap# vim nginx-welcome-cm.yaml
    -------------------------------------------
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: welcome-nginx-cm
      namespace: default
    data:
      index.html: |
        <!DOCTYPE html>
        <html>
        <head><title>Welcome</title></head>
        <body><h1>welcome to test.com</h1></body>
        </html>

2. harbor认证cm

bash 复制代码
# base64 编码(注意加 -n,避免把换行符一起编码进去)
root@master:~/yaml/configmap# echo -n "admin:123456" | base64
YWRtaW46MTIzNDU2
# 创建cm文件
root@master:~/yaml/configmap# vim harbor-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: harbor-auth-cm
  namespace: default
data:
  .dockerconfigjson: |
    {
      "auths": {
        "harbor.test.com": {
          "username": "admin",
          "password": "123456",
          "auth": "YWRtaW46MTIzNDU2Cg=="
        }
      }
    }

3.应用cm并转为镜像拉取secret

bash 复制代码
root@master:~/yaml/configmap# kubectl apply -f .
# cm转secret
kubectl create secret generic harbor-registry-secret   --from-literal=.dockerconfigjson="$(kubectl get cm harbor-auth-cm -o jsonpath='{.data.\.dockerconfigjson}')"   --type=kubernetes.io/dockerconfigjson
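
It is worth checking what actually ended up in the Secret; the same Secret can also be created in one step with kubectl's docker-registry helper (shown here under the hypothetical name harbor-registry-secret-alt to avoid clobbering the existing one):

bash 复制代码
# Decode the stored dockerconfigjson
kubectl get secret harbor-registry-secret \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d; echo

# One-step alternative without the intermediate ConfigMap
kubectl create secret docker-registry harbor-registry-secret-alt \
  --docker-server=harbor.test.com --docker-username=admin --docker-password=123456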

8 部署动态供应 NFS PV(存储 nginx Log)

  1. 在harbor节点搭建NFS服务器
bash 复制代码
# 安装nfs服务
root@harbor:~# apt install -y nfs-kernel-server
# 创建NFS共享目录
root@harbor:~# mkdir -p /data/nfs/nginx-logs
root@harbor:~# chmod -R 777 /data/nfs/nginx-logs
# 配置NFS共享(编辑/etc/exports)
root@harbor:~# echo "/data/nfs/nginx-logs *(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports
# 生效配置
root@harbor:~# exportfs -r
# 验证NFS共享
root@harbor:~# showmount -e localhost
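
The k8s worker nodes also need the NFS client utilities, otherwise kubelet cannot mount the provisioned volumes; a minimal sketch run on each node:

bash 复制代码
apt install -y nfs-common
showmount -e 192.168.121.106

# Optional manual mount test
mount -t nfs 192.168.121.106:/data/nfs/nginx-logs /mnt && touch /mnt/ok && umount /mnt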

部署 NFS Provisioner

bash 复制代码
root@master:~/yaml# mkdir StorageClass
root@master:~/yaml# cd StorageClass
root@master:~/yaml/StorageClass# vim nfs-provisioner.yaml
-------------------------------------
apiVersion: v1
kind: Namespace
metadata:
  name: nfs-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner-sa
  namespace: nfs-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-role
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nfs-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nfs-provisioner-role
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner-sa
    namespace: nfs-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
  namespace: nfs-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-provisioner
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner-sa
      containers:
        - name: nfs-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          imagePullPolicy: IfNotPresent
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner  # 存储类的provisioner需匹配此值
            - name: NFS_SERVER
              value: 192.168.121.106  # NFS服务器IP(harbor节点IP)
            - name: NFS_PATH
              value: /data/nfs/nginx-logs  # NFS共享目录
          volumeMounts:
            - name: nfs-volume
              mountPath: /persistentvolumes
      volumes:
        - name: nfs-volume
          nfs:
            server: 192.168.121.106  # NFS服务器IP
            path: /data/nfs/nginx-logs
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc
provisioner: nfs-provisioner  # 匹配Deployment中的PROVISIONER_NAME
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  archiveOnDelete: "false"  # 删除PVC时不归档数据

更新资源配置文件

bash 复制代码
root@master:~/yaml/StorageClass# kubectl apply -f  nfs-provisioner.yaml
  1. 创建rwx模式的pvc
bash 复制代码
root@master:~/yaml/StorageClass# vim nginx-accesslog-nfs-pvc.yaml
---------------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-accesslog-pvc
  namespace: default
spec:
  accessModes: [ReadWriteMany]  # 支持多节点/多Pod读写
  resources: { requests: { storage: 1Gi } }
  storageClassName: nfs-sc  # NFS存储类
# 更新资源配置文件
root@master:~/yaml/StorageClass# kubectl apply -f nginx-accesslog-nfs-pvc.yaml
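
A quick check that dynamic provisioning actually worked; a minimal sketch:

bash 复制代码
# The PVC should be Bound and a matching PV created by the provisioner
kubectl get pvc nginx-accesslog-pvc
kubectl get pv
kubectl -n nfs-storage logs deploy/nfs-provisioner --tail=20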

9 部署nginx应用 (deployment+svc+HPA)

  1. 在k8s集群各个节点建立存放ca证书的目录
bash 复制代码
root@master:~# mkdir -p /etc/containerd/certs.d/harbor.test.com
root@node1:~# mkdir -p /etc/containerd/certs.d/harbor.test.com
root@node2:~# mkdir -p /etc/containerd/certs.d/harbor.test.com
root@node3:~# mkdir -p /etc/containerd/certs.d/harbor.test.com
  2. 分发harbor ca证书到k8s集群各个节点
bash 复制代码
root@harbor:/opt/harbor/certs# scp harbor-ca.crt master:/etc/containerd/certs.d/harbor.test.com/ca.crt
root@harbor:/opt/harbor/certs# scp harbor-ca.crt node1:/etc/containerd/certs.d/harbor.test.com/ca.crt
root@harbor:/opt/harbor/certs# scp harbor-ca.crt node2:/etc/containerd/certs.d/harbor.test.com/ca.crt
root@harbor:/opt/harbor/certs# scp harbor-ca.crt node3:/etc/containerd/certs.d/harbor.test.com/ca.crt
  3. 配置文件配置harbor.test.com镜像仓库
bash 复制代码
root@master:/etc/containerd# vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]
         "harbor.test.com" = {auth = "YWRtaW46MTIzNDU2"}
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.test.com".tls]
          ca_file = "/etc/containerd/certs.d/harbor.test.com/ca.crt"    # 证书路径(与上面 scp 过来的文件名 ca.crt 保持一致)

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.test.com"]
          endpoint = ["https://harbor.test.com"]    # 指定私有仓库 harbor.test.com 的访问地址
          tls = false
          
 
  4. 在certs.d目录下创建auth.json
bash 复制代码
root@master:/etc/containerd# vim /etc/containerd/certs.d/harbor.test.com/auth.json
{
  "auths": {
    "harbor.test.com": {
      "username": "admin",
      "password": "123456",
      "auth": "YWRtaW46MTIzNDU2"
    }
  }
}
# 重启生效
systemctl restart containerd
  5. containerd拉取nginx镜像并上传到harbor镜像仓库

遇到报错:

报错1

bash 复制代码
root@master1:/etc/containerd# crictl pull harbor.test.com/library/nginx:latest
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService
# 原因:crictl 走 CRI v1 的 ImageService 接口,但 containerd 的 CRI 插件未启用/不可用,导致 RPC 调用返回 "未实现"(Unimplemented)
# 解决方案
root@master:/etc/containerd# vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri"]
disabled = false      # 启用 containerd CRI 插件的镜像仓库管理功能
config_path = "/etc/containerd/certs.d"     # 指定 containerd 读取镜像仓库证书路径 

[plugins."io.containerd.grpc.v1.cri".registry]
      # config_path = ""    一定要注释这个太坑了,写在上面

      [plugins."io.containerd.grpc.v1.cri".registry.auths]
         "harbor.test.com" = {auth = "YWRtaW46MTIzNDU2"}
      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor.test.com".tls]
          ca_file = "/etc/containerd/certs.d/harbor.test.com/ca.crt"    # 证书路径(与 scp 过来的文件名 ca.crt 保持一致)

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor.test.com"]
          endpoint = ["https://harbor.test.com"]    # 指定私有仓库 harbor.test.com 的访问地址
          tls = false

报错2

bash 复制代码
# Harbor 的 TLS 证书仅使用了传统的Common Name(CN)字段,而新版 TLS 库已弃用仅靠 CN 验证证书的方式,要求证书必须包含SAN(Subject Alternative Name)字段
root@master1:/etc/containerd/certs.d/harbor.test.com# crictl pull harbor.test.com/library/nginx:latest
E1202 15:28:37.414817   74526 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unest: Head \"https://harbor.test.com/v2/library/nginx/manifests/latest\": tls: failed to verify certificate: x509: certificate relies on le
FATA[0000] pulling image: failed to pull and unpack image "harbor.test.com/library/nginx:latest": failed to resolve reference "harbor.testrtificate: x509: certificate relies on legacy Common Name field, use SANs instead
# 解决方案 harbor重新生成包含san字段和tls证书
# 1. 生成ca密钥
root@harbor:/opt/harbor/certs# openssl genrsa -out harbor-ca.key 4096

# 2. 创建SAN配置文件(ca-san.cnf)
root@harbor:/opt/harbor/certs# vim ca-san.cnf
------------------------
[req]
prompt = no
distinguished_name = req_distinguished_name
req_extensions = v3_req

[req_distinguished_name]
CN = harbor.test.com

[v3_req]
basicConstraints = CA:TRUE,pathlen:1
keyUsage = critical, digitalSignature, cRLSign, keyCertSign
subjectAltName = @alt_names

[alt_names]
DNS.1 = harbor.test.com
----------------------
# 用 CA 根证书签发Harbor 服务器证书
# 1. 生成服务器私钥
root@harbor:/opt/harbor/certs# openssl genrsa -out harbor-server.key 4096
# 2. 创建服务器证书请求(CSR)的配置文件
root@harbor:/opt/harbor/certs# vim server-san.cnf
[req]
prompt = no
distinguished_name = req_distinguished_name
req_extensions = v3_req

[req_distinguished_name]
CN = harbor.test.com

[v3_req]
basicConstraints = CA:FALSE
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth, clientAuth
subjectAltName = @alt_names

[alt_names]
DNS.1 = harbor.test.com  


# 3. 生成CSR
root@harbor:/opt/harbor/certs#openssl req -new -key harbor-server.key -config server-san.cnf -out harbor-server.csr

# 4. 用CA根证书签发服务器证书
root@harbor:/opt/harbor/certs# openssl x509 -req -in harbor-server.csr -CA harbor-ca.crt -CAkey harbor-ca.key -CAcreateserial \
  -out harbor-server.crt -days 7120 -extensions v3_req -extfile server-san.cnf

# 配置 Harbor 使用新的服务器证书
root@harbor:/opt/harbor/certs# cd /opt/harbor
root@harbor:/opt/harbor# vi harbor.yml
----------------------
 # 服务器证书路径(对应上面生成的harbor-server.crt)
  certificate: /opt/harbor/certs/harbor-server.crt
  # 服务器私钥路径(对应上面生成的harbor-server.key)
  private_key: /opt/harbor/certs/harbor-server.key
  -----------------------------------
# 3:替换harbor证书
# 停止harbor
root@harbor:/opt/harbor# docker-compose down -v
root@harbor:/opt/harbor# ./prepare
root@harbor:/opt/harbor# docker-compose up -d
# 替换证书
root@harbor:/opt/harbor# cp harbor.test.com.key harbor-ca.key

# 重新分发到k8s集群
root@harbor:/opt/harbor/certs# scp harbor-ca.crt master:/etc/containerd/certs.d/harbor.test.com/ca.crt
root@harbor:/opt/harbor/certs# scp harbor-ca.crt node1:/etc/containerd/certs.d/harbor.test.com/ca.crt
root@harbor:/opt/harbor/certs# scp harbor-ca.crt node2:/etc/containerd/certs.d/harbor.test.com/ca.crt
root@harbor:/opt/harbor/certs# scp harbor-ca.crt node3:/etc/containerd/certs.d/harbor.test.com/ca.crt

# 重启containerd
systemctl restart containerd

测试拉取harbor镜像

bash 复制代码
root@master1:/etc/containerd/certs.d/harbor.test.com# crictl pull harbor.test.com/jenkins/jenkins:v1
Image is up to date for sha256:aedd7093d2e83eedd93079094c9e3d7401b723e573a67d67d801881a961ab6a0

完美解决
bash 复制代码
root@master1:~/yaml/configmap# ctr images pull docker.io/library/nginx:latest
docker.io/library/nginx:latest:                                                   resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:de57a609c9d5148f10b38f5c920d276e9e38b2856fe16c0aae1450613dc12051:    done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5:   done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:0e4bc2bd6656e6e004e3c749af70e5650bac2258243eb0949dea51cb8b7863db:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:b5feb73171bf1bcf29fdd1ba642c3d30cdf4c6329b19d89be14d209d778c89ba:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:108ab82928207dabd9abfddbc960dd842364037563fc560b8f6304e4a91454fe:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:53d743880af45adf9f141eec1fe3a413087e528075a5d8884d6215ddfdd2b806:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:77fa2eb0631772679b0e48eca04f4906fba5fe94377e01618873a4a1171107ce:    done           |++++++++++++++++++++++++++++++++++++++| 
layer-sha256:192e2451f8751fb74549c932e26a9bcfd7b669fe2f5bd8381ea5ac65f09b256b:    done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 14.6s                                                                    total:  56.6 M (3.9 MiB/s)                                       
unpacking linux/amd64 sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42...
done: 1.626643703s	

root@master1:~/yaml/configmap# ctr images ls
REF                            TYPE                                    DIGEST                                                                  SIZE     PLATFORMS                                                                                              LABELS 
docker.io/library/nginx:latest application/vnd.oci.image.index.v1+json sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42 57.0 MiB linux/386,linux/amd64,linux/arm/v5,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/riscv64,linux/s390x - 
# 打标签
root@master1:~/yaml/configmap# ctr images tag docker.io/library/nginx:latest harbor.test.com/library/nginx:test
harbor.test.com/library/nginx:test

root@master:~# ctr images push harbor.test.com/library/nginx:test    # 用 ctr 一直 push 不上去,反复报错

ctr: content digest sha256:f435df576ad8d14dc479a4c624c7f4e762d5d41a6a073bba3455392c2584161d: not found

# 装一个nerdctl就解决了,一下push成功
# 下载对应版本
curl -L https://github.com/containerd/nerdctl/releases/download/v1.7.0/nerdctl-1.7.0-linux-amd64.tar.gz -o nerdctl.tar.gz

# 解压到系统路径
sudo tar Cxzvf /usr/local/bin nerdctl.tar.gz nerdctl

# 验证安装
nerdctl version

root@master1:~# nerdctl login harbor.test.com -u admin -p 123456
WARN[0000] WARNING! Using --password via the CLI is insecure. Use --password-stdin. 
WARNING: Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

root@master1:~# nerdctl push harbor.test.com/library/nginx:test
INFO[0000] pushing as a reduced-platform image (application/vnd.oci.image.index.v1+json, sha256:1e271dff21268b9bb44a8334b809dccc7463af292140cb560908c2cea7e7cc2d) 
index-sha256:1e271dff21268b9bb44a8334b809dccc7463af292140cb560908c2cea7e7cc2d:    done           |++++++++++++++++++++++++++++++++++++++| 
manifest-sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541: done           |++++++++++++++++++++++++++++++++++++++| 
config-sha256:60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5:   done           |++++++++++++++++++++++++++++++++++++++| 
elapsed: 0.5 s                   
  6. 创建deployment文件
bash 复制代码
root@master:~/yaml# mkdir deploy
root@master:~/yaml# cd deploy
root@master:~/yaml/deploy# vim nginx-deployment.yaml
-------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 3  
  selector:
    matchLabels:
      app: nginx
  # 滚动更新配置
  strategy:
    type: RollingUpdate  
    rollingUpdate:
      maxSurge: 1        
      maxUnavailable: 0  
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
      - name: harbor-registry-secret  # 拉取Harbor私有镜像的密钥
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:  # 软亲和:优先分散,不匹配也能调度
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values: [nginx]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx
        image: harbor.test.com/library/nginx:latest
        ports:
        - containerPort: 80
        # 挂载ConfigMap
        volumeMounts:
        - name: welcome-page
          mountPath: /usr/share/nginx/html/index.html 
          subPath: index.html
        - name: accesslog
          mountPath: /var/log/nginx
        # 资源限制
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 200m
            memory: 256Mi
        # 健康检查
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5
      volumes:
      - name: welcome-page
        configMap:
          name: welcome-nginx-cm
          items:
          - key: index.html
            path: index.html
      - name: accesslog
        persistentVolumeClaim:
          claimName: nginx-accesslog-pvc

-----------------------------------------
  7. 创建service文件
bash 复制代码
root@master:~/yaml/deploy# vim nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
spec:
  selector: { app: nginx }
  ports: [{ port: 80, targetPort: 80 }]
  type: ClusterIP

新增HPA

  1. 创建hpa文件
bash 复制代码
# 下载官方部署文件
root@master1:~/yaml/deploy# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# 修改Metrics Server部署,跳过kubelet TLS验证
root@master1:~/yaml/hpa# vim components.yaml

spec:
      containers:
      - args:
        - --kubelet-insecure-tls     # 添加此行
        - --cert-dir=/tmp
        - --secure-port=10250
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s

root@master1:~/yaml/deploy# vim nginx-hpa.yaml
------------------------
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment       
    name: nginx-deployment 
  minReplicas: 2        
  maxReplicas: 10    
  metrics:
  - type: Resource         
    resource:
      name: cpu           
      target:
        type: Utilization  # 基于资源使用率
        averageUtilization: 50  # 目标 CPU 平均使用率:50%
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 30  # 扩容前稳定观察时间(避免抖动)
      policies:
      - type: Percent                 # 按百分比扩容
        value: 50                     # 每次扩容50%的当前副本数
        periodSeconds: 60             # 扩容间隔(60秒内仅触发一次)
    scaleDown:
      stabilizationWindowSeconds: 600 # 缩容前稳定观察时间(默认5分钟,避免缩容过快)
      policies:
      - type: Percent
        value: 30
        periodSeconds: 60

--------------------------
  1. 更新资源配置文件
bash 复制代码
root@master:~/yaml/deploy# kubectl apply -f .
# 检查pod启动状态
root@master:~/yaml/deploy#  kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-56fcf7c4c9-4tcpg   1/1     Running   0          1m   10.20.104.10    node2   <none>           <none>
nginx-deployment-56fcf7c4c9-6sjf9   1/1     Running   0          1m   10.20.135.14    node3   <none>           <none>
nginx-deployment-56fcf7c4c9-n6zjl   1/1     Running   0          1m   10.20.166.144   node1   <none>           <none>
# 查看hpa状态
root@master1:~/yaml/hpa# kubectl get hpa nginx-hpa
NAME        REFERENCE                     TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
nginx-hpa   Deployment/nginx-deployment   cpu: 0%/50%   2         10        3          32s
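
To see the HPA actually scale, the load-generator pattern from the Kubernetes HPA walkthrough can be pointed at the Service; a minimal sketch (busybox:1.36 is just an assumed small image):

bash 复制代码
# Terminal 1: hammer the Service
kubectl run load-generator --rm -it --image=busybox:1.36 --restart=Never \
  -- /bin/sh -c "while true; do wget -q -O- http://nginx-service >/dev/null; done"

# Terminal 2: watch replicas grow past the 50% CPU target
kubectl get hpa nginx-hpa -w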

10 ingress规则(小改 版本兼容性问题)

创建ingress文件

bash 复制代码
root@master:~/yaml# cd ingress
root@master:~/yaml/ingress# vim ingress-test.yaml
----------------------------------------
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"  # 强制HTTP→HTTPS
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"  # 增强HTTPS重定向
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"  # 后端连接超时
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"     # 后端读超时
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"     # 后端协议
spec:
  ingressClassName: nginx
  tls:  # 关联SSL Secret
  - hosts: [www.test.com]
    secretName: test-tls
  rules:
  - host: www.test.com
    http:
      paths:
      - pathType: Prefix
        backend:
          path: "/"
          service:
            name: nginx-service
            port:
              number: 80

-----------------------------
# 更新资源配置文件
root@master:~/yaml/ingress# kubectl apply -f  ingress-test.yaml

# 遇到了报错 Error from server (BadRequest): error when creating "ingress-test.yaml": Ingress in version "v1" cannot be handled as a Ingress: strict decoding error: unknown field "spec.rules[0].http.paths[0].backend.path"
# Kubernetes Ingress v1 版本中已移除 spec.rules[0].http.paths[0].backend.path 字段,在 YAML 中使用了这个不存在的字段,导致 API Server 无法解析

# 修改yaml文件
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"        # 强制HTTP→HTTPS
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"  # 增强HTTPS重定向(优先级更高)
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"  # 后端连接超时(秒)
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"     # 后端读超时(秒)
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"     # 后端协议(若后端是HTTPS则改HTTPS)
spec:
  ingressClassName: nginx  # 指定Ingress Controller(需确保集群中存在nginx类的IngressClass)
  tls:                     # SSL证书配置
  - hosts:
    - www.test.com
    secretName: test-tls   
  rules:
  - host: www.test.com
    http:
      paths:
      - path: /            # 新增:匹配所有路径(与pathType: Prefix配合)
        pathType: Prefix   
        backend:
          service:         # 移除了无效的backend.path字段
            name: nginx-service  # 目标Service名称
            port:
              number: 80         # 目标Service端口
# 验证ingress
root@master:~/yaml/ingress# kubectl get ingress
NAME           CLASS   HOSTS          ADDRESS                                           PORTS     AGE
test-ingress   nginx   www.test.com   192.168.121.101,192.168.121.102,192.168.121.103   80, 443   24m

验证https://www.test.com

bash 复制代码
root@master1:~/yaml/ingress# curl -k https://www.test.com
<!DOCTYPE html>
<html>
<head><title>Welcome</title></head>
<body><h1>welcome to test.com</h1></body>
</html>
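
The curl above relies on www.test.com already resolving to the VIP. To exercise the full chain (VIP 192.168.121.188 → lb nginx → ingress-nginx → Service) from a host without a hosts entry, the name can be pinned per request; a minimal sketch:

bash 复制代码
curl -k --resolve www.test.com:443:192.168.121.188 https://www.test.com
# Plain HTTP should be answered with a redirect to HTTPS (308 from the lb)
curl -s -o /dev/null -w '%{http_code}\n' \
  --resolve www.test.com:80:192.168.121.188 http://www.test.com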

11 Prometheus + Grafana 监控(无变化)

11.1 部署node-exporter

11.1.1 前置准备(如 harbor 仓库里已有这些镜像,可跳过本步)
bash 复制代码
# 创建监控命名空间
root@master:~# kubectl create ns monitoring
# docker拉取node-exporter镜像并上传harbor镜像仓库减少拉取时间
root@master:~# docker pull prom/node-exporter:v1.8.1
root@master:~# docker tag prom/node-exporter:v1.8.1 harbor.test.com/monitoring/node-exporter:v1.8.1 
root@master:~# docker push harbor.test.com/monitoring/node-exporter:v1.8.1 

# docker拉取prom/prometheus镜像并上传harbor镜像仓库减少拉取时间
root@master:~# docker pull prom/prometheus:v2.53.1
root@master:~# docker tag prom/prometheus:v2.53.1 harbor.test.com/monitoring/prometheus:v2.53.1
root@master:~# docker push harbor.test.com/monitoring/prometheus:v2.53.1

# docker拉取grafana/grafana镜像并上传harbor镜像仓库减少拉取时间
root@master:~# docker pull grafana/grafana:11.2.0
root@master:~# docker tag grafana/grafana:11.2.0 harbor.test.com/monitoring/grafana:11.2.0
root@master:~# docker push harbor.test.com/monitoring/grafana:11.2.0

# docker拉取blackbox-exporter镜像并上传harbor镜像仓库减少拉取时间
root@master:~# docker pull prom/blackbox-exporter:v0.24.0
root@master:~# docker tag prom/blackbox-exporter:v0.24.0 harbor.test.com/monitoring/blackbox-exporter:v0.24.0
root@master:~# docker push harbor.test.com/monitoring/blackbox-exporter:v0.24.0
11.1.2 创建yaml并部署node-exporter
bash 复制代码
root@master:~/yaml# mkdir monitoring
root@master:~/yaml# cd monitoring
root@master:~/yaml/monitoring# vim node-exporter.yaml
-----------------------------
apiVersion: apps/v1
kind: DaemonSet   
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    app: node-exporter
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/master"  # 对应 Master 污点的 Key
        operator: "Exists"                            # 只要 Key 存在即可(无需匹配 Value)
        effect: "NoSchedule"                          # 对应 Master 污点的 Effect
      hostNetwork: true  # 访问主机网络
      hostPID: true
      imagePullSecrets: [{ name: harbor-registry-secret }]
      containers:
      - name: node-exporter
        image: harbor.test.com/monitoring/node-exporter:v1.8.1 
        args:
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($|/)
        securityContext:
          privileged: true
        volumeMounts:
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /
---
# NodeExporter Service(供Prometheus抓取)
apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    app: node-exporter
spec:
  selector:
    app: node-exporter
  ports:
  - name: metrics
    port: 9100
    targetPort: 9100
  type: ClusterIP

# 更新配置文件
root@master:~/yaml/monitoring# kubectl apply -f  node-exporter.yaml
# 验证是否所有节点都运行了node-exporter
root@master:~/yaml/monitoring# kubectl get pods -n monitoring -l app=node-exporter -o wide
NAME                  READY   STATUS    RESTARTS   AGE    IP                NODE     NOMINATED NODE   READINESS GATES
node-exporter-2lm7j   1/1     Running   0          5m   192.168.121.103   node3    <none>           <none>
node-exporter-cvc2k   1/1     Running   0          5m   192.168.121.101   node1    <none>           <none>
node-exporter-lxf86   1/1     Running   0          5m   192.168.121.100   master   <none>           <none>
node-exporter-q2vtd   1/1     Running   0          5m   192.168.121.102   node2    <none>           <none>
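
Since the DaemonSet uses hostNetwork, every cluster node answers on :9100 directly; the lb/harbor hosts listed in the plan need node-exporter installed on the OS separately. A minimal spot check:

bash 复制代码
curl -s http://192.168.121.100:9100/metrics | head -n 5
curl -s http://192.168.121.106:9100/metrics | head -n 5   # only works once node-exporter runs on the harbor host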

11.2 部署 Prometheus Server

11.2.1 配置 Prometheus RBAC 权限
bash 复制代码
root@master:~/yaml/monitoring# vim prometheus-rbac.yaml
-----------------------------
# 允许Prometheus访问K8s资源和指标
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics", "/metrics/cadvisor"]
  verbs: ["get"]
---
# 绑定集群角色到monitoring命名空间的default账户
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: monitoring
-------------------------------------
11.2.2 配置 Prometheus 抓取规则
bash 复制代码
root@master:~/yaml/monitoring# vim prometheus-config.yaml
------------------------------------------
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s  
      evaluation_interval: 15s
    scrape_configs:
      # harbor节点
      - job_name: 'harbor-node-exporter'   
        static_configs:
        - targets: ['192.168.121.106:9100']
        
      # lb主备节点
      - job_name: 'lb-node-exporter'
        static_configs:
        - targets: ['192.168.121.104:9100','192.168.121.105:9100']
        
      # 抓取Prometheus自身指标
      - job_name: 'prometheus'
        static_configs:
        - targets: ['localhost:9090']
        
      # 抓取k8s集群节点NodeExporter指标
      - job_name: 'k8s-node-exporter'
        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names: ['monitoring']
        relabel_configs:
        - source_labels: [__meta_kubernetes_service_label_app]
          regex: node-exporter
          action: keep
        - source_labels: [__meta_kubernetes_endpoint_port_name]
          regex: metrics
          action: keep
          
      # 抓取Blackbox Exporter(页面监控)指标
      - job_name: 'blackbox-exporter'
        metrics_path: /probe
        params:
          module: [http_2xx]  # 检测HTTP 200状态
        kubernetes_sd_configs:
        - role: endpoints
          namespaces:
            names: ['monitoring']
        relabel_configs:
        - source_labels: [__meta_kubernetes_service_label_app]
          regex: blackbox-exporter
          action: keep
        - source_labels: [__address__]
          target_label: __param_target
        - source_labels: [__param_target]
          target_label: instance
        - target_label: __address__
          replacement: blackbox-exporter.monitoring.svc:9115  # Blackbox Service地址
        - source_labels: [instance]
          regex: (.*)
          target_label: target
          replacement: ${1}
          
      # 抓取K8s集群组件指标(APIServer)
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
        - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          regex: default;kubernetes;https
          action: keep
11.2.3 部署 Prometheus Deployment + Service
bash 复制代码
root@master:~/yaml/monitoring# vim prometheus-deployment.yaml
--------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus
  namespace: monitoring
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      imagePullSecrets: [{ name: harbor-registry-secret }]
      containers:
      - name: prometheus
        image: harbor.test.com/monitoring/prometheus:v2.53.1 
        args:
        - --config.file=/etc/prometheus/prometheus.yml
        - --storage.tsdb.path=/prometheus
        - --web.console.libraries=/usr/share/prometheus/console_libraries
        - --web.console.templates=/usr/share/prometheus/consoles
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prometheus-config
          mountPath: /etc/prometheus
        - name: prometheus-storage
          mountPath: /prometheus
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
          requests:
            cpu: 500m
            memory: 512Mi
      volumes:
      - name: prometheus-config
        configMap:
          name: prometheus-config
      - name: prometheus-storage
        emptyDir: {}  
---
# Prometheus Service
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  namespace: monitoring
spec:
  selector:
    app: prometheus
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30090  # 固定NodePort端口
  type: NodePort
11.2.4 应用prometheus所有配置
bash 复制代码
root@master:~/yaml/monitoring# kubectl apply -f .
# 验证Prometheus Pod运行状态
root@master:~/yaml/monitoring# kubectl get pods -n monitoring -l app=prometheus
NAME                          READY   STATUS    RESTARTS   AGE
prometheus-8469769d7c-vrq9n   1/1     Running   0          3h6m

访问验证192.168.121.101:30090
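
Target health can also be checked from the command line through the same NodePort; a minimal sketch that only uses grep, so jq is not required:

bash 复制代码
curl -s http://192.168.121.101:30090/api/v1/targets \
  | grep -o '"health":"[a-z]*"' | sort | uniq -c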

11.3 部署 Grafana

11.3.1 部署 Grafana Deployment + Service
bash 复制代码
root@master:~/yaml/monitoring# vim grafana-deployment.yaml
------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
  labels:
    app: grafana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      imagePullSecrets: [{ name: harbor-registry-secret }]
      containers:
      - name: grafana
        image: harbor.test.com/monitoring/grafana:11.2.0
        ports:
        - containerPort: 3000
        env:
        - name: GF_SECURITY_ADMIN_PASSWORD
          value: "admin123"  # Grafana管理员密码
        - name: GF_USERS_ALLOW_SIGN_UP
          value: "false"  
        volumeMounts:
        - name: grafana-storage
          mountPath: /var/lib/grafana
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 200m
            memory: 256Mi
      volumes:
      - name: grafana-storage
        emptyDir: {} 
---
# Grafana Service(NodePort暴露)
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  selector:
    app: grafana
  ports:
  - port: 3000
    targetPort: 3000
    nodePort: 30030  # 固定NodePort端口
  type: NodePort
-----------------------------------
11.3.2 Apply the Grafana configuration
bash 复制代码
root@master:~/yaml/monitoring# kubectl apply -f grafana-deployment.yaml
# 验证Grafana Pod运行
root@master:~/yaml/monitoring# kubectl get pods -n monitoring -l app=grafana
NAME                       READY   STATUS    RESTARTS   AGE
grafana-844d4f8bdb-58wsc   1/1     Running   0          3h18m

Open http://192.168.121.101:30030 to verify the Grafana web UI (a health-endpoint check is sketched below).
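As a quick sanity check, Grafana exposes an unauthenticated health endpoint; assuming curl on the host:

bash 复制代码
# "database": "ok" in the JSON response means Grafana started correctly
curl -s http://192.168.121.101:30030/api/health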

11.3.3 Configure the Grafana data source
  1. After logging in to Grafana, open Connections -> Data sources -> Add new data source in the left-hand menu;
  2. Choose Prometheus and set the URL to http://prometheus.monitoring.svc:9090 (the in-cluster Service address);

  3. Click Save & test; the message "Successfully queried the Prometheus API" means the data source works (an equivalent API call is sketched right after this list).
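The same data source can also be created through the Grafana HTTP API instead of the UI. A sketch, assuming the admin password admin123 set in the Deployment above:

bash 复制代码
# Create a default Prometheus data source pointing at the in-cluster Service
curl -s -u admin:admin123 -X POST http://192.168.121.101:30030/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus.monitoring.svc:9090","access":"proxy","isDefault":true}'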

11.3.4 Import Grafana dashboards
  1. Click Dashboards -> New -> Import in the left-hand menu;
  2. Enter a dashboard ID and click Load:
  • Node status monitoring: 1860 (Node Exporter Full; node CPU / memory / disk);
  • K8s cluster monitoring: 7249 (Kubernetes Cluster Monitoring; cluster components);
  • Page availability monitoring: 9965 (Blackbox Exporter Dashboard; becomes useful once Blackbox is deployed below);
11.4 Deploy Blackbox Exporter (page availability monitoring)

It checks whether https://www.test.com is alive (returns HTTP 200) and exposes the probe metrics for Prometheus to scrape:

bash 复制代码
root@master:~/yaml/monitoring# vim blackbox-exporter.yaml
--------------------------------
# Blackbox配置:检测HTTP 200状态,跳过自签SSL证书验证
apiVersion: v1
kind: ConfigMap
metadata:
  name: blackbox-config
  namespace: monitoring
data:
  blackbox.yml: |
    modules:
      http_2xx:
        prober: http
        timeout: 5s
        http:
          valid_status_codes: [200]
          tls_config:
            insecure_skip_verify: true  # 适配自签SSL证书
          follow_redirects: true  
---
# Blackbox Deployment(核心:添加启动参数指定配置文件)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blackbox-exporter
  namespace: monitoring
  labels:
    app: blackbox-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blackbox-exporter
  template:
    metadata:
      labels:
        app: blackbox-exporter
    spec:
      containers:
      - name: blackbox-exporter
        image: harbor.test.com/monitoring/blackbox-exporter:v0.24.0
        args:  # 关键:显式指定配置文件路径
        - --config.file=/etc/blackbox_exporter/blackbox.yml
        ports:
        - containerPort: 9115
        volumeMounts:
        - name: blackbox-config
          mountPath: /etc/blackbox_exporter  # 挂载ConfigMap到该目录
          readOnly: true 
        securityContext:
          runAsUser: 0
          runAsGroup: 0
      volumes:
      - name: blackbox-config
        configMap:
          name: blackbox-config
          # 显式指定挂载的文件(确保文件名和路径匹配)
          items:
          - key: blackbox.yml
            path: blackbox.yml
---
# Blackbox Service
apiVersion: v1
kind: Service
metadata:
  name: blackbox-exporter
  namespace: monitoring
  labels:
    app: blackbox-exporter
spec:
  selector:
    app: blackbox-exporter
  ports:
  - port: 9115
    targetPort: 9115
  type: ClusterIP
---
# 页面宕机告警规则(PrometheusRule)
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-alert-rules
  namespace: monitoring
data:
  alert-rules.yml: |
    groups:
    - name: test-page-alerts
      rules:
      - alert: TestPageDown
        expr: probe_success{target="https://www.test.com"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "www.test.com 页面不可用"
          description: "页面https://www.test.com已连续1分钟返回非200状态码,请检查服务!"

Apply the configuration

bash 复制代码
root@master:~/yaml/monitoring# kubectl apply -f  blackbox-exporter.yaml
# 检查是否正常启动
root@master:~/yaml/monitoring# kubectl get pod -n monitoring
NAME                                READY   STATUS    RESTARTS   AGE
blackbox-exporter-bfbd654b9-qdnfz   1/1     Running   0          1m

Verify the probe (see the check below).
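To verify the probe end to end without waiting for Prometheus, port-forward the ClusterIP Service and call the /probe endpoint directly, then confirm the probe_success series appears in Prometheus. A sketch, assuming kubectl and curl on the master node and that the blackbox scrape job is already present in prometheus.yml:

bash 复制代码
# Forward the Blackbox Service locally and fire one probe against the target page
kubectl -n monitoring port-forward svc/blackbox-exporter 9115:9115 &
curl -s 'http://127.0.0.1:9115/probe?target=https://www.test.com&module=http_2xx' | grep probe_success
# probe_success 1  -> the page answered with HTTP 200
kill %1
# Confirm Prometheus is scraping the probe result
curl -s 'http://192.168.121.101:30090/api/v1/query?query=probe_success'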

12 Deploy Jenkins for CI/CD (unchanged; reuse the Jenkins data from the earlier deployment)

The NFS server still holds the Jenkins persistent data from the earlier deployment; copy the original data over the persistence directory of the newly created Pod.

12.1 Prerequisites

  1. Create the jenkins namespace and a directory for the YAML files
bash 复制代码
root@master:~/yaml# kubectl create namespace jenkins
root@master:~/yaml# mkdir jenkins
  2. Create the shared directory on the NFS server (a mount test from a cluster node follows this list)

    bash 复制代码
    root@harbor:~# mkdir -p /data/nfs/jenkins-data
    root@harbor:~# chmod -R 777 /data/nfs/jenkins-data
    root@harbor:~# echo "/data/nfs/jenkins-data *(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports
    root@harbor:~# exportfs -r
    root@harbor:~# showmount -e localhost
  3. Pull the jenkins:lts image and push it to the Harbor registry (this step is skipped here since the image already exists in Harbor)

bash 复制代码
root@master:~/yaml# docker pull jenkins/jenkins:lts    
root@master:~/yaml# docker tag jenkins/jenkins:lts harbor.test.com/jenkins/jenkins:lts
root@master:~/yaml# docker push harbor.test.com/jenkins/jenkins:lts
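Before continuing, the NFS export created in step 2 can be verified from any cluster node; a minimal sketch, assuming the nfs-common client package is installed on that node:

bash 复制代码
apt install -y nfs-common
# The export list should contain /data/nfs/jenkins-data
showmount -e 192.168.121.106
# Mount it, confirm read/write access, then clean up
mount -t nfs 192.168.121.106:/data/nfs/jenkins-data /mnt
touch /mnt/.write-test && rm /mnt/.write-test
umount /mnt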

Problems hit later when running builds:

  1. The Jenkins container has no docker binary, so image builds fail;

  2. After the image builds successfully, pushing it to the Harbor registry fails: no IP/hostname mapping was configured, so docker login cannot resolve the registry domain to an IP (fixed by adding hostAliases to the Deployment YAML), and the Harbor CA certificate was missing;

  3. The Jenkins container has no kubectl, so applying the updated resource manifests fails.

    Solution: build an incremental image on top of the harbor.test.com/jenkins/jenkins:lts base image so the extra tooling and certificates are baked in persistently (a quick verification of the resulting image follows the build steps).

    bash 复制代码
    root@master:~/yaml/jenkins# mkdir dockerfile
    root@master:~/yaml/jenkins# cd dockerfile
    # 准备下载好的docker安装包
    root@master:~/yaml/jenkins/dockerfile# curl -L --retry 3 --connect-timeout 10  https://mirrors.aliyun.com/docker-ce/linux/static/stable/x86_64/docker-20.10.24.tgz -o docker.tgz
    root@master:~/yaml/jenkins/dockerfile# tar xzf docker.tgz
    # 复制kubectl二进制文件
    root@master:~/yaml/jenkins/dockerfile# cp /usr/bin/kubectl /root/yaml/jenkins/dockerfile/
    # 复制harbor的ca证书
    root@master:~/yaml/jenkins/dockerfile# cp /etc/docker/certs.d/harbor.test.com/harbor-ca.crt /root/yaml/jenkins/dockerfile/
    root@master:~/yaml/jenkins/dockerfile# ls
    kubectl  docker  harbor-ca.crt
    # 编写dockerfile
    root@master:~/yaml/jenkins/dockerfile# vim Dockerfile
    ----------------------------
    FROM harbor.test.com/jenkins/jenkins:lts
    USER root
    WORKDIR /usr/local/src
    RUN echo "nameserver 223.5.5.5" > /etc/resolv.conf \
        && echo "nameserver 8.8.8.8" >> /etc/resolv.conf
    # 创建 harbor 证书目录并复制证书
    RUN mkdir -p /etc/docker/certs.d/harbor.test.com
    COPY harbor-ca.crt /etc/docker/certs.d/harbor.test.com/
    COPY docker /usr/local/src/
    COPY docker/docker /usr/bin/
    RUN chmod +x /usr/bin/docker
    # Use ENV instead of RUN export: export in a RUN step does not persist into the final image
    ENV DOCKER_API_VERSION=1.40
    COPY kubectl /usr/local/bin/
    RUN  chmod +x /usr/local/bin/kubectl
    USER jenkins
    ---------------------------------
    # 执行构建
    root@master:~/yaml/jenkins/dockerfile# docker build -t jenkins:v1 .
    # 上传至harbor镜像仓库
    root@master:~/yaml/jenkins/dockerfile# docker tag jenkins:v1 harbor.test.com/jenkins/jenkins:v1
    root@master:~/yaml/jenkins/dockerfile# docker push harbor.test.com/jenkins/jenkins:v1
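A quick spot check that the rebuilt image actually contains the extra tooling, run against the locally built image:

bash 复制代码
# Override the entrypoint to run the bundled tools once and exit
docker run --rm --entrypoint kubectl harbor.test.com/jenkins/jenkins:v1 version --client
docker run --rm --entrypoint docker  harbor.test.com/jenkins/jenkins:v1 --version
docker run --rm --entrypoint ls      harbor.test.com/jenkins/jenkins:v1 /etc/docker/certs.d/harbor.test.com/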

12.2 Configure Jenkins RBAC permissions

bash 复制代码
root@master:~/yaml/jenkins# vim jenkins-rbac.yaml
--------------------------
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: jenkins  
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: ["apps"]  
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] 
  - apiGroups: [""] 
    resources: ["configmaps"] 
    verbs: ["get", "list", "create", "update", "patch", "delete"] 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: jenkins
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

-------------------------------------
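Once these RBAC objects are applied (section 12.4), kubectl auth can-i can confirm the ServiceAccount has the permissions the provisioner and the pipeline rely on; a minimal sketch:

bash 复制代码
# Each command should print "yes"
kubectl auth can-i create persistentvolumes --as=system:serviceaccount:jenkins:nfs-client-provisioner
kubectl auth can-i update deployments -n default --as=system:serviceaccount:jenkins:nfs-client-provisioner
kubectl auth can-i create configmaps  -n default --as=system:serviceaccount:jenkins:nfs-client-provisioner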

12.3 Create the NFS provisioner Deployment

bash 复制代码
root@master:~/yaml/jenkins# vim nfs-provisioner-deploy.yaml
------------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2  
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME  # 供应器名称(后续StorageClass需引用)
          value: "k8s-sigs.io/nfs-subdir-external-provisioner"
        - name: NFS_SERVER  # NFS服务器IP
          value: "192.168.121.106"
        - name: NFS_PATH     # NFS共享目录
          value: "/data/nfs/jenkins-data"
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.121.106  # 同上NFS服务器IP
          path: /data/nfs/jenkins-data  # 同上NFS共享目录
 ---------------------------------------------------

12.4 Apply the RBAC and Deployment manifests

bash 复制代码
root@master:~/yaml/jenkins# kubectl apply -f  jenkins-rbac.yaml
root@master:~/yaml/jenkins# kubectl apply -f nfs-provisioner-deploy.yaml
# 验证状态
root@master:~/yaml/jenkins# kubectl get pods -n jenkins | grep nfs-client-provisioner
nfs-client-provisioner-8465bdd9f5-j5kqx   1/1     Running   0          12m

12.5 Create the dynamic StorageClass

bash 复制代码
root@master:~/yaml/jenkins# vim nfs-storageclass.yaml 
--------------------------
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storageclass
  namespace: jenkins  # ignored: StorageClass is a cluster-scoped resource
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  
parameters:
  archiveOnDelete: "false" 
reclaimPolicy: Delete 
volumeBindingMode: Immediate

# 更新资源配置文件
kubectl apply -f nfs-storageclass.yaml
# 验证是否创建成功
root@master:~/yaml/jenkins# kubectl get sc -n jenkins
NAME               PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storageclass   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  14m

12.6 Create the PVC

bash 复制代码
root@master:~/yaml/jenkins# vim jenkins-nfs-pvc.yaml 
----------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
  namespace: jenkins
spec:
  accessModes:
    - ReadWriteOnce  
  storageClassName: nfs-storageclass 
  resources:
    requests:
      storage: 10Gi
----------------------------

# 更新资源配置文件
root@master:~/yaml/jenkins# kubectl apply -f jenkins-nfs-pvc.yaml
# 验证绑定状态
root@master:~/yaml/jenkins# kubectl get pvc -n jenkins
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
jenkins-pvc   Bound    pvc-6ae87406-cac2-4c72-805f-2ec254dbd545   10Gi       RWO            nfs-storageclass   16m
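When the PVC binds, the provisioner creates a matching PV and a sub-directory on the NFS export named <namespace>-<pvc-name>-<pv-name>; a quick check:

bash 复制代码
# On the master: the PV backing jenkins-pvc should exist with status Bound
kubectl get pv | grep jenkins-pvc
# On the NFS server (harbor node): a directory such as jenkins-jenkins-pvc-pvc-... should appear
ls /data/nfs/jenkins-data/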

12.7 Deploy the Jenkins Deployment + Service

bash 复制代码
root@master:~/yaml/jenkins# vim jenkins-deployment.yaml
----------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: jenkins
  labels:
    app: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      hostAliases:
      - ip: "192.168.121.106"
        hostnames:
        - "harbor.test.com"
      - ip: "192.168.121.188"
        hostnames:
        - "www.test.com"
      imagePullSecrets:
      - name: harbor-registry-secret 
      serviceAccountName: nfs-client-provisioner
      containers:
      
      - name: jenkins
        image: harbor.test.com/jenkins/jenkins:v1 
        ports:
        - containerPort: 8080
        - containerPort: 50000
        env:
        - name: HTTP_PROXY
          value: "http://192.168.121.1:7890" 
        - name: HTTPS_PROXY
          value: "http://192.168.121.1:7890" 
        - name: NO_PROXY
          value: "localhost,127.0.0.1,10.0.0.0/8,192.168.0.0/16,.svc,.cluster.local,192.168.121.188,www.test.com" 
        - name: JAVA_OPTS
          value: "-Duser.timezone=Asia/Shanghai -Dhudson.model.DirectoryBrowserSupport.CSP="
        - name: JENKINS_OPTS
          value: "--prefix=/jenkins"
        volumeMounts:
        - name: jenkins-data
          mountPath: /var/jenkins_home  # Jenkins数据目录
        - name: docker-sock 
          mountPath: /var/run/docker.sock
        resources:
          limits:
            cpu: 2000m
            memory: 2Gi
          requests:
            cpu: 1000m
            memory: 1Gi
        securityContext:
          runAsUser: 0 
          privileged: true
      volumes:
      - name: jenkins-data
        persistentVolumeClaim:
          claimName: jenkins-pvc
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
          type: Socket
---
# Jenkins Service
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  selector:
    app: jenkins
  ports:
  - name: web
    port: 8080
    targetPort: 8080
    nodePort: 30080
  - name: agent
    port: 50000
    targetPort: 50000
    nodePort: 30081
  type: NodePort

Apply the manifest

bash 复制代码
root@master:~/yaml/jenkins# kubectl apply -f  jenkins-deployment.yaml
# 验证pod是否创建成功
root@master:~/yaml/jenkins# kubectl get pod -n jenkins
NAME                                      READY   STATUS    RESTARTS   AGE
jenkins-58b5dfd8b-b6cd9                   1/1     Running   0          20m
nfs-client-provisioner-8465bdd9f5-j5kqx   1/1     Running   0          20m
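Before opening the browser, a quick reachability check against the NodePort plus a look at the startup log; a sketch, assuming curl on the master:

bash 复制代码
# 200 or 403 both mean the web UI is up (Jenkins is served under the /jenkins prefix)
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.121.101:30080/jenkins/login
# Tail the startup log; on a fresh home it also prints the initial admin password
kubectl -n jenkins logs deploy/jenkins --tail=20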

12.8 Initialize Jenkins

12.8.1 Get the initial Jenkins admin password

Read the initialAdminPassword file from the shared directory on the NFS server

bash 复制代码
root@harbor:~# cat /data/nfs/jenkins-data/jenkins-jenkins-pvc-pvc-6ae87406-cac2-4c72-805f-2ec254dbd545/secrets/initialAdminPassword 
6a8f32add9b8424face397b782d9b282
12.8.2 Open the Jenkins web UI and initialize it

URL: http://192.168.121.101:30080/jenkins

Enter the initial admin password.

Install the recommended plugins.

Wait for the installation to finish.

Create an administrator user.

You will then land on the Jenkins home page.

12.8.3 Install the required plugins
  • Kubernetes Plugin (Jenkins/K8s integration);

  • Docker Plugin (build Docker images);

  • Docker Pipeline Plugin (use Docker inside Pipelines);

  • Git Plugin (pull code from Git);

  • Credentials Binding Plugin (manage credentials);

  • Pipeline Utility Steps (Pipeline helper steps);

    After the plugins are installed, restart Jenkins (Manage Jenkins → Restart). A non-interactive alternative using jenkins-plugin-cli is sketched below.
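As an alternative to the UI, the same plugins can be installed non-interactively with jenkins-plugin-cli, which ships in the official jenkins/jenkins image. A sketch only: the plugin IDs are the standard short names, and the download-directory flag follows the plugin-installation-manager tool; adjust if the bundled version differs:

bash 复制代码
# Download the plugins into the persistent Jenkins home so they survive Pod restarts
kubectl -n jenkins exec deploy/jenkins -- jenkins-plugin-cli \
  --plugin-download-directory /var/jenkins_home/plugins \
  --plugins kubernetes docker-plugin docker-workflow git credentials-binding pipeline-utility-steps
# Restart the Pod so Jenkins loads them
kubectl -n jenkins rollout restart deployment/jenkins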

12.8.4 Configure Jenkins global credentials

Add the Harbor credential:

Kind: Username with password

Username: admin

Password: 123456

ID: harbor-credential

Description: Harbor registry credential

Save.

12.8.5 Write the CI/CD pipeline
  1. Prepare the GitHub code repository and set up passwordless SSH access

  2. Create a local working directory

bash 复制代码
root@master:~/yaml# mkdir github
root@master:~/yaml# cd github
  3. Set the Git identity
bash 复制代码
root@master:~/yaml/github# git config --global user.name "chenjun"
root@master:~/yaml/github# git config --global user.email "3127103271@qq.com"
root@master:~/yaml/github# git config --global color.ui true
root@master:~/yaml/github# git config --list
user.name=chenjun
user.email=3127103271@qq.com
color.ui=true

# 初始化
root@master:~/yaml# git init
  4. Clone the remote GitHub repository to the local machine
bash 复制代码
root@master:~/yaml/github# git clone git@github.com:cj3127/jenkins-test.git
Cloning into 'jenkins-test'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 (from 0)
Receiving objects: 100% (3/3), done.
root@master:~/yaml/github# ls
jenkins-test
root@master:~/yaml/github# cd jenkins-test
  5. Write the Dockerfile
bash 复制代码
root@master:~/yaml/github/jenkins-test# vim Dockerfile

# 基于Harbor的nginx:latest镜像
FROM harbor.test.com/library/nginx:latest
  6. Prepare the nginx-related manifests
bash 复制代码
root@master:~/yaml/github/jenkins-test# mkdir k8s
root@master:~/yaml/github/jenkins-test# cd k8s
# 复制原始deploy配置文件
root@master:~/yaml/github/jenkins-test/k8s# cp /root/yaml/deploy/nginx-deployment.yaml /root/yaml/github/jenkins-test/k8s/
# 复制原始svc配置文件
root@master:~/yaml/github/jenkins-test/k8s# cp /root/yaml/deploy/nginx-service.yaml /root/yaml/github/jenkins-test/k8s/
# 复制nginx-cm配置文件
root@master:~/yaml/github/jenkins-test/k8s# cp /root/yaml/configmap/nginx-welcome-cm.yaml /root/yaml/github/jenkins-test/k8s/
  7. Write the Jenkinsfile (a lint check is sketched after the file)
bash 复制代码
root@master:~/yaml/github/jenkins-test# vim Jenkinsfile
pipeline {
    agent any  
    environment {
        HARBOR_ADDR = "harbor.test.com"
        HARBOR_PROJECT = "library"
        IMAGE_NAME = "nginx"
        IMAGE_TAG = "${BUILD_NUMBER}" 
        FULL_IMAGE_NAME = "${HARBOR_ADDR}/${HARBOR_PROJECT}/${IMAGE_NAME}:${IMAGE_TAG}"
        K8S_NAMESPACE = "default"
    }
    stages {

        stage("构建Docker镜像") {
            steps {
                echo "构建Docker镜像"
                sh "docker build -t ${FULL_IMAGE_NAME} ."
            }
        }

        stage("推送镜像到Harbor") {
            steps {
                echo "推送镜像到Harbor"
                withCredentials([usernamePassword(credentialsId: 'harbor-credential', passwordVariable: 'HARBOR_PWD', usernameVariable: 'HARBOR_USER')]) {
                    sh "docker login ${HARBOR_ADDR} -u ${HARBOR_USER} -p ${HARBOR_PWD}"
                    sh "docker push ${FULL_IMAGE_NAME}"
                    sh "docker logout ${HARBOR_ADDR}"
                }
            }
        }

        stage("部署到K8s集群") {
            steps {
                echo "部署到K8s集群"
                sh "kubectl apply -f k8s/nginx-welcome-cm.yaml -n ${K8S_NAMESPACE}"
                sh "sed -i 's|harbor.test.com/library/nginx:latest|${FULL_IMAGE_NAME}|g' k8s/nginx-deployment.yaml"
                sh "kubectl apply -f k8s/nginx-deployment.yaml -n ${K8S_NAMESPACE}"
                sh "kubectl rollout status deployment/nginx-deployment -n ${K8S_NAMESPACE}"
            }
        }

        stage("验证服务") {
            steps {
                echo "验证服务可用性"
                sh "curl -k https://www.test.com -o /dev/null -w '%{http_code}' | grep 200"
            }
        }
    }
    post {
        success {
            echo "流水线执行成功"
        }
        failure {
            echo "流水线执行失败"
        }
        always {
            sh "docker rmi ${FULL_IMAGE_NAME} || true"
        }
    }
}
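Before committing, the Jenkinsfile can be linted against the running Jenkins via the declarative-pipeline validation endpoint; a sketch, where <api-token> is a placeholder for an API token generated for the admin user:

bash 复制代码
JENKINS_URL=http://192.168.121.101:30080/jenkins
# "Jenkinsfile successfully validated." means the declarative syntax is correct
curl -s -u admin:<api-token> -X POST "$JENKINS_URL/pipeline-model-converter/validate" \
  -F "jenkinsfile=<Jenkinsfile"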
12.8.6 Commit the local files to the GitHub repository
bash 复制代码
root@master:~/yaml/github/jenkins-test# git add Dockerfile Jenkinsfile k8s
root@master:~/yaml/github/jenkins-test# git commit -m "提交了Dockerfile Jenkinsfile k8s"
root@master:~/yaml/github/jenkins-test# git push origin main 

12.9 Create the Jenkins pipeline job

  1. Create a new item from the home page;
  2. Enter the job name, select Pipeline, and click OK;
  3. Pipeline configuration:
  • Definition: Pipeline script from SCM;
  • SCM: Git;
  • Repository URL: the Git repository address;
  • Branch: */main
  • Script Path: Jenkinsfile

Apply and save the configuration.

Modify the home page content in the ConfigMap to verify that the pipeline rolls out the update:

bash 复制代码
root@master:~/yaml/github/jenkins-test/k8s# vim nginx-welcome-cm.yaml
root@master:~/yaml/github/jenkins-test/k8s# git add nginx-welcome-cm.yaml 
root@master:~/yaml/github/jenkins-test/k8s# git commit -m "修改了主页内容"
[main 35cd872] 修改了主页内容
 1 file changed, 1 insertion(+), 1 deletion(-)
root@master:~/yaml/github/jenkins-test/k8s# git push origin main 
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 2 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 399 bytes | 399.00 KiB/s, done.
Total 4 (delta 3), reused 0 (delta 0)
remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
To github.com:cj3127/jenkins-test.git
   73e5b7c..35cd872  main -> main
root@master:~/yaml/github/jenkins-test/k8s# 
  4. Run the build job

Check the console output of the build.

Watch the Pod status while the Deployment rolls out the update.

After the rollout completes, access www.test.com to confirm the new page content (a quick check is sketched below).
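A final check from any machine that resolves www.test.com to the VIP 192.168.121.188 (the deployment name nginx-deployment and namespace default come from the Jenkinsfile above):

bash 复制代码
# The page should return 200 and show the content from the updated ConfigMap
curl -k -s -o /dev/null -w '%{http_code}\n' https://www.test.com
curl -k -s https://www.test.com | head -n 20
# The Deployment should now run the image tagged with the Jenkins build number
kubectl -n default get deployment nginx-deployment \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'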
