k8s Operations: ConfigMap and Secret (4)

configmap

What ConfigMap does

  1. Separates configuration from images

    1. The traditional way: in the Docker era, configuration (config files, environment variables) was typically baked into the container image. Different environments (dev, test, prod) then needed different images, which is very inflexible.

    2. The K8s way: with ConfigMap, configuration data is stored in etcd and managed centrally by Kubernetes. Container images become generic; the same image adapts to different environments simply by injecting a different ConfigMap.

  2. Centralizes configuration management

    1. Scattered configuration (config files, command-line arguments, environment variables, etc.) can be consolidated into a single ConfigMap object.

    2. This greatly improves maintainability and visibility: configuration can be viewed, created, updated, and deleted conveniently with kubectl.

  3. Updates configuration dynamically (under certain conditions)

    1. After a ConfigMap is modified, Kubernetes can propagate the change to Pods that mount it.

    2. Note: configuration consumed via environment variables is NOT updated dynamically; the change only takes effect after the Pod restarts.

    3. Config files mounted via a volume ARE updated automatically: the kubelet periodically checks and syncs the files inside the Pod. Whether the application inside the container actually reloads the updated config depends on the app itself (e.g. Nginx needs a signal to reload; some microservices may need a restart).

  4. Supports data in many formats

    1. Entire config files: e.g. nginx.conf, server.properties.

    2. Single config items: e.g. LOG_LEVEL=debug, DB_HOST=mysql.prod.svc.

    3. JSON/XML snippets: any key/value or plain-text data works.
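
As noted above, configuration injected through environment variables only changes on pod restart. That consumption path is not demonstrated later in this article, so here is a minimal sketch of it; the names app-config, LOG_LEVEL, and the busybox pod are hypothetical:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical ConfigMap
data:
  LOG_LEVEL: debug            # a single key/value config item
---
apiVersion: v1
kind: Pod
metadata:
  name: env-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo LOG_LEVEL=$LOG_LEVEL && sleep 3600"]
    env:
    - name: LOG_LEVEL         # injected as an env var; a pod restart is needed to pick up changes
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: LOG_LEVEL
```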

Common ways to create a ConfigMap

  1. On the command line with kubectl create configmap

    1. From a directory: kubectl create configmap my-config --from-file=./config-files/

    2. From a file: kubectl create configmap my-config --from-file=./my-app.properties

    3. From literal values: kubectl create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2

  2. From a YAML manifest

Creating from the command line

The config file:

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        
        root /var/www/html;
        
        # Add index.php to the list if you are using PHP
        index index.html index.htm index.nginx-debian.html;
        
        server_name _;
        
        location / {
                # proxy all requests to the backend server
                proxy_pass http://192.168.44.131;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
        }
} 

Create the ConfigMap:

root@k8s-master:~/k8s/configmap# kubectl create configmap nginx-cg --from-file ./default.conf
configmap/nginx-cg created
root@k8s-master:~/k8s/configmap# kubectl get cm
NAME               DATA   AGE
kube-root-ca.crt   1      28d
nginx-cg           1      62s

# view the configmap contents
root@k8s-master:~/k8s/configmap# kubectl describe cm nginx-cg
Name:         nginx-cg
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
default.conf:
----
server {
    listen 80;
    server_name www.czxqq.cn;
    location / {
        root /var/www/html;
        index index.html;

    }
}

Events:  <none>

Resource manifest

How to write a resource manifest

root@k8s-master:~/k8s/configmap# kubectl get cm nginx-cg -o yaml > nginx-cm.yaml
root@k8s-master:~/k8s/configmap# ls
default.conf  nginx-cm.yaml
root@k8s-master:~/k8s/configmap# vim nginx-cm.yaml 
root@k8s-master:~/k8s/configmap# cat nginx-cm.yaml 
apiVersion: v1
data:
  default.conf: |
    server {
        listen 80;
        server_name www.czxqq.cn;
        location / {
            root /var/www/html;
            index index.html;

        }
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2026-03-14T01:42:29Z"
  name: nginx-cg
  namespace: default
  resourceVersion: "150839"
  uid: bbd4fab3-3739-44f4-8831-13a662ad16ab

# create the resource from the manifest
root@k8s-master:~/k8s/configmap# kubectl apply -f  nginx-cm.yaml 
configmap/nginx-cg created

(Deleting the creationTimestamp, resourceVersion, and uid fields from the output above leaves a complete, clean resource manifest:)

apiVersion: v1
data:
  default.conf: |
    server {
        listen 80;
        server_name www.czxqq.cn;
        location / {
            root /var/www/html;
            index index.html;

        }
    }
kind: ConfigMap
metadata:
  name: nginx-cg
  namespace: default

Mounting the ConfigMap

Reference: https://kubernetes.io/zh-cn/docs/concepts/configuration/configmap/

The following uses nginx as an example

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-dp
  template:
    metadata:
      labels:
        app: nginx-dp
    spec:
      containers:
      - name: nginx-dp
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - name: nginx-port
          containerPort: 80

        volumeMounts:
        - name: nginx-vm
          mountPath: /etc/nginx/conf.d/ 

      volumes:
      - name: nginx-vm
        configMap:
          name: nginx-cm

---
apiVersion: v1
kind: Service  # resource type
metadata:
  name: nginx-svc  # name of the Service
spec:
  selector:
    app: nginx-dp  # selects the Pods to route to (must match the Pod labels in the Deployment)
  ports:
    - protocol: TCP
      port: 80                # the Service's own port
      targetPort: nginx-port  # the backend Pod port (matches containerPort) or its name
  type: ClusterIP

Check whether the svc and the configmap are successfully associated with the pods

root@k8s-master:~/k8s/configmap# kubectl apply -f nginx-dp-cm.yaml 
deployment.apps/nginx-dp unchanged
service/nginx-svc created
root@k8s-master:~/k8s/configmap# # verify the svc is bound to the pods
root@k8s-master:~/k8s/configmap# kubectl describe svc nginx-svc 
Name:                     nginx-svc
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=nginx-dp
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.103.61
IPs:                      10.105.103.61
Port:                     <unset>  80/TCP
TargetPort:               nginx-port/TCP
Endpoints:                
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>
root@k8s-master:~/k8s/configmap# kubectl get pod
NAME                        READY   STATUS              RESTARTS   AGE
nginx-dp-5b58985f67-blv45   0/1     ContainerCreating   0          7m24s
nginx-dp-5b58985f67-gh8xb   0/1     ContainerCreating   0          7m24s
nginx-dp-5b58985f67-wttcl   0/1     ContainerCreating   0          7m24s

(Clearly something is wrong here: the containers never finish creating. I logged into the corresponding worker node and pulled the image there, so this is not an image-pull problem. Let's analyze what is going on:)

Troubleshooting approach:

root@k8s-master:~/k8s/configmap# kubectl describe  pod nginx-dp-5b58985f67-blv45 
Name:             nginx-dp-5b58985f67-blv45
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-worker-02/192.168.44.202
Start Time:       Sat, 14 Mar 2026 02:09:39 +0000
Labels:           app=nginx-dp
                  pod-template-hash=5b58985f67
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/nginx-dp-5b58985f67
Containers:
  nginx-dp:
    Container ID:   
    Image:          nginx
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/nginx/conf.d/ from nginx-vm (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxthq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   False 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  nginx-vm:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      nginx-cm
    Optional:  false
  kube-api-access-wxthq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                From               Message
  ----     ------       ----               ----               -------
  Normal   Scheduled    10m                default-scheduler  Successfully assigned default/nginx-dp-5b58985f67-blv45 to k8s-worker-02
  Warning  FailedMount  7s (x13 over 10m)  kubelet            MountVolume.SetUp failed for volume "nginx-vm" : configmap "nginx-cm" not found


(Clearly, the configmap was not found)

Cause: the ConfigMap's actual name does not match the name referenced in the nginx-dp YAML, so fixing the name is all that is needed. (I encourage you to debug this yourself first instead of just copying my YAML file; that builds your troubleshooting skills, and once you are at a company nobody will do it for you.)

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-dp
  template:
    metadata:
      labels:
        app: nginx-dp
    spec:
      containers:
      - name: nginx-dp
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - name: nginx-port
          containerPort: 80

        volumeMounts:
        - name: nginx-vm
          mountPath: /etc/nginx/conf.d/ 

      volumes:
      - name: nginx-vm
        configMap:
          name: nginx-cg

---
apiVersion: v1
kind: Service  # resource type
metadata:
  name: nginx-svc  # name of the Service
spec:
  selector:
    app: nginx-dp  # selects the Pods to route to (must match the Pod labels in the Deployment)
  ports:
    - protocol: TCP
      port: 80                # the Service's own port
      targetPort: nginx-port  # the backend Pod port (matches containerPort) or its name
  type: ClusterIP

Case study:

Requirements:

1. Get all of the ConfigMap-related pods running

2. Suppose the config file is changed: how do we make the new configuration take effect? (I used scaling here: first scale the old pods down to one, then scale back up; all the newly created pods read the latest config. Finally delete the one remaining old pod so it gets recreated.) A cleaner alternative is kubectl rollout restart deployment nginx-dp, which replaces every pod with a rolling update.

# check that the service is bound successfully
root@k8s-master:~/k8s/configmap# kubectl describe svc nginx-svc 
Name:                     nginx-svc
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=nginx-dp
Type:                     ClusterIP
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.236.52
IPs:                      10.109.236.52
Port:                     <unset>  80/TCP
TargetPort:               nginx-port/TCP
Endpoints:                10.244.1.53:80,10.244.3.48:80,10.244.3.49:80
Session Affinity:         None
Internal Traffic Policy:  Cluster
Events:                   <none>

# get the svc's IP
root@k8s-master:~/k8s/configmap# kubectl get svc nginx-svc -o wide
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
nginx-svc   ClusterIP   10.109.236.52   <none>        80/TCP    82m   app=nginx-dp

# check the HTTP status code
root@k8s-master:~/k8s/configmap# curl -sI 10.109.236.52 |awk 'NR==1{print $2}'
200


# edit the config content and see whether the status code changes
root@k8s-master:~/k8s/configmap# kubectl edit cm nginx-cg 
configmap/nginx-cg edited

# the index file is changed to 404.html here, so the status code should become 404
root@k8s-master:~/k8s/configmap# kubectl describe cm nginx-cg 
Name:         nginx-cg
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
default.conf:
----
server {
    listen 80;
    server_name www.czxqq.cn;
    location / {
        root /var/www/html;
        index 404.html;

    }
}



BinaryData
====

Events:  <none>

# check whether the config inside the pod was updated
root@k8s-master:~/k8s/configmap# kubectl exec -it nginx-dp-7bbfc6ff7-jg9ns -- cat /etc/nginx/conf.d/default.conf
server {
    listen 80;
    server_name www.czxqq.cn;
    location / {
        root /var/www/html;
        index 404.html;

    }
}

# check whether the status code is 404 (note: nginx does NOT reread its config automatically, so we have to refresh manually)
# scale the deployment down and back up to force pods with the updated config
root@k8s-master:~/k8s/configmap# kubectl scale deployment nginx-dp --replicas 1
deployment.apps/nginx-dp scaled
root@k8s-master:~/k8s/configmap# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
nginx-dp-7bbfc6ff7-2hdfl   1/1     Running   0          91m
root@k8s-master:~/k8s/configmap# kubectl scale deployment nginx-dp --replicas 3
deployment.apps/nginx-dp scaled
root@k8s-master:~/k8s/configmap# kubectl get pod
NAME                       READY   STATUS              RESTARTS   AGE
nginx-dp-7bbfc6ff7-2hdfl   1/1     Running             0          91m
nginx-dp-7bbfc6ff7-4zrcj   0/1     ContainerCreating   0          1s
nginx-dp-7bbfc6ff7-m5tzx   0/1     ContainerCreating   0          1s
root@k8s-master:~/k8s/configmap# kubectl delete pod nginx-dp-7bbfc6ff7-2hdfl
pod "nginx-dp-7bbfc6ff7-2hdfl" deleted
root@k8s-master:~/k8s/configmap# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
nginx-dp-7bbfc6ff7-4zrcj   1/1     Running   0          23s
nginx-dp-7bbfc6ff7-l7rgc   1/1     Running   0          3s
nginx-dp-7bbfc6ff7-m5tzx   1/1     Running   0          23s
root@k8s-master:~/k8s/configmap# !curl
curl -sI 10.109.236.52 |awk 'NR==1{print $2}'
404
root@k8s-master:~/k8s/configmap# curl -sI 10.109.236.52 |awk 'NR==1{print $2}'
404
root@k8s-master:~/k8s/configmap# curl -sI 10.109.236.52 |awk 'NR==1{print $2}'
404

Experiment successful!!!

secret

What Secret does

1. Stores sensitive data securely

  • Avoids putting sensitive information in plain text in Pod definitions or container images

  • Manages the data more safely than writing it directly into YAML

2. Decouples secrets from Pods

  • Separates sensitive data from application code

  • Makes it easy to use different credentials across environments (dev, test, prod)

3. Access control

  • Access to Secrets can be restricted with RBAC

  • Only authorized users and service accounts can read them

Resource manifests

# 1. Opaque (the default type)
# Stores arbitrary user-defined data
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  username: YWRtaW4=  # base64-encoded "admin"
  password: MWYyZDFlMmU2N2Rm  # base64-encoded "1f2d1e2e67df"

# 2. docker-registry (type kubernetes.io/dockerconfigjson)
# Stores Docker registry credentials
apiVersion: v1
kind: Secret
metadata:
  name: my-registry-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: eyJhdXRocyI6...  # base64-encoded docker config

# The same type can be created imperatively:
kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=admin \
  --docker-password=password

# 3. tls
# Stores a TLS certificate and key
kubectl create secret tls tls-secret \
  --cert=path/to/cert.crt \
  --key=path/to/cert.key
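
The base64 values in the Opaque manifest above can be produced and verified straight from the shell; a quick sketch using the same demo values:

```shell
# Produce the base64 values used in the Opaque Secret above.
# Use printf '%s' (not plain echo) so no trailing newline gets encoded.
user_b64=$(printf '%s' 'admin' | base64)
pass_b64=$(printf '%s' '1f2d1e2e67df' | base64)
echo "username: $user_b64"           # YWRtaW4=
echo "password: $pass_b64"           # MWYyZDFlMmU2N2Rm

# Decode to double-check:
printf '%s' "$user_b64" | base64 -d  # admin
```

If hand-encoding is a chore, a Secret manifest also accepts a stringData field with plain-text values; Kubernetes base64-encodes them on write.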

(Since Secrets come up in fewer day-to-day scenarios and are created much like ConfigMaps, only the creation forms are shown above rather than a full walkthrough. Next, let's look straight at a scenario where Secrets are commonly used at work.)

Case study

Configuring a secret for a Harbor registry

This assumes a Harbor private registry is already set up; if you run into trouble, see harbor私有仓库搭建-CSDN博客

1. Log in to Harbor from the host

Logging in to the Harbor registry just creates a credential file named config.json under the .docker directory. So if we pass this credential into a pod as configuration, the pod can authenticate against the registry when pulling images; and indeed it can.
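
Concretely, the auth value inside config.json is nothing more than base64 of user:password. A sketch reproducing the credential by hand, using the demo account from this article (admin / Harbor12345):

```shell
# The "auth" field in config.json is simply base64("user:password").
auth=$(printf '%s' 'admin:Harbor12345' | base64)
echo "$auth"                         # YWRtaW46SGFyYm9yMTIzNDU=

# Assemble the same minimal config.json that `docker login` writes:
printf '{"auths":{"harbor.local":{"auth":"%s"}}}\n' "$auth"
```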

root@k8s-worker-02:~# docker login harbor.local
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

# inspect the credential that docker login generated
root@k8s-worker-02:~# ls -la ~
total 64
drwx------  6 root root  4096 Mar 14 07:36 .
drwxr-xr-x 20 root root  4096 Feb 22 15:13 ..
-rw-------  1 root root  6082 Mar  2 00:22 .bash_history
-rw-r--r--  1 root root  3106 Oct 15  2021 .bashrc
drwx------  2 root root  4096 Feb 13 06:53 .cache
drwx------  2 root root  4096 Mar 14 07:36 .docker
-rw-------  1 root root    20 Feb 22 12:44 .lesshst
-rw-r--r--  1 root root   161 Jul  9  2019 .profile
drwx------  3 root root  4096 Feb 11 14:49 snap
drwx------  2 root root  4096 Feb 11 14:49 .ssh
-rw-r--r--  1 root root     0 Feb 11 07:39 .sudo_as_admin_successful
-rw-------  1 root root 12410 Mar 14 07:19 .viminfo
-rw-------  1 root root   107 Mar 14 02:19 .Xauthority

root@k8s-worker-02:~# cat ~/.docker/config.json 
{
	"auths": {
		"harbor.local": {
			"auth": "YWRtaW46SGFyYm9yMTIzNDU="
		}
	}
}

# decode the string to see what it really contains
root@k8s-worker-02:~# echo -n "YWRtaW46SGFyYm9yMTIzNDU=" | base64 -d
admin:Harbor12345
2. Create the secret resource
# create the secret
root@k8s-master:~/k8s/configmap# kubectl create secret docker-registry harbor-secret --from-file ~/.docker/config.json 

# inspect the secret's details
root@k8s-master:~/k8s/configmap# kubectl describe secrets harbor-secret 
Name:         harbor-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/dockerconfigjson

Data
====
.dockerconfigjson:  80 bytes



# push the image to Harbor
root@k8s-master:~/k8s/configmap# docker tag nginx:latest harbor.local/public/nginx:latest
root@k8s-master:~/k8s/configmap# docker push harbor.local/public/nginx:latest
The push refers to repository [harbor.local/public/nginx]
349873755873: Pushed 
29b2c44d0fb6: Pushed 
4e1fa3adae02: Pushed 
dd02449feb6b: Pushed 
78dca4a0a512: Pushed 
3fe1b6ebcb54: Pushed 
a257f20c716c: Pushed 
latest: digest: sha256:cb25815488f2927c693ddcb46ebd3310d684b042108d230cf6a63f47a4a50649 size: 1778
3. Using the secret
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-dp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-dp
  template:
    metadata:
      labels:
        app: nginx-dp
    spec:
      containers:
      - name: nginx-dp
        image: harbor.local/public/nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - name: nginx-port
          containerPort: 80

      imagePullSecrets:
      - name: harbor-secret
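
A side note: instead of listing imagePullSecrets in every pod template, the secret can also be attached to the namespace's ServiceAccount, and every pod running under that ServiceAccount inherits it. A sketch, assuming the default ServiceAccount in the default namespace:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
imagePullSecrets:
- name: harbor-secret          # the secret created above
```

After applying this, newly created pods pull from harbor.local without an explicit imagePullSecrets entry in their spec.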

Error analysis:

root@k8s-master:~/k8s/secret# kubectl get pod
NAME                       READY   STATUS             RESTARTS   AGE
nginx-dp-69c68b796-bgpnv   0/1     ImagePullBackOff   0          4m31s
nginx-dp-69c68b796-fwgrq   0/1     ImagePullBackOff   0          4m31s
nginx-dp-69c68b796-khkzp   0/1     ImagePullBackOff   0          4m31s
root@k8s-master:~/k8s/secret# kubectl describe pod nginx-dp-69c68b796-bgpnv 
Name:             nginx-dp-69c68b796-bgpnv
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-worker-02/192.168.44.202
Start Time:       Sat, 14 Mar 2026 08:27:49 +0000
Labels:           app=nginx-dp
                  pod-template-hash=69c68b796
Annotations:      <none>
Status:           Pending
IP:               10.244.3.52
IPs:
  IP:           10.244.3.52
Controlled By:  ReplicaSet/nginx-dp-69c68b796
Containers:
  nginx-dp:
    Container ID:   
    Image:          harbor.local/public/nginx:latest
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xk9bq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-xk9bq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  4m40s                  default-scheduler  Successfully assigned default/nginx-dp-69c68b796-bgpnv to k8s-worker-02
  Normal   Pulling    3m14s (x4 over 4m39s)  kubelet            Pulling image "harbor.local/public/nginx:latest"
  Warning  Failed     3m14s (x4 over 4m39s)  kubelet            Failed to pull image "harbor.local/public/nginx:latest": failed to pull and unpack image "harbor.local/public/nginx:latest": failed to resolve image: failed to do request: Head "https://harbor.local/v2/public/nginx/manifests/latest": tls: failed to verify certificate: x509: certificate signed by unknown authority
  Warning  Failed     3m14s (x4 over 4m39s)  kubelet            Error: ErrImagePull
  Warning  Failed     3m3s (x6 over 4m39s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m48s (x7 over 4m39s)  kubelet            Back-off pulling image "harbor.local/public/nginx:latest"

tls: failed to verify certificate: x509: certificate signed by unknown authority means the kubelet could not verify the TLS certificate Harbor presents when pulling the image; it is not a problem with the image-pull credential (the Secret). Even though harbor-secret exists and contains valid credentials, certificate verification fails first, so the HTTPS connection is never established and the pull fails before authentication even starts.

Root cause

The Harbor registry harbor.local is most likely using a self-signed certificate, and the system CA store on the worker node (k8s-worker-02) does not contain the CA that signed it, so the kubelet refuses to establish a TLS connection to Harbor.

Solution

Make every node that needs to pull images (here, k8s-worker-02) trust the CA certificate that Harbor uses

# copy the certificate into the system CA directory
cp /etc/docker/certs.d/harbor.local/ca.crt /usr/local/share/ca-certificates/harbor-ca.crt

# refresh the CA trust store
sudo update-ca-certificates

# restart the container runtime (containerd or Docker)
sudo systemctl restart containerd
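
An alternative that avoids touching the system-wide trust store: newer containerd releases can trust a per-registry CA through a hosts.toml drop-in. This is a sketch and assumes the registry config path (config_path = "/etc/containerd/certs.d") is enabled in /etc/containerd/config.toml:

```toml
# /etc/containerd/certs.d/harbor.local/hosts.toml
server = "https://harbor.local"

[host."https://harbor.local"]
  ca = "/etc/docker/certs.d/harbor.local/ca.crt"
```

With config_path enabled, containerd reads these files per pull, so the trust applies only to harbor.local rather than to everything on the node.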

Once it looks like the following, the configuration succeeded

# before the fix
root@k8s-master:~/k8s/secret# kubectl get pod
NAME                       READY   STATUS             RESTARTS   AGE
nginx-dp-69c68b796-bgpnv   0/1     ImagePullBackOff   0          5m25s
nginx-dp-69c68b796-fwgrq   0/1     ImagePullBackOff   0          5m25s
nginx-dp-69c68b796-khkzp   0/1     ImagePullBackOff   0          5m25s

# after the fix
root@k8s-master:~/k8s/secret# kubectl get pod
NAME                       READY   STATUS    RESTARTS   AGE
nginx-dp-69c68b796-58w4h   1/1     Running   0          3s
nginx-dp-69c68b796-bt6j4   1/1     Running   0          3s
nginx-dp-69c68b796-xrgm7   1/1     Running   0          3s

(That's all for today's share. If it helped, please give it a like!)
