openEuler Kubernetes + Harbor Learning/Test Environment: A Detailed Deployment Guide

1. Environment Planning

1.1 Node Architecture (3-node minimum)

Role     Hostname      IP Address       Spec                  Components
Master   k8s-master01  192.168.100.167  2 vCPU / 4GB / 100GB  K8s control plane + Harbor
Worker1  k8s-worker1   192.168.100.168  2 vCPU / 4GB / 100GB  K8s worker node
Worker2  k8s-worker2   192.168.100.169  2 vCPU / 4GB / 100GB  K8s worker node

1.2 Recommended Software Versions

Component          Version                 Notes
openEuler          22.03 LTS / 24.03 LTS   LTS release recommended
Kubernetes         1.28.x / 1.29.x         Stable releases
Harbor             2.10.x / 2.11.x         Latest stable
Container runtime  containerd 1.7.x        Shipped with openEuler
Docker             25.x / 26.x             Required by Harbor

2. System Initialization (run on all nodes)

2.1 Configure Hostnames and /etc/hosts

bash
# Run on the Master node
hostnamectl set-hostname k8s-master01

# Run on the Worker1 node
hostnamectl set-hostname k8s-worker1

# Run on the Worker2 node
hostnamectl set-hostname k8s-worker2

# Configure the hosts file on all nodes
cat >> /etc/hosts << EOF
192.168.100.167 k8s-master01
192.168.100.168 k8s-worker1
192.168.100.169 k8s-worker2
EOF
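A quick sanity check that all three names resolve on a node (a sketch; getent consults /etc/hosts as well as DNS):

```shell
# Print the resolved entry for each cluster hostname,
# or a MISSING marker if the lookup fails.
for h in k8s-master01 k8s-worker1 k8s-worker2; do
  getent hosts "$h" || echo "MISSING: $h"
done
```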

2.2 Disable the Firewall, SELinux, and Swap

bash
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# Disable swap (kubelet requires swap to be off)
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
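The fstab edit above can be rehearsed on a throwaway copy first. A minimal sketch (the sample fstab content below is made up for illustration):

```shell
# Build a sample fstab and apply the same sed as above to a copy.
cat > /tmp/fstab.test << 'EOF'
/dev/mapper/openeuler-root /    ext4 defaults 0 0
/dev/mapper/openeuler-swap none swap defaults 0 0
EOF
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.test
cat /tmp/fstab.test   # only the swap line is now commented out
```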

2.3 Configure Kernel Parameters

bash
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
EOF

# Load the required kernel modules
modprobe br_netfilter
modprobe overlay
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack

# Apply the settings
sysctl --system
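One caveat: modprobe does not persist across reboots. A sketch of making the module list load at boot via systemd-modules-load (same file style as the sysctl config above):

```shell
# Persist the module list so systemd-modules-load loads it on every boot
mkdir -p /etc/modules-load.d
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
overlay
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
```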

2.4 Configure Time Synchronization

bash
# Install chrony
dnf install -y chrony

# Use the Aliyun time servers
[root@k8s-master01 ~]# vi /etc/chrony.conf
pool time1.aliyun.com iburst  # <--- change this line
# server 192.168.1.100 iburst   # or point at a time server on your internal network

... (omitted) ...


# Which subnets are allowed to sync time from this host
# Example: allow the whole 192.168.1.0/24 subnet
# Allow NTP client access from local network.
# allow 192.168.1.0/24

# Or, allow a single IP address
# allow 192.168.1.50

... (omitted) ...
[root@k8s-master01 ~]# 


# Enable and start the service
systemctl enable --now chronyd

# Verify time synchronization
[root@k8s-master01 ~]# chronyc sources -v

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   8   377    79   -928us[-1602us] +/-   16ms
# Key point: 203.107.6.88 is the upstream time server in use. All 3 nodes should show consistent output.

3. Kubernetes Cluster Deployment

3.1 Install the Container Runtime (containerd)

bash
# Run on all nodes (alternatively install docker-ce, which installs containerd automatically as a dependency)
dnf install -y containerd

# Generate the default configuration
containerd config default > /etc/containerd/config.toml

# Change the config to use the systemd cgroup driver
[root@k8s-master01 ~]# grep 'SystemdCgroup' /etc/containerd/config.toml 
            SystemdCgroup = false
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# grep 'SystemdCgroup' /etc/containerd/config.toml 
            SystemdCgroup = true
[root@k8s-master01 ~]# 


# Match the current sandbox_image line exactly and replace it with the Aliyun mirror address
[root@k8s-master01 ~]# grep 'sandbox_image' /etc/containerd/config.toml 
    sandbox_image = "registry.k8s.io/pause:3.6"
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# grep 'sandbox_image' /etc/containerd/config.toml 
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
[root@k8s-master01 ~]# 
# This change swaps the upstream base-image address, unreachable from mainland China, for Aliyun's fast domestic mirror, fixing the fatal problem of the K8s cluster failing to start due to network issues.

# Configure the Huawei Cloud image accelerator (optional)
[root@k8s-master01 ~]# vi /etc/containerd/config.toml
... (omitted) ...
    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      #[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      # Configure the Huawei Cloud accelerator by adding the lines below:
      # 1. Log in to the Huawei Cloud SWR console.
      # 2. Left menu: Image Resources -> Image Center -> Image Accelerator.
      # 3. Copy your personal endpoint (format: https://<your-account-id>.mirror.swr.myhuaweicloud.com)
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://448b823acc1c4d5b8cf5c81ea9bfce60.mirror.swr.myhuaweicloud.com"]
      # The next two lines show how to configure a private registry mirror
      # [plugins."io.containerd.grpc.v1.cri".registry.mirrors."<private-registry-address>"]
      #  endpoint = ["http://<private-registry-address>"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

... (omitted) ...
[root@k8s-master01 ~]# 

# Restart containerd
systemctl daemon-reload
systemctl enable --now containerd
systemctl restart containerd

# Install the crictl command
dnf install -y cri-tools

# Point crictl directly at containerd instead of probing deprecated sockets
# Create the crictl config file specifying the containerd socket
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

# Verify: if the command below shows the following output, the Huawei Cloud accelerator configuration for containerd is active
[root@k8s-master01 ~]# crictl info | grep -C 5 "myhuaweicloud" 
    "registry": {
      "configPath": "",
      "mirrors": {
        "docker.io": {
          "endpoint": [
            "https://448b823acc1c4d5b8cf5c81ea9bfce60.mirror.swr.myhuaweicloud.com"
          ]
        }
      },
      "configs": {},
      "auths": {},
[root@k8s-master01 ~]#

3.2 Install the Kubernetes Components

bash
# Run on all nodes: add the Kubernetes repository (to install 1.30, replace v1.29 with v1.30 in the config below)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF

# Install kubeadm, kubelet, and kubectl
dnf install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

# Key flag: --disableexcludes=kubernetes lifts the package-exclusion restriction on the kubernetes repo

# Enable at boot (kubelet will crash-loop until kubeadm init runs; that is expected)
systemctl enable --now kubelet

3.3 Initialize the Master Node

bash
# Run on the Master node only
[root@k8s-master01 ~]# kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --apiserver-advertise-address=192.168.100.167 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.29.0
[init] Using Kubernetes version: v1.29.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.167:6443 --token kl70ra.gfbd42c8rcj2n6rs \
	--discovery-token-ca-cert-hash sha256:462449dca061c14dcc97b00f34c1611ad80ed5bd8e099ad4da647db6589714ba 
[root@k8s-master01 ~]# 

# I am root here, so I use the KUBECONFIG environment variable approach
[root@k8s-master01 ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' > /etc/profile.d/k8s.sh
[root@k8s-master01 ~]# source /etc/profile.d/k8s.sh
[root@k8s-master01 ~]# echo $KUBECONFIG
/etc/kubernetes/admin.conf
[root@k8s-master01 ~]# kubectl get nodes  
NAME           STATUS     ROLES           AGE     VERSION
k8s-master01   NotReady   control-plane   2m56s   v1.29.15
[root@k8s-master01 ~]# 

# For a regular (non-root) user, configure kubectl like this instead
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Save the join command (needed later on the worker nodes)
# This prints a complete, ready-to-copy "join the Master" command; run it on a worker node to add that node to the cluster.
kubeadm token create --print-join-command

# kubeadm token create: creates a new, valid cluster-join token
#   Note: the default token from kubeadm init expires after 24 hours; once it has expired, no node can join until a new token is generated.
# --print-join-command: prints the complete join command (Master address, token, and CA cert hash)

3.4 Deploy the Network Plugin (Calico)

bash
# Run on the Master node only
[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml

poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
[root@k8s-master01 ~]# 


# Verify the Pod status; every Pod must be Running
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP                NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-74d5f9d7bb-hj9qd   1/1     Running   0          3m32s   10.244.32.130     k8s-master01   <none>           <none>
calico-node-lhpcq                          1/1     Running   0          3m32s   192.168.100.167   k8s-master01   <none>           <none>
coredns-857d9ff4c9-nzn7s                   1/1     Running   0          8m52s   10.244.32.129     k8s-master01   <none>           <none>
coredns-857d9ff4c9-swhdb                   1/1     Running   0          8m52s   10.244.32.131     k8s-master01   <none>           <none>
etcd-k8s-master01                          1/1     Running   0          9m8s    192.168.100.167   k8s-master01   <none>           <none>
kube-apiserver-k8s-master01                1/1     Running   0          9m8s    192.168.100.167   k8s-master01   <none>           <none>
kube-controller-manager-k8s-master01       1/1     Running   0          9m8s    192.168.100.167   k8s-master01   <none>           <none>
kube-proxy-76hvt                           1/1     Running   0          8m53s   192.168.100.167   k8s-master01   <none>           <none>
kube-scheduler-k8s-master01                1/1     Running   0          9m8s    192.168.100.167   k8s-master01   <none>           <none>
[root@k8s-master01 ~]# 

# Now consider the air-gapped scenario: if the nodes cannot reach the internet, these images cannot be pulled. The only option is to download every required image on a machine with internet access, then transfer them into the isolated network.
## First, list the images used by the Pods in the kube-system namespace; these are exactly the images the worker nodes need preloaded (especially calico-node, kube-proxy, and coredns).
[root@k8s-master01 ~]# kubectl get pods -n kube-system -o jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}' | sort -u
docker.io/calico/kube-controllers:v3.26.0
docker.io/calico/node:v3.26.0
registry.aliyuncs.com/google_containers/coredns:v1.11.1
registry.aliyuncs.com/google_containers/etcd:3.5.16-0
registry.aliyuncs.com/google_containers/kube-apiserver:v1.29.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.29.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.29.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.29.0

## Enter the /calico directory (created beforehand) and use ctr to export all images into a single tar file.
## Be sure to use the full image names (including the registry prefix), otherwise ctr reports "not found".
[root@k8s-master01 ~]# cd /calico
[root@k8s-master01 calico]# ctr -n k8s.io image export all-images.tar \
  docker.io/calico/kube-controllers:v3.26.0 \
  docker.io/calico/node:v3.26.0 \
  registry.aliyuncs.com/google_containers/coredns:v1.11.1 \
  registry.aliyuncs.com/google_containers/etcd:3.5.16-0 \
  registry.aliyuncs.com/google_containers/kube-apiserver:v1.29.0 \
  registry.aliyuncs.com/google_containers/kube-controller-manager:v1.29.0 \
  registry.aliyuncs.com/google_containers/kube-proxy:v1.29.0 \
  registry.aliyuncs.com/google_containers/kube-scheduler:v1.29.0
[root@k8s-master01 calico]# ls -l
总计 302632
-rw-r--r-- 1 root root 309894144  3月28日 12:57 all-images.tar
[root@k8s-master01 calico]# 

# Copy the tar file to every node that needs it
# Import all the images into containerd
ctr -n k8s.io image import /root/all-images.tar
# Verify
ctr -n k8s.io image ls | grep -E "calico|coredns|kube-proxy"
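Copying and importing the tarball on each worker can be scripted. A dry-run sketch (it only prints the commands; node names and paths are the ones used above; drop the echo to actually execute):

```shell
# Dry run: print the scp/ssh commands needed to push the image bundle
# to each worker and import it into containerd there.
TARBALL=/calico/all-images.tar
for node in k8s-worker1 k8s-worker2; do
  echo scp "$TARBALL" "$node":/root/
  echo ssh "$node" "ctr -n k8s.io image import /root/all-images.tar"
done
```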

3.5 Join the Worker Nodes to the Cluster

bash
# Run on Worker1 and Worker2 (use the join command generated on the Master)
[root@k8s-worker1 ~]# kubeadm join 192.168.100.167:6443 --token kl70ra.gfbd42c8rcj2n6rs \
        --discovery-token-ca-cert-hash sha256:462449dca061c14dcc97b00f34c1611ad80ed5bd8e099ad4da647db6589714ba
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-worker1 ~]# 

# Verify on the Master node
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES           AGE     VERSION
k8s-master01   Ready    control-plane   25m     v1.29.15
k8s-worker1    Ready    <none>          4m32s   v1.29.15
k8s-worker2    Ready    <none>          4m26s   v1.29.15
[root@k8s-master01 ~]# 

# If the worker nodes should also be able to run kubectl get nodes, execute the following on the Master
[root@k8s-master01 ~]# scp /etc/kubernetes/admin.conf k8s-worker1:/etc/kubernetes/
The authenticity of host 'k8s-worker1 (192.168.100.168)' can't be established.
ED25519 key fingerprint is SHA256:EdX1RSTbnmrAOzO+gVSS6cXdt0ty/8HykXTCxXAsPF8.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'k8s-worker1' (ED25519) to the list of known hosts.

Authorized users only. All activities may be monitored and reported.
root@k8s-worker1's password: 
admin.conf                                                                                        100% 5659     2.4MB/s   00:00    
[root@k8s-master01 ~]# scp /etc/kubernetes/admin.conf k8s-worker2:/etc/kubernetes/
The authenticity of host 'k8s-worker2 (192.168.100.169)' can't be established.
ED25519 key fingerprint is SHA256:R9clN3Zr9xVbGktc4L0jFVbTh03wrbspS3fESATKuKk.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'k8s-worker2' (ED25519) to the list of known hosts.

Authorized users only. All activities may be monitored and reported.
root@k8s-worker2's password: 
admin.conf                                                                                        100% 5659     3.4MB/s   00:00    
[root@k8s-master01 ~]# scp /etc/profile.d/k8s.sh k8s-worker1:/etc/profile.d/

Authorized users only. All activities may be monitored and reported.
root@k8s-worker1's password: 
k8s.sh                                                                                            100%   45    68.4KB/s   00:00    
[root@k8s-master01 ~]# scp /etc/profile.d/k8s.sh k8s-worker2:/etc/profile.d/

Authorized users only. All activities may be monitored and reported.
root@k8s-worker2's password: 
k8s.sh                                                                                            100%   45    28.4KB/s   00:00    
[root@k8s-master01 ~]#

# On a worker node
[root@k8s-worker1 ~]# bash


Welcome to 6.6.0-132.0.0.111.oe2403sp3.x86_64

System information as of time: 	2026年 03月 28日 星期六 12:50:42 CST

System load: 	0.00
Memory used: 	13.0%
Swap used: 	0.0%
Usage On: 	5%
IP address: 	192.168.100.168
IP address: 	10.244.194.64
Users online: 	1


[root@k8s-worker1 ~]# kubectl get node
NAME           STATUS   ROLES           AGE     VERSION
k8s-master01   Ready    control-plane   26m     v1.29.15
k8s-worker1    Ready    <none>          6m26s   v1.29.15
k8s-worker2    Ready    <none>          6m20s   v1.29.15
[root@k8s-worker1 ~]# 

4. Harbor Private Registry Deployment

4.1 Install Docker and Docker Compose

bash
# Run on the Harbor node (the Master)
# Configure the Docker package repository
[root@k8s-master01 ~]# cat /etc/yum.repos.d/docker-ce.repo 
[docker]
name=Docker
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/x86_64/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[root@k8s-master01 ~]#

# Back up the containerd config in case the install overwrites it
[root@k8s-master01 ~]# cp /etc/containerd/config.toml /etc/containerd/config.toml.bakup

# List the installable Docker versions
[root@k8s-master01 ~]# yum list docker-ce --showduplicate |sort -r | head -n 10
Last metadata expiration check: 0:03:03 ago on 2026年03月28日 星期六 13时07分54秒.
docker-ce.x86_64                     3:26.1.3-1.el8                       docker
docker-ce.x86_64                     3:26.1.2-1.el8                       docker
docker-ce.x86_64                     3:26.1.1-1.el8                       docker
docker-ce.x86_64                     3:26.1.0-1.el8                       docker
docker-ce.x86_64                     3:26.0.2-1.el8                       docker
docker-ce.x86_64                     3:26.0.1-1.el8                       docker
docker-ce.x86_64                     3:26.0.0-1.el8                       docker
docker-ce.x86_64                     3:25.0.5-1.el8                       docker
docker-ce.x86_64                     3:25.0.4-1.el8                       docker
[root@k8s-master01 ~]# 

# Install Docker
[root@k8s-master01 ~]# dnf install -y docker-ce

# Configure the Docker registry mirror and address pools
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": [ "https://448b823acc1c4d5b8cf5c81ea9bfce60.mirror.swr.myhuaweicloud.com" ],
  "bip": "172.98.0.1/16",
  "default-address-pools": [
    {
      "base": "172.99.0.0/16",
      "size": 24
    }
  ],
  "iptables": true,
  "ip-forward": true,
  "ip-masq": true
}
EOF

# "bip": "172.98.0.1/16": pins the docker0 bridge IP to 172.98.0.1 so it never changes
# "default-address-pools": restricts every bridge Docker auto-creates (br-xxx) to the 172.99.0.0/16 range, so 172.17/172.18 networks are never generated again
# Why change Docker's default ranges at all? They conflict with the production network here, so we switch to the 172.98/172.99 ranges.
# The remaining keys keep IP forwarding, iptables rules, and masquerading enabled so Docker networking works normally
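A malformed daemon.json prevents dockerd from starting at all, so it is worth validating the syntax before restarting. A sketch (shown against a temp copy with made-up content; point it at /etc/docker/daemon.json on the real host):

```shell
# Validate JSON syntax; python3 -m json.tool exits non-zero on errors.
cat > /tmp/daemon.json << 'EOF'
{
  "insecure-registries": ["192.168.100.167:8080"],
  "bip": "172.98.0.1/16"
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```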

# Start Docker
[root@k8s-master01 ~]# systemctl enable --now docker

# Install Docker Compose (if the download is slow, fetch the file with wget elsewhere and copy it over)
[root@k8s-master01 ~]# curl -L "https://github.com/docker/compose/releases/download/v2.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
100 58.1M  100 58.1M    0     0  80975      0  0:12:32  0:12:32 --:--:-- 65201
[root@k8s-master01 ~]# ls /usr/local/bin/docker-compose -ld
-rw-r--r-- 1 root root 60932257  3月28日 13:35 /usr/local/bin/docker-compose
[root@k8s-master01 ~]# chmod +x /usr/local/bin/docker-compose
[root@k8s-master01 ~]# ls /usr/local/bin/docker-compose -ld
-rwxr-xr-x 1 root root 60932257  3月28日 13:35 /usr/local/bin/docker-compose

# Verify
[root@k8s-master01 ~]# docker --version
Docker version 26.1.3, build b72abbb
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# docker-compose --version
Docker Compose version v2.24.0
[root@k8s-master01 ~]# 

4.2 Download the Harbor Installer

bash
# Download Harbor
cd /opt
wget https://github.com/goharbor/harbor/releases/download/v2.11.0/harbor-online-installer-v2.11.0.tgz

# Extract
[root@k8s-master01 opt]# tar -xvf harbor-online-installer-v2.11.0.tgz
harbor/prepare
harbor/LICENSE
harbor/install.sh
harbor/common.sh
harbor/harbor.yml.tmpl
[root@k8s-master01 opt]# 

[root@k8s-master01 opt]# cd harbor/
[root@k8s-master01 harbor]# ls
common.sh  harbor.yml.tmpl  install.sh  LICENSE  prepare
[root@k8s-master01 harbor]# 

4.3 Configure Harbor

bash
# Copy the template configuration file
[root@k8s-master01 harbor]# pwd
/opt/harbor
cp harbor.yml.tmpl harbor.yml

# Edit the configuration file
[root@k8s-master01 harbor]# pwd
/opt/harbor
[root@k8s-master01 harbor]# vi harbor.yml

Key settings to change:

yaml
# Set hostname to the Master node's IP or a domain name
hostname: 192.168.100.167

# Change the HTTP port (optional)
http:
  port: 8080

... (omitted) ...

# Comment out the HTTPS block (not required in a test environment)
# https:
#   port: 443
#   certificate: /your/cert/path
#   private_key: /your/key/path

... (omitted) ...
# Set the data directory
data_volume: /data/harbor

4.4 Install Harbor

bash
# Create the data directory
[root@k8s-master01 ~]# mkdir -p /data/harbor

# Run the install script
[root@k8s-master01 ~]# cd /opt/harbor/
[root@k8s-master01 harbor]# ls
common.sh  harbor.yml  harbor.yml.tmpl  input  install.sh  LICENSE  prepare
[root@k8s-master01 harbor]# ./install.sh 

[Step 0]: checking if docker is installed ...

Note: docker version: 26.1.3

[Step 1]: checking docker-compose is installed ...

Note: Docker Compose version v2.27.0


[Step 2]: preparing environment ...

[Step 3]: preparing harbor configs ...
prepare base dir is set to /opt/harbor
Unable to find image 'goharbor/prepare:v2.11.0' locally
v2.11.0: Pulling from goharbor/prepare
2a715ab8d96e: Pull complete 
a4464fa7f83c: Pull complete 
e614c191ff97: Pull complete 
c7c6a92d495e: Pull complete 
8a1870f4fc3a: Pull complete 
476340c45540: Pull complete 
daddd88513e3: Pull complete 
76f33058ccdc: Pull complete 
9cc7b9f5c87d: Pull complete 
58fedf6bbb75: Pull complete 
Digest: sha256:b7062a3af02c1f32100e0da3576433b5cd419793039779cb3353c4af0d4f9a62
Status: Downloaded newer image for goharbor/prepare:v2.11.0
WARNING:root:WARNING: HTTP protocol is insecure. Harbor will deprecate http protocol in the future. Please make sure to upgrade to https
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir


Note: stopping existing Harbor instance ...
WARN[0000] /opt/harbor/docker-compose.yml: `version` is obsolete 


[Step 4]: starting Harbor ...
WARN[0000] /opt/harbor/docker-compose.yml: `version` is obsolete 
[+] Running 62/17
 ✔ proxy Pulled                                                                                                                   35.5s 
 ✔ postgresql Pulled                                                                                                              23.3s 
 ✔ registryctl Pulled                                                                                                             30.5s 
 ✔ jobservice Pulled                                                                                                              43.7s 
 ✔ registry Pulled                                                                                                                45.9s 
 ✔ portal Pulled                                                                                                                  39.2s 
 ✔ redis Pulled                                                                                                                   25.7s 
 ✔ core Pulled                                                                                                                    52.8s 
 ✔ log Pulled                                                                                                                      9.8s 
                                                                                                                                   
[+] Running 10/10
 ✔ Network harbor_harbor        Created                                                                                            0.1s 
 ✔ Container harbor-log         Started                                                                                            0.8s 
 ✔ Container redis              Started                                                                                            3.0s 
 ✔ Container harbor-portal      Started                                                                                            2.9s 
 ✔ Container registry           Started                                                                                            3.2s 
 ✔ Container harbor-db          Started                                                                                            3.2s 
 ✔ Container registryctl        Started                                                                                            3.2s 
 ✔ Container harbor-core        Started                                                                                            3.7s 
 ✔ Container harbor-jobservice  Started                                                                                            5.0s 
 ✔ Container nginx              Started                                                                                            5.0s 
✔ ----Harbor has been installed and started successfully.----
[root@k8s-master01 harbor]# 


# Verify the Harbor services
[root@k8s-master01 harbor]# docker-compose ps
NAME                IMAGE                                 COMMAND                   SERVICE       CREATED          STATUS                    PORTS
harbor-core         goharbor/harbor-core:v2.11.0          "/harbor/entrypoint...."   core          57 seconds ago   Up 53 seconds (healthy)   
harbor-db           goharbor/harbor-db:v2.11.0            "/docker-entrypoint...."   postgresql    57 seconds ago   Up 53 seconds (healthy)   
harbor-jobservice   goharbor/harbor-jobservice:v2.11.0    "/harbor/entrypoint...."   jobservice    57 seconds ago   Up 49 seconds (healthy)   
harbor-log          goharbor/harbor-log:v2.11.0           "/bin/sh -c /usr/loc..."   log           57 seconds ago   Up 56 seconds (healthy)   127.0.0.1:1514->10514/tcp
harbor-portal       goharbor/harbor-portal:v2.11.0        "nginx -g 'daemon of..."   portal        57 seconds ago   Up 53 seconds (healthy)   
nginx               goharbor/nginx-photon:v2.11.0         "nginx -g 'daemon of..."   proxy         57 seconds ago   Up 51 seconds (healthy)   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp
redis               goharbor/redis-photon:v2.11.0         "redis-server /etc/r..."   redis         57 seconds ago   Up 53 seconds (healthy)   
registry            goharbor/registry-photon:v2.11.0      "/home/harbor/entryp..."   registry      57 seconds ago   Up 53 seconds (healthy)   
registryctl         goharbor/harbor-registryctl:v2.11.0   "/home/harbor/start...."   registryctl   57 seconds ago   Up 53 seconds (healthy)   
[root@k8s-master01 harbor]# 

# View the logs
docker-compose logs -f

# Allow Docker to trust and connect to the internal HTTP private registry (Harbor)
[root@k8s-master01 ~]# cat /etc/docker/daemon.json 
{
  "registry-mirrors": [ "https://448b823acc1c4d5b8cf5c81ea9bfce60.mirror.swr.myhuaweicloud.com" ],
  "insecure-registries": ["192.168.100.167:8080"],
  "bip": "172.98.0.1/16",
  "default-address-pools": [
    {
      "base": "172.99.0.0/16",
      "size": 24
    }
  ],
  "iptables": true,
  "ip-forward": true,
  "ip-masq": true
}
[root@k8s-master01 ~]# systemctl restart docker.service
# insecure-registries is essentially an internal-network allowlist: it lets Docker connect to your self-hosted private registry that has no HTTPS certificate.
# Without it, docker push/pull is refused with the error: http: server gave HTTP response to HTTPS client

4.5 Access the Harbor Web UI

  • URL: http://192.168.100.167:8080
  • Default username: admin
  • Default password: Harbor12345 (change it after first login)
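Beyond the web UI, Harbor exposes a health endpoint that is handy for scripting. A dry-run sketch (it prints the command rather than calling the API; the endpoint path is Harbor's v2.0 REST API):

```shell
# Harbor's health API reports the status of each internal component.
HARBOR=192.168.100.167:8080
echo curl -s "http://$HARBOR/api/v2.0/health"
# A healthy instance reports "healthy" for each component in the JSON reply.
```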

5. Integrating Kubernetes with Harbor

5.1 Configure K8s Nodes to Trust the Harbor Registry

bash
# Run on all K8s nodes

# Add the private registry IP or domain name to config.toml; if a domain name is used, it must be resolvable.
[root@k8s-master01 ~]# grep -C 10 "myhuaweicloud" /etc/containerd/config.toml
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      #[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://448b823acc1c4d5b8cf5c81ea9bfce60.mirror.swr.myhuaweicloud.com"]
# Add the private registry
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.100.167:8080"]
        endpoint = ["http://192.168.100.167:8080"]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"
[root@k8s-master01 ~]# 

# Restart containerd
systemctl restart containerd

5.2 Create an Image Registry Secret in K8s

bash
# Run on the Master node
kubectl create secret docker-registry harbor-secret \
  --docker-server=192.168.100.167:8080 \
  --docker-username=admin \
  --docker-password=Harbor12345 \
  --docker-email=admin@example.com \
  -n default


# Inspect the secret
[root@k8s-master01 ~]# kubectl get secret harbor-secret
NAME            TYPE                             DATA   AGE
harbor-secret   kubernetes.io/dockerconfigjson   1      26s
[root@k8s-master01 ~]# 
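To have a Pod actually pull from Harbor with this secret, reference it via imagePullSecrets. A minimal sketch (the image reference assumes the nginx image that section 5.3 pushes to Harbor's default library project):

```shell
# Write a minimal Pod manifest that pulls from the private Harbor
# registry using the harbor-secret created above.
cat > /tmp/harbor-pod.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nginx-from-harbor
spec:
  containers:
  - name: nginx
    image: 192.168.100.167:8080/library/nginx:harbor-latest
  imagePullSecrets:
  - name: harbor-secret
EOF
# kubectl apply -f /tmp/harbor-pod.yaml   # run once the image exists in Harbor
```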

5.3 Push an Image to Harbor

bash
# Log in to Harbor
[root@k8s-master01 ~]# docker login 192.168.100.167:8080 -u admin -p Harbor12345
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
[root@k8s-master01 ~]# 


# Pull a test image
[root@k8s-master01 ~]# docker pull nginx:latest
latest: Pulling from library/nginx
ec781dee3f47: Pull complete 
bb3d0aa29654: Pull complete 
510ddf6557d6: Pull complete 
cde7a05ae428: Pull complete 
587e3d84dbb5: Pull complete 
3189680c601f: Pull complete 
5e815e07e569: Pull complete 
Digest: sha256:7150b3a39203cb5bee612ff4a9d18774f8c7caf6399d6e8985e97e28eb751c18
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
[root@k8s-master01 ~]# 

# Retag it for the private registry
[root@k8s-master01 ~]# docker tag nginx:latest 192.168.100.167:8080/library/nginx:harbor-latest
[root@k8s-master01 ~]# docker images
REPOSITORY                           TAG             IMAGE ID       CREATED         SIZE
nginx                                latest          0cf1d6af5ca7   3 days ago      161MB
192.168.100.167:8080/library/nginx   harbor-latest   0cf1d6af5ca7   3 days ago      161MB
goharbor/redis-photon                v2.11.0         184984d263c2   22 months ago   165MB
goharbor/harbor-registryctl          v2.11.0         f1220f69df90   22 months ago   162MB
goharbor/registry-photon             v2.11.0         95046ed33f52   22 months ago   84.5MB
goharbor/nginx-photon                v2.11.0         681ba9915791   22 months ago   153MB
goharbor/harbor-log                  v2.11.0         a0a812a07568   22 months ago   163MB
goharbor/harbor-jobservice           v2.11.0         bba862a3784a   22 months ago   159MB
goharbor/harbor-core                 v2.11.0         2cf11c05e0e2   22 months ago   185MB
goharbor/harbor-portal               v2.11.0         ea8fda08df5b   22 months ago   162MB
goharbor/harbor-db                   v2.11.0         9bd788ea0df6   22 months ago   271MB
goharbor/prepare                     v2.11.0         2baf15fbf5e2   22 months ago   207MB


# Push to Harbor
[root@k8s-master01 ~]# docker push 192.168.100.167:8080/library/nginx:harbor-latest
The push refers to repository [192.168.100.167:8080/library/nginx]
4e0a2a122e2f: Pushed 
794b45c9a1a2: Pushed 
190ba8fba6a7: Pushed 
1e9759e65d38: Pushed 
2c12d33655c1: Pushed 
bcce8ea688d8: Pushed 
188c9b34dfbe: Pushed 
harbor-latest: digest: sha256:d5590adee87e29c44bc13bfae4492585c861b9893e60b48c86728bf179f5d096 size: 1778
[root@k8s-master01 ~]# 
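The tag-then-push steps above follow a fixed naming pattern, `registry/project/image:tag`. A tiny helper function (a sketch; the names are this guide's values, not part of any tool) makes the pattern explicit:

```shell
# Build the registry-qualified reference used by docker tag / docker push.
# REGISTRY and PROJECT mirror this guide's setup; adjust for your environment.
REGISTRY="192.168.100.167:8080"
PROJECT="library"
target_ref() {
  echo "${REGISTRY}/${PROJECT}/${1}:${2}"
}

DST=$(target_ref nginx harbor-latest)
echo "would run: docker tag nginx:latest ${DST} && docker push ${DST}"
```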

5.4 Deploy an Application Using a Harbor Image

bash
# Create a test Deployment
cat > nginx-harbor.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-harbor
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-harbor
  template:
    metadata:
      labels:
        app: nginx-harbor
    spec:
      imagePullSecrets:
      - name: harbor-secret
      containers:
      - name: nginx
        image: 192.168.100.167:8080/library/nginx:harbor-latest
        ports:
        - containerPort: 80
EOF

# Deploy the application
[root@k8s-master01 ~]# kubectl apply -f nginx-harbor.yaml 
deployment.apps/nginx-harbor created

# Verify
[root@k8s-master01 ~]# kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
nginx-harbor-69b6479767-89gwh   1/1     Running   0          5s
nginx-harbor-69b6479767-t26w4   1/1     Running   0          5s
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]# kubectl describe pod nginx-harbor-69b6479767-89gwh
Name:             nginx-harbor-69b6479767-89gwh
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-worker1/192.168.100.168
Start Time:       Sat, 28 Mar 2026 15:23:48 +0800
Labels:           app=nginx-harbor
                  pod-template-hash=69b6479767
Annotations:      cni.projectcalico.org/containerID: 1322deb4a382703f13b9a97dc45f902afb93e3ef6757dfb7a3f5de8102b5cff5
                  cni.projectcalico.org/podIP: 10.244.194.69/32
                  cni.projectcalico.org/podIPs: 10.244.194.69/32
Status:           Running
IP:               10.244.194.69
IPs:
  IP:           10.244.194.69
... (output truncated) ...
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  19s   default-scheduler  Successfully assigned default/nginx-harbor-69b6479767-89gwh to k8s-worker1
  Normal  Pulled     18s   kubelet            Container image "192.168.100.167:8080/library/nginx:harbor-latest" already present on machine
  Normal  Created    18s   kubelet            Created container: nginx
  Normal  Started    18s   kubelet            Started container nginx
[root@k8s-master01 ~]# 

6. Verification and Testing

6.1 Cluster Status Checks

bash
# Check node status
kubectl get nodes

# Check system Pods
kubectl get pods -n kube-system

# Show cluster info
[root@k8s-master01 ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.100.167:6443
CoreDNS is running at https://192.168.100.167:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master01 ~]# 

# K8s taints the Master node by default to keep ordinary Pods off it; remove the taint with this command and Pods can be scheduled on the master node as well.
kubectl taint nodes k8s-master01 node-role.kubernetes.io/control-plane-

6.2 Harbor Functional Tests

bash
# Check Harbor container status
docker-compose ps -a

# Query the Harbor API
[root@k8s-master01 ~]# curl -u admin:Harbor12345 http://192.168.100.167:8080/api/v2.0/projects
[{"creation_time":"2026-03-28T05:55:17.609Z","current_user_role_id":1,"current_user_role_ids":[1],"cve_allowlist":{"creation_time":"0001-01-01T00:00:00.000Z","id":1,"items":[],"project_id":1,"update_time":"0001-01-01T00:00:00.000Z"},"metadata":{"public":"true"},"name":"library","owner_id":1,"owner_name":"admin","project_id":1,"repo_count":1,"update_time":"2026-03-28T05:55:17.609Z"}]

# The raw output above is hard to read; install jq to pretty-print it
[root@k8s-master01 ~]# dnf install -y jq

[root@k8s-master01 ~]# curl -u admin:Harbor12345 http://192.168.100.167:8080/api/v2.0/projects | jq '.'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   386  100   386    0     0  13367      0 --:--:-- --:--:-- --:--:-- 13785
[
  {
    "creation_time": "2026-03-28T05:55:17.609Z",
    "current_user_role_id": 1,
    "current_user_role_ids": [
      1
    ],
    "cve_allowlist": {
      "creation_time": "0001-01-01T00:00:00.000Z",
      "id": 1,
      "items": [],
      "project_id": 1,
      "update_time": "0001-01-01T00:00:00.000Z"
    },
    "metadata": {
      "public": "true"
    },
    "name": "library",
    "owner_id": 1,
    "owner_name": "admin",
    "project_id": 1,
    "repo_count": 1,
    "update_time": "2026-03-28T05:55:17.609Z"
  }
]
[root@k8s-master01 ~]# 

# Or use the tools the system already ships with: no extra install needed, good for a quick look
curl -u admin:Harbor12345 http://192.168.100.167:8080/api/v2.0/projects | python3 -m json.tool

# List the image repositories
[root@k8s-master01 ~]# curl -u admin:Harbor12345 http://192.168.100.167:8080/api/v2.0/projects/library/repositories | python3 -m json.tool
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   167  100   167    0     0  12170      0 --:--:-- --:--:-- --:--:-- 12846
[
    {
        "artifact_count": 1,
        "creation_time": "2026-03-28T06:57:28.416Z",
        "id": 1,
        "name": "library/nginx",
        "project_id": 1,
        "pull_count": 2,
        "update_time": "2026-03-28T07:15:42.156Z"
    }
]
[root@k8s-master01 ~]# 
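When jq is not available, the same extraction works with python3 alone. The sketch below runs against a saved copy of the response (the sample mirrors the `/projects/library/repositories` output above) and pulls out just the repository names:

```shell
# Extract repository names from a saved Harbor API response using only
# python3. The sample JSON mirrors the repositories output shown above.
cat > /tmp/repos.json << 'EOF'
[{"artifact_count": 1, "name": "library/nginx", "project_id": 1, "pull_count": 2}]
EOF
python3 -c 'import json; print("\n".join(r["name"] for r in json.load(open("/tmp/repos.json"))))'
# prints: library/nginx
```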

6.3 Application Access Test

bash
# Expose the application with a NodePort Service
[root@k8s-master01 ~]# kubectl expose deployment nginx-harbor --port=80 --type=NodePort
service/nginx-harbor exposed
[root@k8s-master01 ~]# kubectl get deployments.apps 
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-harbor   2/2     2            2           8m46s
[root@k8s-master01 ~]# 


# Inspect the Service
[root@k8s-master01 ~]# kubectl get service
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP        3h8m
nginx-harbor   NodePort    10.102.145.147   <none>        80:31250/TCP   10s
[root@k8s-master01 ~]#

# Access the application (any worker node IP + NodePort)
curl http://<worker-node-ip>:<node-port>

# You can also open it in a browser using a node IP:
http://192.168.100.167:31250/

7. Troubleshooting Common Issues

Issue                          Likely Cause                Fix
Pod stuck in Pending           network plugin not ready    check the Calico Pod status
Image pull failure             Harbor authentication       check the imagePullSecrets configuration
Node NotReady                  kubelet service failure     systemctl status kubelet
Harbor unreachable             firewall/port issue         check that port 8080 is open
Certificate verification fails HTTPS certificate issue     configure insecure-registries
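The table above maps symptoms to first diagnostic commands. As a convenience sketch (a hypothetical helper, not part of any tool), the mapping can be wrapped in a tiny dispatcher:

```shell
# Hypothetical triage helper: echo the first diagnostic command for a symptom.
# The mappings simply restate the troubleshooting table above.
triage() {
  case "$1" in
    pending)  echo "kubectl get pods -n kube-system | grep calico" ;;
    pull)     echo "kubectl describe pod <pod-name> | grep -i -A5 events" ;;
    notready) echo "systemctl status kubelet" ;;
    harbor)   echo "ss -lntp | grep 8080" ;;
    *)        echo "usage: triage {pending|pull|notready|harbor}" ;;
  esac
}
triage notready   # prints: systemctl status kubelet
```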

Handy Diagnostic Commands

bash
# Show detailed Pod status
kubectl describe pod <pod-name> -n <namespace>

# View Pod logs
kubectl logs <pod-name> -n <namespace>

# View node resource usage (requires metrics-server)
kubectl top nodes

# Tail the Harbor core service logs
docker-compose logs -f core

8. Learning Suggestions

Suggested Learning Path

  1. Week 1: finish the base environment setup and get familiar with core K8s concepts
  2. Week 2: deploy simple applications; understand Pod/Deployment/Service
  3. Week 3: configure Harbor and practice pushing/pulling images
  4. Week 4: study ConfigMap/Secret and persistent storage
  5. Week 5: explore Ingress, network policies, and resource limits

Hands-on Exercises

  • ✅ Deploy a variety of applications (MySQL, Redis, WordPress)
  • ✅ Practice rolling updates and rollbacks
  • ✅ Configure resource limits and HPA autoscaling
  • ✅ Try Harbor's image scanning and replication features
  • ✅ Learn to deploy applications with Helm

9. Cleanup (Optional)

bash
# Reset the Kubernetes cluster
kubeadm reset -f

# Clean up network configuration
ipvsadm --clear
iptables -F && iptables -t nat -F && iptables -t mangle -F
iptables -X

# Tear down Harbor
cd /opt/harbor
docker-compose down -v

# Remove data (destructive; double-check before running)
rm -rf /data/harbor

💡 Tip: for a learning/test environment, take regular VM snapshots so you can reset and restore easily. A production environment additionally needs high availability, a backup strategy, and monitoring/alerting.
