Ubuntu Kubernetes 1.32 cluster installation

Allow root login

Run on all machines

Set the root account password
sudo passwd root

Edit the SSH config /etc/ssh/sshd_config
Change #PermitRootLogin prohibit-password to PermitRootLogin yes
sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config

Restart the SSH service
sudo systemctl restart ssh
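
To double-check the effective setting, an optional verification:
sudo sshd -T | grep -i permitrootlogin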

IP address configuration

Run on all machines

master node IP: 192.168.136.129
node01 node IP: 192.168.136.130

vim /etc/netplan/50-cloud-init.yaml
network:
  version: 2
  ethernets:
    ens33:
      addresses:
      - "192.168.136.130/24"
      routes:
      - to: "default"
        via: "192.168.136.2"
      nameservers:
        addresses: [8.8.8.8,8.8.4.4,192.168.136.2]
Change the address to the one intended for each node and add the DNS servers.
Apply the configuration so it takes effect:
sudo netplan apply
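
To confirm the new address and default route are in place (assuming the interface is named ens33 as above):
ip addr show ens33
ip route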

Hostname configuration

master node
hostnamectl set-hostname master && bash
node01 (worker) node
hostnamectl set-hostname node01 && bash

Hostname and IP address resolution

Run on both the master and the node machines

cat >> /etc/hosts << EOF
192.168.136.129 master
192.168.136.130 node01
EOF

Disable the firewall

Run on all machines

systemctl disable --now ufw

Disable swap

Run on all machines

Permanently disable the swap partition
Edit /etc/fstab and comment out the swap entry
sed -ri 's/.*swap.*/#&/' /etc/fstab
Verify that the entry is commented out
cat /etc/fstab
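
Commenting the fstab entry only takes effect after a reboot, and kubelet refuses to run while swap is active, so it is also worth turning swap off for the current session:
sudo swapoff -a
free -h   # the Swap line should show 0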

Time synchronization

Run on all nodes

timedatectl set-timezone Asia/Shanghai
Restart the time-sync service
systemctl restart systemd-timesyncd.service

Check whether time is synchronized
timedatectl status
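
If the status output shows NTP as inactive, enable it first (this assumes systemd-timesyncd, the Ubuntu default, is in use):
sudo timedatectl set-ntp true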

Enable IP forwarding

Run on all nodes

This is enabled so that each host can act as a router: the network plugin installed later relies on the nodes forwarding traffic so that Pods can reach one another.

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF

Apply the settings
sudo sysctl --system

Verify that forwarding is enabled
sysctl net.ipv4.ip_forward
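
Only net.ipv4.ip_forward is strictly required here, but containerd's overlay snapshotter and bridged Pod traffic commonly rely on the overlay and br_netfilter kernel modules; loading them is harmless. An optional sketch:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter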

Install and configure containerd

Run on all nodes

Update the package index
apt update

Install containerd
apt install -y containerd

Check the version
containerd -v

Generate the default configuration file
sudo mkdir -p /etc/containerd
sudo containerd config default > /etc/containerd/config.toml

Edit /etc/containerd/config.toml
Change SystemdCgroup = false to SystemdCgroup = true
[This is changed because the kubelet and the underlying container runtime (containerd here) both hook into cgroups to enforce resource management for Pods and containers, and the runtime and Kubernetes must use the same cgroup driver. Ubuntu's init system is systemd, and since Kubernetes v1.22 kubeadm defaults cgroupDriver to systemd when it is not set in KubeletConfiguration, so only containerd needs to be switched.]
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

Edit /etc/containerd/config.toml again and change sandbox_image = "registry.k8s.io/pause:3.8" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10", otherwise the image pull will fail.
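
The same edit can be scripted; a one-line sketch, assuming the generated config still contains the registry.k8s.io/pause:3.8 value exactly as written above:
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.8"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10"#' /etc/containerd/config.toml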

Edit /etc/containerd/config.toml once more to configure registry mirrors; under the registry section set:
config_path = "/etc/containerd/certs.d"
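
In the generated config this option lives under [plugins."io.containerd.grpc.v1.cri".registry]; assuming its default value is the empty string, it can be changed with:
sed -i 's#config_path = ""#config_path = "/etc/containerd/certs.d"#' /etc/containerd/config.toml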

mkdir -p /etc/containerd/certs.d/docker.io
cat > /etc/containerd/certs.d/docker.io/hosts.toml <<EOF
[host."https://pft7f97f.mirror.aliyuncs.com"]
  capabilities = ["pull"]
[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]
[host."https://docker.mirrors.ustc.edu.cn"]
  capabilities = ["pull"]
EOF

Write /etc/crictl.yaml
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

Restart containerd
systemctl restart containerd
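
Optionally make sure containerd is enabled at boot and came back up cleanly:
systemctl enable containerd
systemctl status containerd --no-pager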

Install the Kubernetes packages

Run on all nodes

1. A quick overview
kubeadm bootstraps the whole cluster automatically. Under the hood, Kubernetes is just a set of containerized services cooperating to manage the cluster, so if you knew exactly which containers to install you could do without it.
kubelet is the agent on every node: it manages Pods, resources, logs, node health and so on. It is not a container but a local daemon, so it must be installed.
kubectl is the command-line tool used to interact with Kubernetes; it must be installed as well.
2. Installation
First make sure the package sources are up to date:
sudo apt update && sudo apt upgrade -y
sudo apt install -y apt-transport-https ca-certificates curl gpg
Create /etc/apt/keyrings first if the directory does not exist
sudo mkdir -p -m 755 /etc/apt/keyrings

Download the public signing key for the Kubernetes package repository. Note: the URL contains v1.32, but every version serves the same key in this format, so it makes no difference which version appears here; change it if you like:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg && \
sudo chmod 644 /etc/apt/keyrings/kubernetes-apt-keyring.gpg

Add the Kubernetes apt repository; to use another version, just replace v1.32 in the URL
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' \
| sudo tee /etc/apt/sources.list.d/kubernetes.list

Install the packages
sudo apt update && \
sudo apt install -y kubelet kubectl kubeadm && \
sudo apt-mark hold kubelet kubeadm kubectl

Start kubelet and enable it at boot:
sudo systemctl enable --now kubelet

Check whether the installation succeeded:
kubeadm version

Important: v1.32.6 still expects pause:3.8 (we configured 3.10 to be pulled), and pause:3.8 itself cannot be downloaded, so pull another version and retag it
ctr --namespace k8s.io image pull registry.aliyuncs.com/google_containers/pause:3.9
ctr --namespace k8s.io image tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.8
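
To confirm the retag, list the pause images in the k8s.io namespace:
ctr --namespace k8s.io image ls | grep pause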

Initialize the control plane with kubeadm

Run only on the master

This command pre-pulls the required images so that the subsequent init runs much faster
sudo kubeadm config images pull --kubernetes-version=v1.32.6 --cri-socket=unix:///run/containerd/containerd.sock --image-repository=registry.aliyuncs.com/google_containers 

Create the file /home/kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.32.6"
controlPlaneEndpoint: "192.168.136.130:6443"  # set to the existing master node address
apiServer:
  certSANs:
    - "192.168.136.130"  # 原有主节点
networking:
  serviceSubnet: "10.50.0.0/16"
  podSubnet: "10.60.0.0/16"
imageRepository: "registry.aliyuncs.com/google_containers"
etcd:
  local:
    dataDir: "/var/lib/etcd"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"

apiserver-advertise-address: the master node's IP address
control-plane-endpoint: remember the mapping we put in /etc/hosts; fill in the master's address or hostname
kubernetes-version: the Kubernetes version, nothing more to say
service-cidr: the Service network, i.e. the virtual addresses that expose a group of Pods behind one endpoint and load-balance requests across them; it needs its own subnet
pod-network-cidr: the subnet each Pod gets its address from
cri-socket: the container runtime socket (a flag-form sketch of all of these follows below)
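
For reference, the same parameters could be passed to kubeadm init as flags instead of the config file; this is only a sketch of the mapping (the config file above is what is actually used here):
kubeadm init \
  --control-plane-endpoint=192.168.136.130:6443 \
  --apiserver-advertise-address=192.168.136.130 \
  --kubernetes-version=v1.32.6 \
  --service-cidr=10.50.0.0/16 \
  --pod-network-cidr=10.60.0.0/16 \
  --cri-socket=unix:///run/containerd/containerd.sock \
  --image-repository=registry.aliyuncs.com/google_containers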

# Initialize the master node
kubeadm init --config /home/kubeadm-config.yaml 

After init completes, two join commands are printed: one with --control-plane and one without. The one without is for worker nodes; a worker only needs to follow the steps above through installing kubelet, kubectl and kubeadm, then run that command to join the cluster as a worker. The command with --control-plane joins the node as an additional control-plane node, which can take over if the original master goes down.

# The token printed after init is what other nodes use to join; add --control-plane to join as a control-plane node
# Before joining as a control-plane node, pull the pause image first
#ctr --namespace k8s.io image pull registry.aliyuncs.com/google_containers/pause:3.9
#ctr --namespace k8s.io image tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.8
# kubeadm join 192.168.136.130:6443 --token tfkx00.vwawaawoyrlq5c88 \
        --discovery-token-ca-cert-hash sha256:b85c1ea9857ca323b4511d12063343e5f56b239e0557e6c47e627b0af05f830a \
        --control-plane

# If the join token has expired, create a new one:
# kubeadm token create --print-join-command
 
# Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get pods -A # note: kubectl only works on the master; it will be rejected on the other nodes

Check the cluster status
[root@master yum.repos.d]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   ok
[root@master yum.repos.d]#
[root@master yum.repos.d]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-757cc6c8f8-8qjj8         0/1     Pending   0          3m33s
coredns-757cc6c8f8-mxfz6         0/1     Pending   0          3m33s
etcd-master                      1/1     Running   0          3m39s
kube-apiserver-master            1/1     Running   0          3m39s
kube-controller-manager-master   1/1     Running   0          3m40s
kube-proxy-jkbc8                 1/1     Running   0          3m33s
kube-scheduler-master            1/1     Running   0          3m39s
[root@master yum.repos.d]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   3m54s   v1.33.2
[root@master yum.repos.d]#

Add worker nodes

Copy this command straight from the master (it is printed after kubeadm init finishes) and run it on every worker node
kubeadm join 192.168.136.130:6443 --token tfkx00.vwawaawoyrlq5c88 \
        --discovery-token-ca-cert-hash sha256:b85c1ea9857ca323b4511d12063343e5f56b239e0557e6c47e627b0af05f830a
        
The --token in this command expires after 24 hours; if it has expired, run:
kubeadm token create

Also, if you did not record the --discovery-token-ca-cert-hash that follows kubeadm join, you can extract the hash from the master's CA certificate:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null \
| sha256sum | awk '{print $1}'

Check the node status

Run on the master node

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   4m22s   v1.33.0
node01   NotReady   <none>          60s     v1.33.0
node02   NotReady   <none>          58s     v1.33.0
[root@master ~]#

[root@master ~]# kubectl get po -A
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-746c97786-bnqf4          0/1     Pending   0          4m52s
kube-system   coredns-746c97786-jjbtl          0/1     Pending   0          4m52s
kube-system   etcd-master                      1/1     Running   0          4m58s
kube-system   kube-apiserver-master            1/1     Running   0          4m59s
kube-system   kube-controller-manager-master   1/1     Running   0          4m58s
kube-system   kube-proxy-bdvbf                 1/1     Running   0          4m52s
kube-system   kube-proxy-d6sfp                 1/1     Running   0          99s
kube-system   kube-proxy-v88vj                 1/1     Running   0          97s
kube-system   kube-scheduler-master            1/1     Running   0          4m58s
[root@master ~]#

Install the Calico network plugin

Run on the master node

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/tigera-operator.yaml
[If something goes wrong, removal is just a matter of changing create to delete:
kubectl delete -f https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/tigera-operator.yaml
]

Check whether it succeeded
[root@master home]# kubectl get ns
NAME              STATUS   AGE
default           Active   13m
kube-node-lease   Active   13m
kube-public       Active   13m
kube-system       Active   13m
tigera-operator   Active   3m29s   ### created once the command succeeds
[root@master home]#
[root@master home]# kubectl get pods -n tigera-operator
NAME                               READY   STATUS    RESTARTS   AGE
tigera-operator-747864d56d-djjb7   1/1     Running   0          5m58s
[root@master home]#

Download the configuration
wget https://raw.githubusercontent.com/projectcalico/calico/v3.30.2/manifests/custom-resources.yaml

Edit custom-resources.yaml
vi custom-resources.yaml
spec:
  # Configures Calico networking.
  registry: docker.1ms.run   ### added: image registry mirror
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 26
      cidr: 10.60.0.0/16   ### change to the Pod subnet set at init (podSubnet / --pod-network-cidr 10.60.0.0/16)
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()


Run the install
kubectl create -f custom-resources.yaml

Output
root@master:/home# kubectl apply -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
goldmane.operator.tigera.io/default created
whisker.operator.tigera.io/default created
root@master:/home#

To delete what custom-resources.yaml created:
kubectl delete -f custom-resources.yaml

Check whether it succeeded
[root@master home]# kubectl get ns
NAME               STATUS   AGE
calico-apiserver   Active   37s   ### created after the plugin is installed
calico-system      Active   35s   ### created after the plugin is installed
default            Active   20m
kube-node-lease    Active   20m
kube-public        Active   20m
kube-system        Active   20m
tigera-operator    Active   10m
[root@master home]#

root@master:/home# kubectl get pods -A
NAMESPACE          NAME                                      READY   STATUS              RESTARTS   AGE
calico-apiserver   calico-apiserver-6cbff5b8f6-kbrpw         0/1     ContainerCreating   0          3m35s
calico-apiserver   calico-apiserver-6cbff5b8f6-tvprw         0/1     ContainerCreating   0          3m35s
calico-system      calico-kube-controllers-9dcf49c5b-jmzxt   0/1     ContainerCreating   0          3m30s
calico-system      calico-node-4svnl                         0/1     Running             0          3m31s
calico-system      calico-node-c2jmz                         0/1     PodInitializing     0          3m31s
calico-system      calico-node-gdtnk                         0/1     PodInitializing     0          3m31s
calico-system      calico-typha-788475647d-8g9jm             1/1     Running             0          3m30s
calico-system      calico-typha-788475647d-h5hgz             1/1     Running             0          3m31s
calico-system      csi-node-driver-bjp44                     0/2     ContainerCreating   0          3m31s
calico-system      csi-node-driver-n2trp                     0/2     ContainerCreating   0          3m31s
calico-system      csi-node-driver-p8rs7                     0/2     ContainerCreating   0          3m31s
calico-system      goldmane-6f9658575f-mxpfn                 0/1     ContainerCreating   0          3m32s
calico-system      whisker-79b7d8f8cc-wnbpj                  0/2     ContainerCreating   0          13s
kube-system        coredns-6766b7b6bb-b7m2j                  0/1     ContainerCreating   0          159m
kube-system        coredns-6766b7b6bb-pcp5w                  0/1     ContainerCreating   0          159m
kube-system        etcd-master                               1/1     Running             0          159m
kube-system        kube-apiserver-master                     1/1     Running             0          159m
kube-system        kube-controller-manager-master            1/1     Running             0          159m
kube-system        kube-proxy-5dgz2                          1/1     Running             0          159m
kube-system        kube-proxy-75nss                          1/1     Running             0          16m
kube-system        kube-proxy-9t5tn                          1/1     Running             0          16m
kube-system        kube-scheduler-master                     1/1     Running             0          159m
tigera-operator    tigera-operator-747864d56d-jkzg9          1/1     Running             0          9m40s
root@master:/home#

If node initialization fails

If a node fails to initialize, first clean up everything about it in the cluster:
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data # evict all Pods from the node
kubectl delete node <node-name>  # remove the node from the cluster

Then run on the node itself:
sudo kubeadm reset # confirm with y when prompted
sudo rm -rf /etc/cni/net.d # remove the CNI configuration

# Flush iptables rules
sudo iptables -F
sudo iptables -t nat -F
sudo iptables -t mangle -F
sudo iptables -t raw -F
sudo iptables -X
sudo rm -rf ~/.kube # delete the kubeconfig file (the path may differ depending on your setup)
sudo reboot # optional: reboot the node
After resetting, re-initialize or re-join as needed

Install the Dashboard

Install Helm

wget https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
tar xf helm-v3.13.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/bin/
helm version

Install kubernetes-dashboard 7.x

wget https://github.com/kubernetes/dashboard/releases/download/kubernetes-dashboard-7.11.1/kubernetes-dashboard-7.11.1.tgz

tar zxvf kubernetes-dashboard-7.11.1.tgz

vim kubernetes-dashboard/values.yaml

    repository: docker.io/kubernetesui/dashboard-auth
    repository: docker.io/kubernetesui/dashboard-web
    repository: docker.io/kubernetesui/dashboard-metrics-scraper
    repository: docker.io/kubernetesui/dashboard-api
==== change to ====
    repository: m.daocloud.io/docker.io/kubernetesui/dashboard-auth
    repository: m.daocloud.io/docker.io/kubernetesui/dashboard-web
    repository: m.daocloud.io/docker.io/kubernetesui/dashboard-metrics-scraper
    repository: m.daocloud.io/docker.io/kubernetesui/dashboard-api
    
vim kubernetes-dashboard/charts/kong/values.yaml 
  repository: kong
==== change to ====
  repository: m.daocloud.io/docker.io/kong
  
Install
helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard

[root@master home]# helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
Release "kubernetes-dashboard" does not exist. Installing it now.
NAME: kubernetes-dashboard
LAST DEPLOYED: Thu Jul  3 23:05:24 2025
NAMESPACE: kubernetes-dashboard
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
*************************************************************************************************
*** PLEASE BE PATIENT: Kubernetes Dashboard may need a few minutes to get up and become ready ***
*************************************************************************************************

Congratulations! You have just installed Kubernetes Dashboard in your cluster.

To access Dashboard run:
  kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443

NOTE: In case port-forward command does not work, make sure that kong service name is correct.
      Check the services in Kubernetes Dashboard namespace using:
        kubectl -n kubernetes-dashboard get svc

Dashboard will be available at:
  https://localhost:8443
[root@master home]# 

After all Pods are running normally, check the Services and Pods
[root@master home]# kubectl get pod -n kubernetes-dashboard
NAME                                                   READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-api-569fc47487-jv2lq              1/1     Running   0          3m3s
kubernetes-dashboard-auth-548d7dbb84-blk5l             1/1     Running   0          3m3s
kubernetes-dashboard-kong-5fb4d9f4d7-qs5tk             1/1     Running   0          3m3s
kubernetes-dashboard-metrics-scraper-9b9f5c9d5-wdwnh   1/1     Running   0          3m3s
kubernetes-dashboard-web-6c8f4d7666-8v4rr              1/1     Running   0          3m3s
[root@master home]#

[root@master home]# kubectl get svc -n kubernetes-dashboard
NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes-dashboard-api               ClusterIP   10.96.100.19    <none>        8000/TCP   3m23s
kubernetes-dashboard-auth              ClusterIP   10.96.234.107   <none>        8000/TCP   3m23s
kubernetes-dashboard-kong-proxy        ClusterIP   10.96.111.82    <none>        443/TCP    3m23s
kubernetes-dashboard-metrics-scraper   ClusterIP   10.96.69.0      <none>        8000/TCP   3m23s
kubernetes-dashboard-web               ClusterIP   10.96.68.187    <none>        8000/TCP   3m23s
[root@master home]#

Create a user and token

# Create an admin service account
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard  
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin  
# Get a token
# Note: the token is valid for 1 hour by default; extend it with --duration=24h
root@master:/home# kubectl -n kubernetes-dashboard create token dashboard-admin --duration=24h
eyJhbGciOiJSUzI1NiIsImtpZCI6IlFYaHpYRnQybzVaa1F2TlFBOUw3U2FJZGtmcUdRWWh4NV9PRWxOOWQ2ZkkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzUxOTYwMDEyLCJpYXQiOjE3NTE4NzM2MTIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiYzhiYjk4MGYtZWU1YS00ZjcxLWE1MDgtOWMzN2FjZTQ5OTVkIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkYXNoYm9hcmQtYWRtaW4iLCJ1aWQiOiI3ODkyYzUxNi0wZDUwLTQzYzktODY2MS03MTFmY2UzNTFjZjEifX0sIm5iZiI6MTc1MTg3MzYxMiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.BDEYvPwtiAJ3UpazV0CyCDcMYxoIwSiI_Bnlu8sep-CtgbbebKGHv5oysWPyaZjlSaEKEiHrqANbAabpvC3QP97eAKJZloLrGsTQxRxH3rPYaRjD6VJEWkkLkwG_Autat55WhvJmAvmmYUdtGWuPI-MSPKrqhEAAD9Mw-ToWRLzdOBP91qKDOaTiaGEJVents-aVw6RHZuNlvrJ7BmBgRHi0uWsT4f6weI6Hlv4GI6ytMn2skGKYwRKHV-pshU-czTCTnjMDGHM9y8Pj4tcQ6DVMaPEpqrdQOjcVmYA8qSFN0PAUPdx7JNdQE5yaYLtDcuBU0xDoO-e70GDyycJamw
root@master:/home#

Log in to the web UI to verify

By default the Dashboard is not exposed through a NodePort, so temporarily change the Service type.

# Change the Service type
kubectl patch svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'


# To pin a specific port, set a fixed nodePort; here it is changed to 30083:
kubectl patch svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard --type='json' -p '[{"op":"add","path":"/spec/ports/0/nodePort","value":30083}]'

If no port was specified, check which port was exposed by default
kubectl get svc -n kubernetes-dashboard
root@master:/home# kubectl get svc -n kubernetes-dashboard
NAME                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard-api               ClusterIP   10.50.224.66    <none>        8000/TCP        7m33s
kubernetes-dashboard-auth              ClusterIP   10.50.21.201    <none>        8000/TCP        7m33s
kubernetes-dashboard-kong-proxy        NodePort    10.50.132.186   <none>        443:30083/TCP   7m33s
kubernetes-dashboard-metrics-scraper   ClusterIP   10.50.22.114    <none>        8000/TCP        7m33s
kubernetes-dashboard-web               ClusterIP   10.50.237.7     <none>        8000/TCP        7m33s
root@master:/home#
Since a fixed port was specified above, the exposed port is 30083 (443:30083/TCP)

# Open the web UI
https://192.168.136.130:30083/

Enter the token generated above (in the "Create a user and token" step)
