Cloud-Native Kubernetes: Deploying KubeSphere on K8S 1.26

Table of Contents

I. Experiment

1. Environment

2. Deploying Helm on K8S 1.26

3. Deploying KubeSphere on K8S 1.26

4. Installing KubeSphere DevOps

II. Issues

1. How to install Zadig

2. Zadig extension installation fails

3. How Calico enables cross-node communication

4. How to reclaim disk space used by Docker

5. How to force-delete resources

6. A namespace that cannot be deleted

7. How a Job deletes resources

8. Using ctr to manage containerd images


I. Experiment

1. Environment

(1) Hosts

Table 1: Hosts

| Host | Role | Version | IP | Notes |
|---------|-----------------|--------|-----------------|------|
| master1 | K8S master node | 1.26.0 | 192.168.204.190 | |
| node1 | K8S worker node | 1.26.0 | 192.168.204.191 | |

(2) Connect with Termius

(3) View the cluster from the master node

bash
# 1) List nodes
kubectl get node

# 2) Show detailed node information
kubectl get node -o wide

2. Deploying Helm on K8S 1.26

(1) Reference

bash
https://github.com/helm/helm/tags

Check which Helm versions are compatible with the K8S cluster.

(2) Strategy

The current K8S cluster is version 1.26.0, which is compatible with Helm 3.11.x.

Version 3.11.0 is therefore chosen.

(3) Download

bash
https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz

Transfer the archive to the node via SFTP in Termius.

(4) Extract

bash
tar -zxvf helm-v3.11.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version

(5) Command completion

bash
source <(helm completion bash)
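
To make the completion persist across shell sessions, the completion script can also be written to the bash-completion directory (a minimal sketch, assuming the bash-completion package is installed and /etc/bash_completion.d exists):

bash
# Generate the helm completion script once so new shells load it automatically
helm completion bash > /etc/bash_completion.d/helm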

3. Deploying KubeSphere on K8S 1.26

(1) Reference

bash
https://docs.kubesphere.com.cn/v4.0/02-quickstart/01-install-ks-core/

Installation on Kubernetes v1.26.x is supported.

(2) Install

bash
helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-0.4.0.tgz

Full installation output:

bash
[root@master1 opt]# helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-0.4.0.tgz
Release "ks-core" does not exist. Installing it now.
NAME: ks-core
LAST DEPLOYED: Wed May 22 11:57:23 2024
NAMESPACE: kubesphere-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several seconds for KubeSphere deployment to complete.

1. Make sure KubeSphere components are running:

     kubectl get pods -n kubesphere-system

2. Then you should be able to visit the console NodePort:

     Console: http://192.168.204.190:30880

3. To login to your KubeSphere console:

     Account: admin
     Password: "P@88w0rd"
     NOTE: Please change the default password after login.

For more details, please visit https://kubesphere.io.

(3) Check the pods

bash
kubectl get pods -n kubesphere-system
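
Optionally, wait until every pod in the namespace is Ready before opening the console (a sketch; the 10-minute timeout is an arbitrary choice, not a KubeSphere requirement):

bash
kubectl -n kubesphere-system wait --for=condition=Ready pod --all --timeout=600s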

(4) Access the console

bash
http://192.168.204.190:30880

(5) Enter the default account and password

bash
Account: admin
Password: P@88w0rd

(6) Change the password

(7) Enter the system

(8) Cluster management

(9) Extension Center

(10) Search the marketplace

Keyword: "CI/CD"

4. Installing KubeSphere DevOps

(1) Reference

KubeSphere Extension Marketplace:

bash
https://kubesphere.com.cn/marketplace/extensions/devops/

Alternative installation method:

bash
https://www.kubesphere.io/zh/docs/v3.4/quick-start/minimal-kubesphere-on-k8s/

(2) Sync the cloud account

(3) Install

KubeSphere DevOps version requirements:

bash
Kubernetes version >= 1.19.0-0

KubeSphere version >= 4.0.0-0

(4) Next

(5) Start the installation

Installing

Succeeded

(6) Next

(7) Select the cluster

(8) Confirm

(9) Installation succeeded

(10) View the cluster

II. Issues

1. How to install Zadig

(1) Reference

KubeSphere Extension Marketplace:

bash
https://kubesphere.com.cn/marketplace/extensions/zadig/

Zadig version requirements:

bash
Kubernetes version 1.16.0 - 1.26.0

KubeSphere version >= 4.0.0-0

(2) Zadig homepage

bash
https://koderover.com/zadig

(3) Script-based installation

bash
https://docs.koderover.com/zadig/Zadig%20v2.0.0/install/helm-deploy/

(4) Installing Zadig from KubeSphere

Install via the official script:

bash
curl -LO https://github.com/koderover/zadig/releases/download/v2.0.0/install_quickstart.sh
chmod +x ./install_quickstart.sh

(8) Declare variables

bash
export IP=192.168.204.190
export PORT=30090

(9) Install

bash
./install_quickstart.sh

The installation process is expected to take around 10 minutes.
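
Once the script finishes, the Zadig components can be checked (a sketch; it assumes the script installs everything into the zadig namespace, which matches the namespace used later when force-deleting pods):

bash
kubectl -n zadig get pods
kubectl -n zadig get svc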

2. Zadig extension installation fails

(1) Error

(2) Root cause analysis

Check the logs:

bash
2024-05-22T12:18:54.721997147+08:00 client.go:482: [debug] Ignoring delete failure for "zadig-post-upgrade" batch/v1, Kind=Job: jobs.batch "zadig-post-upgrade" not found

(3) Solution

Look up the related issue:

bash
https://github.com/koderover/zadig/issues/2417

Uninstall first.

Then reinstall.

Next

Start the installation

Installing

The same error occurs again:

bash
2024-05-22T12:39:41.303302680+08:00 upgrade.go:442: [debug] warning: Upgrade "zadig" failed: post-upgrade hooks failed: 1 error occurred:
2024-05-22T12:39:41.303305360+08:00 	* job failed: BackoffLimitExceeded
2024-05-22T12:39:41.303306374+08:00 
2024-05-22T12:39:41.303307165+08:00 
2024-05-22T12:39:41.327312921+08:00 Error: UPGRADE FAILED: post-upgrade hooks failed: 1 error occurred:
2024-05-22T12:39:41.327345989+08:00 	* job failed: BackoffLimitExceeded
2024-05-22T12:39:41.327352567+08:00 
2024-05-22T12:39:41.327356487+08:00 
2024-05-22T12:39:41.327363940+08:00 helm.go:84: [debug] post-upgrade hooks failed: 1 error occurred:
2024-05-22T12:39:41.327368417+08:00 	* job failed: BackoffLimitExceeded
2024-05-22T12:39:41.327371965+08:00 
2024-05-22T12:39:41.327375169+08:00 
2024-05-22T12:39:41.327380602+08:00 UPGRADE FAILED

At this point, fall back to the script-based installation.

Inspecting the script shows it bundles Helm 3.6.1, which conflicts with the Helm version already installed on master1.

Deploy on the node1 node instead.

If the error shows an access timeout, retry when the network connection is stable:

bash
Error: UPGRADE FAILED: post-upgrade hooks failed: timed out waiting for the condition
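
Helm's default wait/hook timeout is 5 minutes, so on a slow network a post-upgrade hook can time out before images finish pulling. If the relevant helm upgrade is re-run by hand rather than through the script, the timeout can be raised (a sketch; the release name, chart reference and namespace below are placeholders, not values taken from the script):

bash
helm upgrade --install zadig <chart-reference> -n zadig --timeout 20m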

3. How Calico enables cross-node communication

(1) Check

The Calico network plugin:

bash
kubectl -n kube-system get po -owide | grep calico-node

Two pod CIDR blocks have been allocated, one to master1 and one to node1:

bash
kubectl get ipamblocks
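
To see at a glance which node each block is affined to without dumping full YAML, a jsonpath query can be used (a sketch; it assumes the IPAMBlock resources expose spec.cidr and spec.affinity, as in recent Calico releases):

bash
kubectl get ipamblocks -o jsonpath='{range .items[*]}{.spec.cidr}{"\t"}{.spec.affinity}{"\n"}{end}'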

(2) View the block details

bash
kubectl get ipamblocks 10-244-137-64-26 -oyaml
kubectl get ipamblocks 10-244-166-128-26 -oyaml

(3) Check the routing tables

View the routing table on master1 (requests destined for 10.244.166.128/26 are forwarded through the tunl0 interface to 192.168.204.191, i.e. node1; pod IPs local to master1 are routed directly to their corresponding calico interfaces):

bash
route -n

View the routing table on node1 (similar entries are visible there: requests destined for 10.244.137.64/26 are forwarded through tunl0 to 192.168.204.190, i.e. master1):

bash
route -n
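
To confirm which route a particular pod IP would take, ip route get can be run on either node (a sketch; the address is the start of the node1 block shown above):

bash
ip route get 10.244.166.128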

4. How to reclaim disk space used by Docker

(1) Query

Check Docker disk usage on the master1 node:

bash
docker system df

docker system df -v

Check Docker disk usage on the node1 node.

(2) Clean up

On the master1 node, delete only stopped containers:

bash
docker container prune

On the node1 node, delete only stopped containers in the same way.
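
For a broader cleanup, dangling images and build cache can be pruned as well; these are standard Docker CLI commands, but note that adding -a removes every image not referenced by a container, so use it carefully on cluster nodes:

bash
# Remove dangling (untagged) images only
docker image prune

# Remove stopped containers, unused networks, dangling images and build cache
docker system prune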

5. How to force-delete resources

(1) Delete pods

Command:

bash
kubectl delete pod <your-pod-name> -n <name-space> --force --grace-period=0

Delete:

bash
kubectl delete --all pods -n zadig --force --grace-period=0
kubectl delete --all pods -n argocd --force --grace-period=0
kubectl delete --all pods -n extension-devops --force --grace-period=0
kubectl delete --all pods -n extension-zadig --force --grace-period=0

(2) Delete PVs and PVCs

bash
kubectl patch pv xxx -p '{"metadata":{"finalizers":null}}'
kubectl patch pvc xxx -p '{"metadata":{"finalizers":null}}'
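
When many PVCs in one namespace are stuck, the finalizer patch can be applied in a loop (a minimal sketch; the zadig namespace is only an example):

bash
for pvc in $(kubectl get pvc -n zadig -o name); do
  kubectl patch "$pvc" -n zadig -p '{"metadata":{"finalizers":null}}'
done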

6. A namespace that cannot be deleted

(1) Error

The kubesphere-devops-system namespace stays in the Terminating state.

(2) Root cause analysis

Pick a Terminating namespace and inspect its finalizers.

bash
kubectl get namespace kubesphere-devops-system -o yaml

The output shows:

yaml
spec:
  finalizers:
  - kubernetes

(3) Solution

Export the namespace as JSON to a file:

bash
kubectl get namespace kubesphere-devops-system  -o json >tmp.json

Edit tmp.json and delete the value of the finalizers field:

json
    "spec": {                    # delete from this line
        "finalizers": [
            "kubernetes"
        ]
    },                           # delete through this line

Start kubectl proxy (after running this command, the current terminal will block):

bash
kubectl proxy

Open a new terminal window and run:

bash
curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/kubesphere-devops-system/finalize

Confirm that the namespace stuck in Terminating has been deleted.
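
An equivalent approach that avoids running kubectl proxy is to call the namespace finalize subresource directly through kubectl (a sketch using the same tmp.json):

bash
kubectl replace --raw "/api/v1/namespaces/kubesphere-devops-system/finalize" -f ./tmp.json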

7. How a Job deletes resources

(1) Check the status

The Job is currently Completed.

(2) View the YAML

yaml
kind: Job
apiVersion: batch/v1
metadata:
  name: devops-post-delete
  namespace: extension-devops
  labels:
    controller-uid: 0e83e553-482f-4755-834a-9c0f07d4c5b9
    job-name: devops-post-delete
  annotations:
    batch.kubernetes.io/job-tracking: ''
    helm.sh/hook: post-delete
    helm.sh/hook-delete-policy: 'before-hook-creation,hook-succeeded'
    helm.sh/hook-weight: '1'
    revisions: >-
      {"1":{"status":"completed","succeed":1,"desire":1,"uid":"0e83e553-482f-4755-834a-9c0f07d4c5b9","start-time":"2024-05-22T15:52:49+08:00","completion-time":"2024-05-22T16:22:50+08:00"}}
spec:
  parallelism: 1
  completions: 1
  backoffLimit: 6
  selector:
    matchLabels:
      controller-uid: 0e83e553-482f-4755-834a-9c0f07d4c5b9
  template:
    metadata:
      creationTimestamp: null
      labels:
        controller-uid: 0e83e553-482f-4755-834a-9c0f07d4c5b9
        job-name: devops-post-delete
    spec:
      containers:
        - name: post-delete-job
          image: 'kubesphere/kubectl:v1.27.4'
          command:
            - /bin/bash
            - '-c'
            - |
              if kubectl get ns argocd; then
                kubectl delete ns argocd
              fi
              if kubectl get ns kubesphere-devops-system; then
                kubectl delete ns kubesphere-devops-system
              fi
              if kubectl get ns kubesphere-devops-worker; then
                kubectl delete ns kubesphere-devops-worker
              fi
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Never
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  completionMode: NonIndexed
  suspend: false
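
The Job runs a kubectl container as a Helm post-delete hook that removes the argocd, kubesphere-devops-system and kubesphere-devops-worker namespaces. While the Job object still exists (the hook-succeeded delete policy removes it afterwards), its output can be checked (a sketch):

bash
kubectl -n extension-devops logs job/devops-post-delete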

8. Using ctr to manage containerd images

(1) Help

bash
[root@node1 ~]# ctr images --help
NAME:
   ctr images - manage images

USAGE:
   ctr images command [command options] [arguments...]

COMMANDS:
   check                    check existing images to ensure all content is available locally
   export                   export images
   import                   import images
   list, ls                 list images known to containerd
   mount                    mount an image to a target path
   unmount                  unmount the image from the target
   pull                     pull an image from a remote
   push                     push an image to a remote
   delete, del, remove, rm  remove one or more images by reference
   tag                      tag an image
   label                    set and clear labels for an image
   convert                  convert an image

OPTIONS:
   --help, -h  show help

(2) Pull

bash
ctr images pull ghcr.io/dexidp/dex:v2.30.2

(3) List

bash
crictl images list
# or
ctr images list
# or
ctr i ls -q

(4) Export

bash
ctr image export  dev.tar.gz  ghcr.io/dexidp/dex:v2.30.2

(5) Delete

bash
# 1) Query
ctr image list | grep ghcr.io/dexidp/dex:v2.30.2

# 2) Delete
ctr image delete ghcr.io/dexidp/dex:v2.30.2

(6) Import

bash
# 1) Import
ctr image import dev.tar.gz

# 2) Query
ctr image list | grep ghcr.io/dexidp/dex:v2.30.2
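
One caveat: images pulled by the kubelet live in containerd's k8s.io namespace, while ctr defaults to the "default" namespace, so cluster images are only visible when -n k8s.io is passed (a sketch; the image reference is the one used above):

bash
ctr -n k8s.io images ls -q
ctr -n k8s.io images pull ghcr.io/dexidp/dex:v2.30.2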