Deploying a Kubernetes cluster with sealos and managing it

This walkthrough uses five hosts: three as master nodes, one as a worker node, and one as the Kuboard host.

1. Host preparation

1.1 Configure the hostnames

```bash
# hostnamectl set-hostname xxx

k8s-master01
k8s-master02
k8s-master03
k8s-worker01
```

1.2 Configure static IP addresses

| No. | Hostname | Host IP |
| --- | --- | --- |
| 1 | k8s-master01 | 192.168.95.142 |
| 2 | k8s-master02 | 192.168.95.143 |
| 3 | k8s-master03 | 192.168.95.144 |
| 4 | k8s-worker01 | 192.168.95.145 |
```bash
# vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="ec87533a-8151-4aa0-9d0f-1e970affcdc6"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="192.168.95.xxx"
PREFIX="24"
GATEWAY="192.168.95.2"
```
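After editing the file, restart networking so the static address takes effect. A minimal sketch, assuming CentOS 7 with the legacy network-scripts service and the ens33 interface used above:

```bash
# Apply the new static address and confirm it is bound to ens33.
systemctl restart network
ip addr show ens33
```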

1.3 Configure hostname and IP address resolution

The entries below are added by the administrator; sealos also adds hostname-to-IP mappings automatically while it runs.

```bash
# /etc/hosts
192.168.95.142 k8s-master01
192.168.95.143 k8s-master02
192.168.95.144 k8s-master03
192.168.95.145 k8s-worker01
192.168.95.146 nfsserver
```

1.4 Upgrade the kernel

Reference: fixing yum kernel update failures now that CentOS 7 has reached end of life.
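A minimal sketch of one common approach, assuming the ELRepo repository is reachable (these commands are not taken from the referenced article):

```bash
# Assumption: upgrade to the long-term-support kernel from ELRepo on CentOS 7.
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
yum --enablerepo=elrepo-kernel install -y kernel-lt

# Boot the newly installed kernel by default, then reboot.
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
```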

2. Prepare sealos (run on k8s-master01 only)

Install jq

```bash
[root@k8s-master01 ~]# sudo yum install epel-release

[root@k8s-master01 ~]# sudo yum install jq

[root@k8s-master01 ~]# curl --silent "https://api.github.com/repos/labring/sealos/releases" | jq -r '.[].tag_name'
```

Download the Sealos command-line tool

You can list the available versions by running:

```bash
curl --silent "https://api.github.com/repos/labring/sealos/releases" | jq -r '.[].tag_name'
```

Note: when choosing a version, prefer a stable release such as v4.3.0. Versions like v4.3.0-rc1 and v4.3.0-alpha1 are pre-releases; use them with caution.

Set the VERSION environment variable to the latest release tag, or replace VERSION with the specific Sealos version you want to install:

```bash
VERSION=`curl -s https://api.github.com/repos/labring/sealos/releases/latest | grep -oE '"tag_name": "[^"]+"' | head -n1 | cut -d'"' -f4`
```
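Alternatively, pin an explicit release instead of resolving the latest tag (the tag below is only an example; pick one from the version list above):

```bash
# Example only: pin a specific Sealos release tag.
VERSION=v5.0.1
```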

Also, because access to GitHub may be restricted on networks in mainland China, it is recommended to first check the following sites for a currently working GitHub proxy:

https://ghproxy.link/

https://ghproxy.net/

Once you have found a working GitHub proxy, set it as the PROXY_PREFIX environment variable, for example:

```bash
export PROXY_PREFIX=https://ghfast.top
```

Automatic binary download

```bash
curl -sfL ${PROXY_PREFIX}/https://raw.githubusercontent.com/labring/sealos/main/scripts/install.sh | PROXY_PREFIX=${PROXY_PREFIX} sh -s ${VERSION} labring/sealos
```
```bash
# sealos version
SealosVersion:
  buildDate: "2024-10-09T02:18:27Z"
  compiler: gc
  gitCommit: 2b74a1281
  gitVersion: 5.0.1
  goVersion: go1.20.14
  platform: linux/amd64
```

3. Deploy the Kubernetes cluster with sealos

The Kubernetes cluster deployed this way uses containerd as its container runtime by default (a quick check is shown after the node listing below).

--passwd takes the hosts' password; it can be omitted once mutual SSH trust between the hosts is in place (see the sketch after the command below).

```bash
sealos run labring/kubernetes:v1.24.0 labring/calico:v3.22.1 \
  --masters 192.168.95.142,192.168.95.143,192.168.95.144 \
  --nodes 192.168.95.145 \
  --passwd xxxx
```
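A minimal sketch of setting up that mutual trust from k8s-master01, assuming root SSH access to the IPs in the table above; once key-based login works, --passwd can be dropped:

```bash
# Generate a key pair (skip if one already exists) and push it to every node.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for ip in 192.168.95.142 192.168.95.143 192.168.95.144 192.168.95.145; do
  ssh-copy-id root@${ip}
done
```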
```bash
# kubectl get nodes
NAME           STATUS   ROLES           AGE   VERSION
k8s-master01   Ready    control-plane   16h   v1.24.0
k8s-master02   Ready    control-plane   16h   v1.24.0
k8s-master03   Ready    control-plane   16h   v1.24.0
k8s-worker01   Ready    <none>          16h   v1.24.0
```
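To confirm the containerd runtime mentioned at the start of this section, the wide node listing shows the runtime each node reports:

```bash
# The CONTAINER-RUNTIME column should read containerd://<version>.
kubectl get nodes -o wide
```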
```bash
# kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS      AGE
coredns-6d4b75cb6d-fmcmc               1/1     Running   0             20m
coredns-6d4b75cb6d-kdd9d               1/1     Running   0             20m
etcd-k8s-master01                      1/1     Running   0             20m
etcd-k8s-master02                      1/1     Running   0             19m
etcd-k8s-master03                      1/1     Running   0             19m
kube-apiserver-k8s-master01            1/1     Running   0             20m
kube-apiserver-k8s-master02            1/1     Running   1 (19m ago)   19m
kube-apiserver-k8s-master03            1/1     Running   0             18m
kube-controller-manager-k8s-master01   1/1     Running   1 (16m ago)   20m
kube-controller-manager-k8s-master02   1/1     Running   0             18m
kube-controller-manager-k8s-master03   1/1     Running   0             17m
kube-proxy-jl2gx                       1/1     Running   0             20m
kube-proxy-kzn5g                       1/1     Running   0             19m
kube-proxy-pscn2                       1/1     Running   0             20m
kube-proxy-pvfw9                       1/1     Running   0             18m
kube-scheduler-k8s-master01            1/1     Running   1 (16m ago)   20m
kube-scheduler-k8s-master02            1/1     Running   0             19m
kube-scheduler-k8s-master03            1/1     Running   0             19m
kube-sealos-lvscare-k8s-worker01       1/1     Running   0             18m
```
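The listing above covers kube-system only. To confirm that the Calico components from labring/calico:v3.22.1 are also up (their namespace can vary with the installation method), search all namespaces:

```bash
# Calico pods may land in kube-system, calico-system, or tigera-operator.
kubectl get pods -A | grep -i calico
```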

4. Manage the k8s cluster with Kuboard

| No. | Hostname | Host IP |
| --- | --- | --- |
| 1 | nfsserver | 192.168.95.146 |

4.1 Deploy and access Kuboard

```bash
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

yum -y install docker-ce

systemctl enable --now docker

docker run -d \
  --restart=unless-stopped \
  --name=kuboard \
  -p 80:80/tcp \
  -p 10081:10081/tcp \
  -e KUBOARD_ENDPOINT="http://192.168.95.146:80" \
  -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
  -v /root/kuboard-data:/data \
  eipwork/kuboard:v3
```

The username and password are admin and Kuboard123, respectively.
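Before logging in, a quick sanity check on the Kuboard host that the container is up and the UI port answers (a sketch):

```bash
# The container should show as Up, and the HTTP request should return response headers.
docker ps --filter name=kuboard
curl -I http://192.168.95.146:80
```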

4.2 Add the k8s cluster to Kuboard



In the Kuboard web UI, add a new cluster and generate the agent import command (the token in the URL below comes from that page); copy it and run it on k8s-master01:

```bash
[root@k8s-master01 ~]# curl -k 'http://192.168.95.146:80/kuboard-api/cluster/k8s1-sealos/kind/KubernetesCluster/k8s1-sealos/resource/installAgentToKubernetes?token=YpFt04uBrcHdPvuqkjDsjizm0cDQpFTS' > kuboard-agent.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  5613    0  5613    0     0  1203k      0 --:--:-- --:--:-- --:--:-- 1370k
[root@k8s-master01 ~]# kubectl apply -f ./kuboard-agent.yaml
namespace/kuboard created
serviceaccount/kuboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-admin-crb created
serviceaccount/kuboard-viewer created
clusterrolebinding.rbac.authorization.k8s.io/kuboard-viewer-crb created
deployment.apps/kuboard-agent-hpbezk created
deployment.apps/kuboard-agent-hpbezk-2 created
[root@k8s-master01 ~]# cat kuboard-agent.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: kuboard

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kuboard-admin
  namespace: kuboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kuboard-admin-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kuboard-admin
  namespace: kuboard

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kuboard-viewer
  namespace: kuboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kuboard-viewer-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: kuboard-viewer
  namespace: kuboard

---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    k8s.kuboard.cn/ingress: "false"
    k8s.kuboard.cn/service: none
    k8s.kuboard.cn/workload: kuboard-agent-hpbezk
  labels:
    k8s.kuboard.cn/name: kuboard-agent-hpbezk
  name: kuboard-agent-hpbezk
  namespace: kuboard
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s.kuboard.cn/name: kuboard-agent-hpbezk
  template:
    metadata:
      labels:
        k8s.kuboard.cn/name: kuboard-agent-hpbezk
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
              weight: 100
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    k8s.kuboard.cn/name: kuboard-v3
                namespaces:
                  - kuboard
                topologyKey: kubernetes.io/hostname
              weight: 100
      serviceAccountName: kuboard-admin
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
      containers:
        - env:
            - name: KUBOARD_ENDPOINT
              value: "http://192.168.95.146:80"
            - name: KUBOARD_AGENT_HOST
              value: "192.168.95.146"
            - name: KUBOARD_AGENT_PORT
              value: "10081"
            - name: KUBOARD_AGENT_REMOTE_PORT
              value: "35001"
            - name: KUBOARD_AGENT_PROTOCOL
              value: "tcp"
            - name: KUBOARD_AGENT_PROXY
              value: ""
            - name: KUBOARD_K8S_CLUSTER_NAME
              value: "k8s1-sealos"
            - name: KUBOARD_AGENT_KEY
              value: "32b7d6572c6255211b4eec9009e4a816"
            - name: KUBERNETES_TOKEN_NAME
              value: "kuboard-admin"
            - name: KUBOARD_ANONYMOUS_TOKEN
              value: "YpFt04uBrcHdPvuqkjDsjizm0cDQpFTS"
            - name: POD_HOST_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          image: "eipwork/kuboard-agent:v3"
          imagePullPolicy: Always
          name: kuboard-agent
      restartPolicy: Always

---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    k8s.kuboard.cn/ingress: "false"
    k8s.kuboard.cn/service: none
    k8s.kuboard.cn/workload: kuboard-agent-hpbezk-2
  labels:
    k8s.kuboard.cn/name: kuboard-agent-hpbezk-2
  name: kuboard-agent-hpbezk-2
  namespace: kuboard
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s.kuboard.cn/name: kuboard-agent-hpbezk-2
  template:
    metadata:
      labels:
        k8s.kuboard.cn/name: kuboard-agent-hpbezk-2
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - preference:
                matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: Exists
              weight: 100
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    k8s.kuboard.cn/name: kuboard-v3
                namespaces:
                  - kuboard
                topologyKey: kubernetes.io/hostname
              weight: 100
      serviceAccountName: kuboard-viewer
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
      containers:
        - env:
            - name: KUBOARD_ENDPOINT
              value: "http://192.168.95.146:80"
            - name: KUBOARD_AGENT_HOST
              value: "192.168.95.146"
            - name: KUBOARD_AGENT_PORT
              value: "10081"
            - name: KUBOARD_AGENT_REMOTE_PORT
              value: "35001"
            - name: KUBOARD_AGENT_PROTOCOL
              value: "tcp"
            - name: KUBOARD_AGENT_PROXY
              value: ""
            - name: KUBOARD_K8S_CLUSTER_NAME
              value: "k8s1-sealos"
            - name: KUBOARD_AGENT_KEY
              value: "32b7d6572c6255211b4eec9009e4a816"
            - name: KUBERNETES_TOKEN_NAME
              value: "kuboard-viewer"
            - name: KUBOARD_ANONYMOUS_TOKEN
              value: "YpFt04uBrcHdPvuqkjDsjizm0cDQpFTS"
            - name: POD_HOST_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          image: "eipwork/kuboard-agent:v3"
          imagePullPolicy: Always
          name: kuboard-agent
      restartPolicy: Always
[root@k8s-master01 ~]# kubectl get pods -n kuboard
NAME                                      READY   STATUS    RESTARTS   AGE
kuboard-agent-hpbezk-2-74c7c76988-n84gh   1/1     Running   0          73s
kuboard-agent-hpbezk-6959ddfb74-65q29     1/1     Running   0          73s
```

Back in the Kuboard UI, select the first option.
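If the cluster does not appear in Kuboard after this, the agent logs are the first place to look; a sketch using the deployment names generated above:

```bash
# Inspect the import agents created by kuboard-agent.yaml.
kubectl logs -n kuboard deployment/kuboard-agent-hpbezk
kubectl logs -n kuboard deployment/kuboard-agent-hpbezk-2
```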
