Deploying a Spring Cloud Alibaba project on k8s

This article is my own write-up; if you repost it, please credit the original author and link to the original.

This article assumes some background knowledge and a certain ability to learn and troubleshoot on your own; reading it cold may be heavy going, so it's best treated as a next step once you have the basics.

The prerequisites are that you know Docker and have at least two virtual machines. If you're not there yet, start with my previous article:

centos8安装docker运行java文件_centos8 docker安装java8-CSDN博客

I wrote this after the whole system was already built, so some steps and the problems I debugged along the way may be half-forgotten; apologies. If something is off, ask in the comments and I'll answer and fold it back into the article.

The technologies involved are:

k8s, flannel, nfs, mysql, nginx

nacos, sentinel, and seata belong to Spring Cloud Alibaba; if you're not deploying that kind of project, you can skip those three parts.

I'm using Spring Boot 2.7.18, Spring Cloud 2021.0.9, Spring Cloud Alibaba 2021.0.5.0, Nacos 2.2.3, Sentinel 1.8.6, and Seata 1.6.1. If your versions differ, check the compatibility matrix on the Spring Cloud Alibaba official site and adjust the versions in the deployment steps below to match; don't copy blindly, since mismatched versions can keep the project from starting.

1. Install k8s

My two VMs are configured as follows:

| Server | IP | Specs |
| --- | --- | --- |
| k8smaster | 192.168.233.20 | 2 GB RAM, 20 GB disk |
| k8snode1 | 192.168.233.21 | 8 GB RAM, 20 GB disk |

node1 has more memory because it started at 2 GB and froze from memory exhaustion while deploying Nacos; I raised it to 4 GB, then deployed the 7 services of my Spring Cloud Alibaba project and it froze again, so I went to 8 GB. With everything deployed, node1 actually uses about 4.7 GB, so size yours accordingly.

You can rename a server with:

```bash
hostnamectl set-hostname xxxx
```

1.1 Disable the firewall, SELinux, and swap

Run these on both the master and the node servers.

Disable the firewall. k8s uses iptables for access control by default, and a running firewalld can block the cluster's nodes from reaching each other.

```bash
systemctl stop firewalld && systemctl disable firewalld
```

Disable SELinux

SELinux is a security subsystem that enforces mandatory access control. Kubernetes components need fairly broad access to host resources, and writing a proper SELinux policy for them is involved, so setting SELinux to permissive (or disabling it outright) is the usual simplification to keep it from interfering with the cluster.

```bash
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```

Disable swap

Swap is Linux's virtual memory on disk. k8s considers swap bad for its performance and resource accounting, so on startup it checks that swap is off and refuses to start otherwise. There are startup flags that skip the check (a sketch follows the next block); here I take the standard route and turn swap off.

```bash
sed -ri 's/.*swap.*/#&/' /etc/fstab   # comment out swap in fstab (takes effect on reboot)
swapoff -a                            # also turn swap off for the current session
```
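If you'd rather keep swap on, there are opt-outs. A minimal sketch, assuming the RPM-based install used below (I haven't used this path myself):

```bash
# let kubelet start even with swap enabled (RPM installs read this file)
echo 'KUBELET_EXTRA_ARGS=--fail-swap-on=false' > /etc/sysconfig/kubelet
# and tell kubeadm to skip the swap preflight check during init
kubeadm init --ignore-preflight-errors=Swap ...
```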

1.2 Install Docker

Run this on both the master and the node servers.

centos8安装docker运行java文件_centos8 docker安装java8-CSDN博客

1.3 Install kubelet, kubeadm, and kubectl

Run this on both the master and the node servers.

kubeadm: the tool that bootstraps the kube cluster

kubelet: the node agent that talks to Docker to create the actual containers; control-plane components such as kube-apiserver, kube-proxy, and kube-scheduler run as containers that kubelet brings up automatically

kubectl: the command-line client for the cluster

I'm using version 1.18.0; change it if you need another.

```bash
yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
```
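After installing, enable kubelet so it starts on boot; step 1.4 assumes it is already running:

```bash
systemctl enable --now kubelet
```

Until kubeadm init runs, kubelet will restart in a loop because it has no configuration yet; that's expected.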

1.4 Create the master node

Run this on the master node only.

Before running it, make sure the kubelet service is up; you can check its status with:

```bash
systemctl status kubelet
```

```text
 kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2024-09-09 17:39:32 CST; 1 day 21h ago
     Docs: https://kubernetes.io/docs/
 Main PID: 798 (kubelet)
    Tasks: 19 (limit: 11363)
   Memory: 77.5M
   CGroup: /system.slice/kubelet.service
           └─798 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver>

Sep 11 15:31:10 k8smaster kubelet[798]: E0911 15:31:10.396187     798 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgro>
Sep 11 15:31:20 k8smaster kubelet[798]: E0911 15:31:20.420096     798 summary_sys_containers.go:47] Failed to get system container stats for "/system.slice/docker.service": failed to get cgro>
```

Use the following to check whether the base kube containers have been created in Docker:

```bash
docker ps
```

I copied two entries out at random here; if you see these two, or something like them, you should be fine:

```text
CONTAINER ID   IMAGE                                               COMMAND                  CREATED        STATUS        PORTS                                       NAMES
d8cfa18ec556   registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 46 hours ago   Up 46 hours                                               k8s_POD_kube-apiserver-k8smaster_kube-system_24d2e38ac3ee7dbd192b59ae301f2958_1
59ca2ec352a8   43940c34f24f                                        "/usr/local/bin/kube..."   46 hours ago   Up 46 hours                                               k8s_kube-proxy_kube-proxy-jvp78_kube-system_2c89a685-e9d1-490a-b945-13a38eca665d_1
```

apiserver-advertise-address: the static IP of the current server. With Docker and kubelet installed, kubelet automatically runs kube-apiserver in Docker, so this is the address of the server itself.

The last two flags are the service and pod IP ranges. It's best not to change them: these defaults match flannel's defaults, and if you change them here you'll have to change flannel's config to match.

```bash
kubeadm init \
--v=5 \
--apiserver-advertise-address=192.168.233.20 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.18.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
```

If it prints the following, the master node is up.

If it failed, see 1.6.

```text
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.233.20:6443 --token 1aczra.6ttwnx7vkcvfvr26 \
    --discovery-token-ca-cert-hash sha256:85feeb8ca8b4a9161446732118ca87f5f995bcdcf3f13d4711cf9aff4d50360e
```

Then, as the output instructs, run these three lines on the master; they give kubectl on this machine permission to manage the cluster:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

1.5 Join the node to the cluster

Run this on the node only.

When the master finished initializing, it printed this command at the end; copy it onto the node and run it as root to join the cluster as a worker:

```bash
kubeadm join 192.168.233.20:6443 --token 1aczra.6ttwnx7vkcvfvr26 \
    --discovery-token-ca-cert-hash sha256:85feeb8ca8b4a9161446732118ca87f5f995bcdcf3f13d4711cf9aff4d50360e
```

If you didn't write it down, run the following on the master to print a fresh join command:

```bash
kubeadm token create --print-join-command
```

1.6 If creating the master node fails

Run this on the master only.

First check whether the kubelet service started successfully and whether the corresponding Docker containers were created (1.4 shows how), then search the web for the exact error message.

After fixing the kubelet problem, before re-running kubeadm init you must reset kubeadm and delete the leftover files from the previous init.

Reset kubeadm:

```bash
kubeadm reset
```

Remove the leftover files:

```bash
rm -rf /etc/kubernetes/*
rm -rf $HOME/.kube/*
```

1.7 Deploy flannel on k8s

Run this on the master only.

At this point the command below shows our master node still in NotReady state; we need to deploy flannel, the network plugin that provides the pod network and hands out IPs for the cluster's services and pods.

```bash
kubectl get node
```

Download the flannel manifest for k8s and apply it with kubectl create:

```bash
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml
```

The flannel build I downloaded runs in its own namespace. Watch its pods with the command below; once they're Running, kubectl get node should show the master as Ready.

```bash
kubectl get pods -n kube-flannel
```

2. Deploy NFS

NFS is a file-sharing service. When running containers with plain Docker, we often map directories from inside the container onto the host to make editing config files and reading logs convenient. With k8s there may be many node servers, and doing that work on each one is painful. NFS solves this: map the container directories from every node onto one shared file server, and all the config editing and log reading happens in a single place instead of on each node server.

2.1 Set up the NFS service

Setting up NFS is very simple. Most guides install it directly on the host; I run it in Docker. If you'd rather not use Docker, a host install is only two or three steps and easy to search for.

The NFS service can live on its own server; to keep things simple I put it on the master.

Pull the NFS image:

```bash
docker pull itsthenetwork/nfs-server-alpine
```

Run the NFS container. This is the invocation as of September 2024; in older versions of the image, the environment variable for the shared directory was SHARE_DIR_NAME rather than NFS_EXPORTED_DIR. If docker ps shows the container restart-looping, check docker logs nfs-server for the startup failure; a missing environment variable is printed by name, so you can infer what it's for and add it to the run command.

```bash
docker run --privileged -d --name nfs-server --restart always -v /data/nfs:/nfs/shared -p 2049:2049 -e NFS_EXPORTED_DIR="/nfs/shared" itsthenetwork/nfs-server-alpine
```
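To sanity-check the export from another machine, a rough sketch; my assumptions are that nfs-utils is available via yum and that this image serves NFSv4, where the exported directory is addressed as the root path /:

```bash
yum install -y nfs-utils
mkdir -p /mnt/nfs-test
mount -t nfs4 192.168.233.20:/ /mnt/nfs-test
ls /mnt/nfs-test
umount /mnt/nfs-test
```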

2.2 Set up nfs-client-provisioner on k8s

Run this on the master only.

When k8s creates StatefulSet-type workloads, they need PVCs for storage. nfs-client-provisioner automatically provisions volumes on the NFS server to back those PVCs, so StatefulSets get storage without manual PV management.

```bash
kubectl create -f nfs-deployment.yaml
```

nfs-deployment.yaml

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs # changeable; referenced again below
            - name: NFS_SERVER
              value: 192.168.233.20 # IP of the server running NFS
            - name: NFS_PATH
              value: /nfs-client-provisioner # root directory for the PVC data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.233.20 # IP of the server running NFS
            path: /nfs-client-provisioner # root directory for the PVC data
```
```bash
kubectl create -f nfs-class.yaml
```

nfs-class.yaml

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # the PROVISIONER_NAME from the deployment
parameters:
  archiveOnDelete: "false"
```

```bash
kubectl create -f nfs-rbac.yaml
```

nfs-rbac.yaml

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
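With all three objects created, it's worth an end-to-end test of the provisioner. A minimal sketch — test-claim is a throwaway name of mine; the storage class name comes from nfs-class.yaml above:

```bash
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Mi
EOF
kubectl get pvc test-claim   # should reach Bound, with a matching directory under /nfs-client-provisioner
kubectl delete pvc test-claim
```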

3. Deploy MySQL

3.1 Deploy MySQL 8 on k8s

Run this on the master only.

```bash
kubectl create -f mysql-pro.yml
```

mysql-pro.yml

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql8-cnf
data:
  my.cnf: |
    [mysqld]
    # MySQL configuration file
    # Remove leading # and set to the amount of RAM for the most important data
    # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
    # innodb_buffer_pool_size = 128M
    #
    # Remove leading # to turn on a very important data integrity option: logging
    # changes to the binary log between backups.
    # log_bin
    #
    # Remove leading # to set options mainly useful for reporting servers.
    # The server defaults are faster for transactions and fast SELECTs.
    # Adjust sizes as needed, experiment to find the optimal values.
    # join_buffer_size = 128M
    # sort_buffer_size = 2M
    # read_rnd_buffer_size = 2M
    
    host-cache-size=0
    skip-name-resolve
    datadir=/var/lib/mysql
    socket=/var/run/mysqld/mysqld.sock
    secure-file-priv=/var/lib/mysql-files
    user=mysql
    mysql_native_password=ON
    
    pid-file=/var/run/mysqld/mysqld.pid
    [client]
    socket=/var/run/mysqld/mysqld.sock
    
    !includedir /etc/mysql/conf.d/
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql-pro
  labels:
    name: mysql-pro
spec:
  replicas: 1
  selector:
    name: mysql-pro
  template:
    metadata:
      labels:
        name: mysql-pro
    spec:
      containers:
      - name: mysql-pro
        image: mysql:8
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql # map the MySQL data directory for persistence
        - name: mysql-mycnf
          mountPath: /etc/my.cnf # map the config file
          subPath: "my.cnf"
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456" # root的密码
# 如果只创建1个数据库1个mysql账号,可以配置这几个参数,比较省事,如果按我的步骤做的话,删掉
#        - name: MYSQL_DATABASE
#          value: "demo2"
#        - name: MYSQL_USER
#          value: "demo2"
#        - name: MYSQL_PASSWORD
#          value: "123456"
      volumes:
      - name: mysql-data # must match the name under volumeMounts
        nfs:
          server: 192.168.233.20 # MySQL data is persisted through the NFS server; remember to create the directory first
          path: /data/mysql-pro
      - name: mysql-mycnf
        configMap:
          name: mysql8-cnf # the config file is mounted as a ConfigMap -- the one defined at the top
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-pro
  labels:
    name: mysql-pro
spec:
  ports:
  - port: 3306
    targetPort: 3306
    nodePort: 30004 # only with type: NodePort; range 30000-32767, must not collide with other Services
  selector:
    name: mysql-pro
  type: NodePort # default is ClusterIP, unreachable from outside the cluster; NodePort exposes an external port
```
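Once the pod is Running, a quick connectivity check from outside the cluster, assuming a mysql client is installed on your machine (30004 and 123456 come from the yml above):

```bash
mysql -h 192.168.233.20 -P 30004 -uroot -p123456 -e "SELECT VERSION();"
```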

3.2 Create the database Nacos will use

Run this on the master only.

Nacos can store its data on the filesystem, but I use database storage. With that as the example, you can create whatever MySQL accounts, databases, and grants you need.

Once MySQL is up, use this to find the name of the MySQL pod:

```bash
kubectl get pods
```

The output:

```text
NAME                                             READY   STATUS    RESTARTS   AGE
mysql-pro-twkc9                                  1/1     Running   2          2d4h
nfs-client-provisioner-5864f57757-vc9g4          1/1     Running   2          2d4h
```

Exec into the MySQL container:

```bash
kubectl exec -it mysql-pro-twkc9 /bin/bash
```

Once inside, the prompt changes to bash-5.1#.

Type mysql -uroot -p, press Enter, then enter the root password configured in the yml in 3.1 to get a MySQL prompt.

Then create the database, create the database user, and grant it privileges on the database; these are basic MySQL operations I won't belabor (a sketch follows below). One caveat: the mysql:8 image defaults to caching_sha2_password for new users, which keeps Navicat from connecting, so create the user with mysql_native_password instead. After that Navicat connects fine and you can do the rest from the tool you're comfortable with.
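A minimal sketch of those statements, run from the mysql prompt inside the container; the database and user names match the nacos-cm ConfigMap in 4.2, and the password is a placeholder you should change:

```bash
mysql -uroot -p <<'SQL'
CREATE DATABASE nacos DEFAULT CHARACTER SET utf8mb4;
-- mysql_native_password so Navicat and older drivers can authenticate
CREATE USER 'nacosuser'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
GRANT ALL PRIVILEGES ON nacos.* TO 'nacosuser'@'%';
FLUSH PRIVILEGES;
SQL
```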

Create the tables the nacos database needs:

```sql
/* https://github.com/alibaba/nacos/blob/master/distribution/conf/nacos-mysql.sql */

CREATE TABLE IF NOT EXISTS `config_info` (
    `id`           bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
    `data_id`      varchar(255) NOT NULL COMMENT 'data_id',
    `group_id`     varchar(255)          DEFAULT NULL,
    `content`      longtext     NOT NULL COMMENT 'content',
    `md5`          varchar(32)           DEFAULT NULL COMMENT 'md5',
    `gmt_create`   datetime     NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
    `gmt_modified` datetime     NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
    `src_user`     text COMMENT 'source user',
    `src_ip`       varchar(50)           DEFAULT NULL COMMENT 'source ip',
    `app_name`     varchar(128)          DEFAULT NULL,
    `tenant_id`    varchar(128)          DEFAULT '' COMMENT '租户字段',
    `c_desc`       varchar(256)          DEFAULT NULL,
    `c_use`        varchar(64)           DEFAULT NULL,
    `effect`       varchar(64)           DEFAULT NULL,
    `type`         varchar(64)           DEFAULT NULL,
    `c_schema`     text,
    PRIMARY KEY (`id`),
    UNIQUE KEY `uk_configinfo_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info';

CREATE TABLE IF NOT EXISTS `config_info_aggr` (
    `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
    `data_id` varchar(255) NOT NULL COMMENT 'data_id',
    `group_id` varchar(255) NOT NULL COMMENT 'group_id',
    `datum_id` varchar(255) NOT NULL COMMENT 'datum_id',
    `content` longtext NOT NULL COMMENT '内容',
    `gmt_modified` datetime NOT NULL COMMENT '修改时间',
    `app_name` varchar(128) DEFAULT NULL,
    `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
    PRIMARY KEY (`id`),
    UNIQUE KEY `uk_configinfoaggr_datagrouptenantdatum` (`data_id`,`group_id`,`tenant_id`,`datum_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='增加租户字段';

CREATE TABLE IF NOT EXISTS `config_info_beta` (
    `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
    `data_id` varchar(255) NOT NULL COMMENT 'data_id',
    `group_id` varchar(128) NOT NULL COMMENT 'group_id',
    `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
    `content` longtext NOT NULL COMMENT 'content',
    `beta_ips` varchar(1024) DEFAULT NULL COMMENT 'betaIps',
    `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
    `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
    `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
    `src_user` text COMMENT 'source user',
    `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
    `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
    PRIMARY KEY (`id`),
    UNIQUE KEY `uk_configinfobeta_datagrouptenant` (`data_id`,`group_id`,`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_beta';

CREATE TABLE IF NOT EXISTS `config_info_tag` (
   `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
   `data_id` varchar(255) NOT NULL COMMENT 'data_id',
   `group_id` varchar(128) NOT NULL COMMENT 'group_id',
   `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
   `tag_id` varchar(128) NOT NULL COMMENT 'tag_id',
   `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
   `content` longtext NOT NULL COMMENT 'content',
   `md5` varchar(32) DEFAULT NULL COMMENT 'md5',
   `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
   `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
   `src_user` text COMMENT 'source user',
   `src_ip` varchar(50) DEFAULT NULL COMMENT 'source ip',
   PRIMARY KEY (`id`),
   UNIQUE KEY `uk_configinfotag_datagrouptenanttag` (`data_id`,`group_id`,`tenant_id`,`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_info_tag';

CREATE TABLE IF NOT EXISTS `config_tags_relation` (
    `id` bigint(20) NOT NULL COMMENT 'id',
    `tag_name` varchar(128) NOT NULL COMMENT 'tag_name',
    `tag_type` varchar(64) DEFAULT NULL COMMENT 'tag_type',
    `data_id` varchar(255) NOT NULL COMMENT 'data_id',
    `group_id` varchar(128) NOT NULL COMMENT 'group_id',
    `tenant_id` varchar(128) DEFAULT '' COMMENT 'tenant_id',
    `nid` bigint(20) NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (`nid`),
    UNIQUE KEY `uk_configtagrelation_configidtag` (`id`,`tag_name`,`tag_type`),
    KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='config_tag_relation';


CREATE TABLE IF NOT EXISTS `group_capacity` (
    `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID',
    `group_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Group ID,空字符表示整个集群',
    `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
    `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
    `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
    `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数,,0表示使用默认值',
    `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
    `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
    `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
    `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
    PRIMARY KEY (`id`),
    UNIQUE KEY `uk_group_id` (`group_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='集群、各Group容量信息表';


CREATE TABLE IF NOT EXISTS `his_config_info` (
    `id` bigint(64) unsigned NOT NULL,
    `nid` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
    `data_id` varchar(255) NOT NULL,
    `group_id` varchar(128) NOT NULL,
    `app_name` varchar(128) DEFAULT NULL COMMENT 'app_name',
    `content` longtext NOT NULL,
    `md5` varchar(32) DEFAULT NULL,
    `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
    `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP,
    `src_user` text,
    `src_ip` varchar(50) DEFAULT NULL,
    `op_type` char(10) DEFAULT NULL,
    `tenant_id` varchar(128) DEFAULT '' COMMENT '租户字段',
    PRIMARY KEY (`nid`),
    KEY `idx_gmt_create` (`gmt_create`),
    KEY `idx_gmt_modified` (`gmt_modified`),
    KEY `idx_did` (`data_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='多租户改造';


CREATE TABLE IF NOT EXISTS `tenant_capacity` (
    `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT COMMENT '主键ID',
    `tenant_id` varchar(128) NOT NULL DEFAULT '' COMMENT 'Tenant ID',
    `quota` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '配额,0表示使用默认值',
    `usage` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '使用量',
    `max_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个配置大小上限,单位为字节,0表示使用默认值',
    `max_aggr_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '聚合子配置最大个数',
    `max_aggr_size` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '单个聚合数据的子配置大小上限,单位为字节,0表示使用默认值',
    `max_history_count` int(10) unsigned NOT NULL DEFAULT '0' COMMENT '最大变更历史数量',
    `gmt_create` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间',
    `gmt_modified` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT '修改时间',
    PRIMARY KEY (`id`),
    UNIQUE KEY `uk_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='租户容量信息表';


CREATE TABLE IF NOT EXISTS `tenant_info` (
    `id` bigint(20) NOT NULL AUTO_INCREMENT COMMENT 'id',
    `kp` varchar(128) NOT NULL COMMENT 'kp',
    `tenant_id` varchar(128) default '' COMMENT 'tenant_id',
    `tenant_name` varchar(128) default '' COMMENT 'tenant_name',
    `tenant_desc` varchar(256) DEFAULT NULL COMMENT 'tenant_desc',
    `create_source` varchar(32) DEFAULT NULL COMMENT 'create_source',
    `gmt_create` bigint(20) NOT NULL COMMENT '创建时间',
    `gmt_modified` bigint(20) NOT NULL COMMENT '修改时间',
    PRIMARY KEY (`id`),
    UNIQUE KEY `uk_tenant_info_kptenantid` (`kp`,`tenant_id`),
    KEY `idx_tenant_id` (`tenant_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin COMMENT='tenant_info';

CREATE TABLE IF NOT EXISTS `users` (
    `username` varchar(50) NOT NULL PRIMARY KEY,
    `password` varchar(500) NOT NULL,
    `enabled` boolean NOT NULL
);

CREATE TABLE IF NOT EXISTS `roles` (
    `username` varchar(50) NOT NULL,
    `role` varchar(50) NOT NULL,
    UNIQUE INDEX `idx_user_role` (`username` ASC, `role` ASC) USING BTREE
);

CREATE TABLE IF NOT EXISTS `permissions` (
    `role` varchar(50) NOT NULL,
    `resource` varchar(255) NOT NULL,
    `action` varchar(8) NOT NULL,
    UNIQUE INDEX `uk_role_permission` (`role`,`resource`,`action`) USING BTREE
);
```

4. Deploy Nacos

The official docs are here: Kubernetes Nacos | Nacos 官网

4.1 Download Nacos

Run this on the master only.

Clone the nacos-k8s project; the yml files needed for deployment are under ./deploy. I paste the yml contents below, so cloning is optional.

```bash
git clone https://github.com/nacos-group/nacos-k8s.git
```

Following the official docs, I chose the NFS + MySQL deployment mode; both were set up in the earlier sections.

4.2 Deploy Nacos on k8s

Run this on the master only.

```bash
kubectl create -f nacos-pvc-nfs.yaml
```

nacos-pvc-nfs.yaml

```yaml
# Please read the wiki article first:
# https://github.com/nacos-group/nacos-k8s/wiki/%E4%BD%BF%E7%94%A8peerfinder%E6%89%A9%E5%AE%B9%E6%8F%92%E4%BB%B6
---
apiVersion: v1 # the Service section needs no changes
kind: Service
metadata:
  name: nacos-headless
  labels:
    app: nacos
spec:
  publishNotReadyAddresses: true 
  ports:
    - port: 8848
      name: server
      targetPort: 8848
    - port: 9848
      name: client-rpc
      targetPort: 9848
    - port: 9849
      name: raft-rpc
      targetPort: 9849
    ## election port kept for 1.4.x compatibility
    - port: 7848
      name: old-raft-rpc
      targetPort: 7848
  clusterIP: None
  selector:
    app: nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
data:
  mysql.host: "mysql-pro" # name of the mysql Service we deployed above
  mysql.db.name: "nacos" # database created in 3.2, with its user and password below
  mysql.port: "3306" # the mysql Service port
  mysql.user: "nacosuser"
  mysql.password: "123456"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
spec:
  podManagementPolicy: Parallel
  serviceName: nacos-headless
  replicas: 3
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos
              topologyKey: "kubernetes.io/hostname"
      serviceAccountName: nfs-client-provisioner
      initContainers:
        - name: peer-finder-plugin-install
          image: nacos/nacos-peer-finder-plugin:1.1
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /home/nacos/plugins/peer-finder
              name: data
              subPath: peer-finder
      containers:
        - name: nacos
          imagePullPolicy: IfNotPresent
          image: nacos/nacos-server:2.2.3
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8848
              name: client-port
            - containerPort: 9848
              name: client-rpc
            - containerPort: 9849
              name: raft-rpc
            - containerPort: 7848
              name: old-raft-rpc
          env:
            - name: NACOS_REPLICAS
              value: "3"
            - name: SERVICE_NAME
              value: "nacos-headless"
            - name: DOMAIN_NAME
              value: "cluster.local"
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: MYSQL_SERVICE_HOST
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.host
            - name: MYSQL_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.db.name
            - name: MYSQL_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.port
            - name: MYSQL_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.user
            - name: MYSQL_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.password
            - name: SPRING_DATASOURCE_PLATFORM
              value: "mysql"
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: NACOS_APPLICATION_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
          volumeMounts:
            - name: data
              mountPath: /home/nacos/plugins/peer-finder
              subPath: peer-finder
            - name: data
              mountPath: /home/nacos/data
              subPath: data
            - name: data
              mountPath: /home/nacos/logs
              subPath: logs
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          volume.beta.kubernetes.io/storage-class: "managed-nfs-storage" # name of the StorageClass created in 2.2; nfs-client-provisioner provisions the PVC automatically, and the three directories mounted above are created automatically
      spec:
        accessModes: [ "ReadWriteMany" ]
        resources:
          requests:
            storage: 20Gi
  selector:
    matchLabels:
      app: nacos
```

Even though replicas is set to 3, I have only one worker node, and Nacos's defaults request 0.5 CPU and 2Gi of memory per pod; my VM doesn't have resources for three (and the pod anti-affinity in the yaml allows only one nacos pod per node anyway), so only one nacos pod is actually created and the others stay Pending.
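You can confirm this with the first command below; on a single-node cluster you may prefer to scale the StatefulSet down rather than leave pods Pending (my suggestion, not part of the original layout):

```bash
kubectl get pods -l app=nacos
kubectl scale statefulset nacos --replicas=1   # optional, for single-node setups
```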

4.3 Proxy the Nacos console through nginx

Run this on the master only.

I'll create nginx with a Deployment. First make the directories it needs under the NFS shared directory; mine is /data/nfs, so cd into yours first.

```bash
mkdir nginx
mkdir nginx/conf
mkdir nginx/html
```

Prepare the nginx config file nginx.conf (below) in nginx/conf. You also need mime.types, which ships with any nginx download; copy it into nginx/conf yourself -- this article is long enough without pasting it. One way to grab it is shown below.
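If you don't have an nginx download handy, a quick way to pull mime.types out of the very image used below:

```bash
docker run --rm nginx:1.20.2 cat /etc/nginx/mime.types > nginx/conf/mime.types
```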

nginx.conf

```nginx
worker_processes  1;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;

    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

         location /nacos {
            proxy_pass   http://nacos-headless:8848; # name of the nacos Service
        }

        location / {
            root   html;
            index  index.html index.htm; # reserved for a front-end app; the container's html dir maps to nginx/html on NFS, so a front-end build can go straight into that folder
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

    }


}
```

With those in place, deploy nginx on k8s: a Deployment for the pods, and a NodePort Service to expose it externally.

```bash
kubectl create -f kube-nginx.yaml
```

kube-nginx.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.20.2 # nginx image, version 1.20.2
          ports:
            - containerPort: 80 # the container listens on port 80
          volumeMounts:
            - name: web-nginx-config
              mountPath: /etc/nginx
            - name: web-nginx-front
              mountPath: /usr/share/nginx/html
      volumes:
        - name: web-nginx-config
          nfs: 
            server: 192.168.233.20
            path: /nginx/conf
        - name: web-nginx-front
          nfs: 
            server: 192.168.233.20
            path: /nginx/html

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80 # virtual port exposed by the Service, mapped to the container's real port
      nodePort: 30001 # port the cluster opens to the outside
      protocol: TCP
      name: nginx-in # port name, for readability
      targetPort: 80 # port the container exposes; if omitted, targetPort defaults to port
  type: NodePort # expose externally via NodePort
```

Once nginx is deployed, the Nacos console is reachable from outside via either node's IP and the NodePort:

```text
http://192.168.233.20:30001/nacos/index.html
http://192.168.233.21:30001/nacos/index.html
```

5. Deploy Seata

5.1 Create the Seata database

Run this on the master only.

Using the method from 3.2, create the seata database and a seata account and grant it privileges (a sketch follows); remember to change the database name, user, and password in the 5.2 ConfigMap to match.
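A sketch of those statements, mirroring 3.2; demo2/123456 match what the 5.2 ConfigMap below happens to use, but substitute your own:

```bash
mysql -uroot -p <<'SQL'
CREATE DATABASE seata DEFAULT CHARACTER SET utf8mb4;
CREATE USER 'demo2'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
GRANT ALL PRIVILEGES ON seata.* TO 'demo2'@'%';
FLUSH PRIVILEGES;
SQL
```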

Once that's done, use Navicat to run SQL: against the seata database, execute the following.

```sql
-- -------------------------------- The script used when storeMode is 'db' --------------------------------
-- the table to store GlobalSession data
CREATE TABLE IF NOT EXISTS `global_table`
(
    `xid`                       VARCHAR(128) NOT NULL,
    `transaction_id`            BIGINT,
    `status`                    TINYINT      NOT NULL,
    `application_id`            VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name`          VARCHAR(128),
    `timeout`                   INT,
    `begin_time`                BIGINT,
    `application_data`          VARCHAR(2000),
    `gmt_create`                DATETIME,
    `gmt_modified`              DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_status_gmt_modified` (`status` , `gmt_modified`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table`
(
    `branch_id`         BIGINT       NOT NULL,
    `xid`               VARCHAR(128) NOT NULL,
    `transaction_id`    BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id`       VARCHAR(256),
    `branch_type`       VARCHAR(8),
    `status`            TINYINT,
    `client_id`         VARCHAR(64),
    `application_data`  VARCHAR(2000),
    `gmt_create`        DATETIME(6),
    `gmt_modified`      DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table`
(
    `row_key`        VARCHAR(128) NOT NULL,
    `xid`            VARCHAR(128),
    `transaction_id` BIGINT,
    `branch_id`      BIGINT       NOT NULL,
    `resource_id`    VARCHAR(256),
    `table_name`     VARCHAR(32),
    `pk`             VARCHAR(36),
    `status`         TINYINT      NOT NULL DEFAULT '0' COMMENT '0:locked ,1:rollbacking',
    `gmt_create`     DATETIME,
    `gmt_modified`   DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_status` (`status`),
    KEY `idx_branch_id` (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

CREATE TABLE IF NOT EXISTS `distributed_lock`
(
    `lock_key`       CHAR(20) NOT NULL,
    `lock_value`     VARCHAR(20) NOT NULL,
    `expire`         BIGINT,
    primary key (`lock_key`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('AsyncCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryRollbacking', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('TxTimeoutCheck', ' ', 0);
```

And in the database of each Spring Cloud service that participates in Seata transactions, execute:

```sql
CREATE TABLE `undo_log` (
  `branch_id` bigint NOT NULL COMMENT 'branch transaction id',
  `xid` varchar(128) NOT NULL COMMENT 'global transaction id',
  `context` varchar(128) NOT NULL COMMENT 'undo_log context,such as serialization',
  `rollback_info` longblob NOT NULL COMMENT 'rollback info',
  `log_status` int NOT NULL COMMENT '0:normal status,1:defense status',
  `log_created` datetime(6) NOT NULL COMMENT 'create datetime',
  `log_modified` datetime(6) NOT NULL COMMENT 'modify datetime',
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`),
  KEY `ix_log_created` (`log_created`)
) ENGINE=InnoDB COMMENT='AT transaction mode undo table';
```

5.2 Deploy Seata on k8s

Run this on the master only.

I deploy Seata in AT mode, with Nacos as both registry and config center, and MySQL as the store.

```bash
kubectl create -f kube-seata.yml
```

kube-seata.yml

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: seata-server-config
  namespace: default
data:
  application.yml: |
  
    server:
      port: 7091
    spring:
      application:
        name: seata-server

    logging:
      config: classpath:logback-spring.xml
      file:
        path: ${user.home}/logs/seata
      extend:
        logstash-appender:
          destination: 127.0.0.1:4560
        kafka-appender:
          bootstrap-servers: 127.0.0.1:9092
          topic: logback_to_logstash

    console:
      user:
        username: seata
        password: seata

    seata:
      config:
        # support: nacos, consul, apollo, zk, etcd3
        type: nacos
        nacos:
          server-addr: nacos-headless:8848 # name of the nacos Service
          namespace:
          group: SEATA_GROUP
          username:
          password:
          context-path:
      registry:
        # support: nacos, eureka, redis, zk, consul, etcd3, sofa
        type: nacos
        nacos:
          application: seata-server
          server-addr: nacos-headless:8848
          group: SEATA_GROUP
          namespace:
          cluster: default
          username:
          password:
          context-path:
      store:
        # support: file 、 db 、 redis
        mode: db
        db:
          datasource: druid
          db-type: mysql
          driver-class-name: com.mysql.cj.jdbc.Driver
          url: jdbc:mysql://mysql-pro:3306/seata?rewriteBatchedStatements=true  # name of the mysql Service
          user: demo2
          password: 123456
          min-conn: 10
          max-conn: 100
          global-table: global_table
          branch-table: branch_table
          lock-table: lock_table
          distributed-lock-table: distributed_lock
          query-limit: 1000
          max-wait: 5000
    #  server:
    #    service-port: 8091 #If not configured, the default is '${server.port} + 1000'
      security:
        secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
        tokenValidityInMilliseconds: 1800000
        ignore:
          urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: seata-server
  namespace: default
  labels:
    k8s-app: seata-server
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: seata-server
  template:
    metadata:
      labels:
        k8s-app: seata-server
    spec:
      containers:
        - name: seata-server
          image: docker.io/seataio/seata-server:1.6.1
          imagePullPolicy: IfNotPresent
          ports:
            - name: http-7091
              containerPort: 7091
              protocol: TCP
            - name: http-8091
              containerPort: 8091
              protocol: TCP
          volumeMounts:
            - name: seata-config
              mountPath: /seata-server/resources/application.yml
              subPath: application.yml
      volumes:
        - name: seata-config
          configMap:
            name: seata-server-config
---
apiVersion: v1
kind: Service
metadata:
  name: seata-server
  namespace: default
  labels:
    k8s-app: seata-server
spec:
  ports:
    - port: 8091
      protocol: TCP
      name: seata-8091
    - port: 7091
      protocol: TCP
      name: seata-7091
  selector:
    k8s-app: seata-server
```
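Once the deployment is up, seata-server should appear in the service list in the Nacos console. A couple of standard checks:

```bash
kubectl rollout status deployment/seata-server
kubectl logs deployment/seata-server --tail=50
```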

6. Deploy the Java project

This is just one example; adapt it to your own needs.

6.1 Build the Docker image for the cloud-alibaba services

Run this on the node only.

If this command is unfamiliar, see my previous Docker article:

centos8安装docker运行java文件_centos8 docker安装java8-CSDN博客

Put cloud-alibaba-gateway8000-1.0-SNAPSHOT.jar and ali-gateway.dockerfile in the same directory, then run:

```bash
docker build -f ali-gateway.dockerfile -t ali-cloud-gateway:latest .
```

ali-gateway.dockerfile

```dockerfile
FROM openjdk:8
RUN mkdir /usr/local/project
ADD cloud-alibaba-gateway8000-1.0-SNAPSHOT.jar  /usr/local/project/cloud-alibaba-gateway8000-1.0-SNAPSHOT.jar
EXPOSE 8000
ENTRYPOINT ["java","-jar","/usr/local/project/cloud-alibaba-gateway8000-1.0-SNAPSHOT.jar"]
```

After it runs, verify that the image was created.
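One way to check:

```bash
docker images | grep ali-cloud-gateway
```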

6.2 Deploy ali-cloud-gateway

Run this on the master only.

The gateway is the entry point of the Spring Cloud services, so it needs a Service to expose a port; when deploying the other services you can drop the Service part. I use a Deployment here for convenience; Deployments really suit standalone apps, while cloud services fit StatefulSets better (easier log collection). For a StatefulSet variant, adapt the Nacos deployment from 4.2.

```bash
kubectl create -f ali-cloud-gateway.yml
```

ali-cloud-gateway.yml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ali-cloud-gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ali-cloud-gateway
  template:
    metadata:
      labels:
        app: ali-cloud-gateway
    spec:
      containers:
        - name: ali-cloud-gateway
          image: ali-cloud-gateway:latest
          imagePullPolicy: IfNotPresent # the image exists only locally on the node; with a :latest tag the default policy (Always) would try to pull it from a registry and fail
          ports:
            - containerPort: 8000
 
---
apiVersion: v1
kind: Service
metadata:
  name: ali-cloud-gateway-svc
spec:
  selector:
    app: ali-cloud-gateway
  ports:
    - port: 8000 # virtual port exposed by the Service, mapped to the container's real port
      protocol: TCP
      name: ali-cloud-gateway-svc # port name, for readability
      targetPort: 8000 # port the container exposes; if omitted, targetPort defaults to port
      nodePort: 30005
  type: NodePort
```

Access the project through the port the gateway exposes; it should respond.
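For example, with curl; the path here is hypothetical, so substitute a route your gateway actually defines:

```bash
curl http://192.168.233.21:30005/your-service/your-endpoint
```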

6.3 Deploy Sentinel

Run this on the node only.

Sentinel is just a single jar; deployment closely mirrors 6.2, so only the highlights:

```bash
docker build -f sentinel.dockerfile -t sentinel:1.8.6 .
```

sentinel.dockerfile

```dockerfile
FROM openjdk:8
RUN mkdir /usr/local/project
ADD sentinel-dashboard-1.8.6.jar  /usr/local/project/sentinel-dashboard-1.8.6.jar
EXPOSE 8080
EXPOSE 8719
ENTRYPOINT ["java","-Dserver.port=8080","-Dcsp.sentinel.dashboard.server=localhost:8080","-Dproject.name=sentinel-dashboard","-jar","/usr/local/project/sentinel-dashboard-1.8.6.jar"]
```

Run this on the master only.

To expose the Sentinel console externally, you can follow 4.3, where nginx proxies Nacos.

```bash
kubectl create -f kube-sentinel.yml
```

kube-sentinel.yml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sentinel-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sentinel
  template:
    metadata:
      labels:
        app: sentinel
    spec:
      containers:
        - name: sentinel
          image: sentinel:1.8.6
          ports:
            - containerPort: 8080
            - containerPort: 8719
 
---
apiVersion: v1
kind: Service
metadata:
  name: sentinel-front
spec:
  selector:
    app: sentinel
  ports:
    - port: 8080 # virtual port exposed by the Service, mapped to the container's real port
      protocol: TCP
      name: sentinel-front # port name, for readability
      targetPort: 8080 # port the container exposes; if omitted, targetPort defaults to port
    - port: 8719
      protocol: TCP
      name: sentinel-service
```
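Since sentinel-front is a plain ClusterIP Service here, a quick way to reach the console without setting up the nginx proxy is a port-forward; the dashboard's default login is sentinel/sentinel:

```bash
kubectl port-forward svc/sentinel-front 8080:8080
# then open http://localhost:8080
```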

That's everything needed to deploy the whole project. Some steps repeat, and where they do I've noted which section covers them; try building it yourself to consolidate the knowledge.
