Microservices
The concept of microservices was first put forward in 2014 by Martin Fowler and James Lewis. They described a microservice architecture as one that builds a single application out of small services, each running in its own process and communicating through lightweight mechanisms, typically HTTP APIs. Services are modeled around business capabilities and deployed fully automatically, with only a bare minimum of centralized management (for example via Docker); different services may be written in different programming languages and use different databases.
Monolithic Applications
Before we get into microservices, let me first introduce monolithic applications. If you have never felt the pain of a monolith, you will not deeply appreciate the value of microservices.
In the early years, the technology stacks of the big internet companies fell roughly into two camps: LAMP (Linux + Apache + MySQL + PHP) and MVC (Spring + iBatis/Hibernate + Tomcat). Both were designed for monolithic applications. Their strengths are a low learning curve, fast onboarding, and straightforward testing, deployment, and operations; a single person could build and ship an entire website.
Take the MVC stack as an example: the application is typically deployed as a WAR package into Tomcat, and once Tomcat is started and listening on a port, it serves traffic. Early on, while the business is small and the development team is small, a monolithic architecture keeps both development and operations costs under control.
However, as the business grows and the development team expands, the monolithic architecture starts to show problems.
(1) All features are coupled together and affect one another, eventually becoming unmanageable.
(2) Changing even a single line of code forces the entire application to be rebuilt and redeployed, which is very costly.
(3) Because the software is one big unit, features cannot be developed and tested individually; everything must be developed and tested as a whole, which effectively forces a waterfall development model.
Service-Oriented Architecture
To solve these problems, people proposed long ago to break the coupling in the code and split the monolith into independent functional units.
Roughly twenty-odd years ago, with the rise of the internet, those functional units could be offered as remote "services", which gave birth to service-oriented architecture (SOA).
A service is a program that runs continuously in the background and provides some capability. The most common example is a web service, which serves web pages on port 80.
Service-oriented architecture splits a large monolithic program into independent services, that is, smaller programs. Each service is a standalone functional unit with its own responsibility, and services are connected to each other by communication protocols.
This architecture has many advantages.
(1) Each service does one thing, like a small piece of software, which makes it easy to develop and test.
(2) Services run independently, which simplifies the architecture and improves reliability.
(3) It encourages and supports code reuse: the same service can serve multiple purposes.
(4) Services can be developed and deployed independently, which makes upgrades easier.
(5) It scales well: it is easy to add machines and features to handle heavy load.
(6) Single points of failure are less likely: if one service fails, the others are unaffected.
Unlike a monolith, service-oriented architecture is language-agnostic: different services can be built with different languages and tools, and may need to be deployed on different systems and environments.
This means a service-oriented architecture by default runs across multiple servers, each server providing one service, with the servers together forming a complete networked application.
What Are Microservices?
In 2014 Docker appeared and completely changed the face of software development. It runs programs inside containers, each with its own configurable runtime environment, while consuming very few system resources.
Obviously, containers can be used to implement service-oriented architecture: each service no longer occupies a whole server, but a container.
Multiple servers are then no longer required. In the simplest case, running several containers on one machine realizes a service-oriented architecture on a single server, something previously impossible. This style of implementation is what we call microservices.
Characteristics of microservices:
- Finer-grained service decomposition. Microservices push decomposition further: even a single sub-module can become a microservice, as long as the resources it depends on are independent of other modules.
- Independent deployment. Each microservice is strictly packaged and deployed on its own, without affecting the others. For example, one physical machine can run multiple Docker instances, each hosting the code of one microservice.
- Independent maintenance. Each microservice can be developed, tested, released, and operated by a small team or even an individual, who owns its entire life cycle.
- Higher demands on service governance. Splitting into microservices multiplies the number of services, so a unified governance platform is needed to manage them all.
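The governance point above — many services needing a central place to register and be found — can be illustrated with a toy name-to-address registry in shell. The service names and addresses here are made up for illustration; a real platform such as Nacos does this over HTTP and adds health checks, heartbeats, and change notification:

```shell
# Toy service registry: store name=address pairs, look them up by name.
registry=""

register() {                       # register <name> <address>
  registry="${registry}${1}=${2}
"
}

discover() {                       # discover <name> -> address (or "unknown")
  match=$(printf '%s' "$registry" | grep "^${1}=" | cut -d= -f2)
  echo "${match:-unknown}"
}

register ruoyi-auth 10.98.224.181:8080
register ruoyi-system 10.108.171.173:8080

discover ruoyi-auth      # prints 10.98.224.181:8080
discover ruoyi-payment   # prints unknown
```

A governance platform is essentially this lookup table made highly available and kept up to date automatically as service instances come and go.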
Microservice Architecture
Account service
The account service provides information related to customer accounts, such as addresses and payment details.
Inventory service
Provides up-to-date stock information so customers can buy items.
Cart service
Customers use this service to pick the items they want to buy from the inventory.
Payment service
Customers pay for the items in their cart.
Shipping service
This service arranges the packaging and delivery of the purchased items.
Applications interact with the microservices through the REST APIs those services publish. The API gateway lets applications depend on the APIs the microservices provide, and also makes it possible to swap a microservice for another one that exposes the same API.
Each microservice consists of a service and a database. The service handles the REST API, implements the business logic, and stores its data in the database.
API endpoints:
/api/v1/account
/api/v1/inventory
/api/v1/cart
/api/v1/pay
/api/v1/ship
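The path-prefix routing the API gateway performs for these endpoints can be sketched as a shell function; the target service names are illustrative, not part of the project:

```shell
# Sketch of prefix-based routing, as an API gateway might do it for
# the endpoint list above. Service names are illustrative.
route() {
  case "$1" in
    /api/v1/account*)   echo account-service   ;;
    /api/v1/inventory*) echo inventory-service ;;
    /api/v1/cart*)      echo cart-service      ;;
    /api/v1/pay*)       echo payment-service   ;;
    /api/v1/ship*)      echo shipping-service  ;;
    *)                  echo not-found         ;;
  esac
}

route /api/v1/cart/42    # prints cart-service
route /metrics           # prints not-found
```

Real gateways (nginx, Spring Cloud Gateway) express the same mapping as route rules rather than code, but the lookup is the same idea.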
Deploying Microservices
Using the RuoYi project as an example, this section walks through compiling, packaging, and deploying a Spring Boot based microservice project.
RuoYi is a distributed microservice architecture with a Vue/Element UI frontend and a Spring Boot/Spring Cloud & Alibaba backend, built with frontend/backend separation.
Deployment architecture
Tooling
Bash
JDK >= 1.8 (1.8 recommended)
Maven >= 3.0
Node >= 12
MySQL >= 5.7.0 (5.7 recommended)
Redis >= 3.0
Nacos >= 2.0.4 (ruoyi-cloud < 3.0 requires Nacos 1.4.x)
Git
Bash
yum install git
git clone https://gitee.com/ss190720173/ruo-yi-cloud.git # personal Git repository
JDK (Java Development Kit) / JRE (Java Runtime Environment)
Bash
tar xvf jdk-8u291-linux-x64.tar.gz -C /usr/local/
ln -s /usr/local/jdk1.8.0_291 /usr/local/jdk1.8
echo "export JAVA_HOME=/usr/local/jdk1.8" >> /etc/profile
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> /etc/profile
source /etc/profile
[root@node-01 tools-package]# java -version
java version "1.8.0_291"
Java(TM) SE Runtime Environment (build 1.8.0_291-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.291-b10, mixed mode)
Maven (compiles Java code into .jar / .war artifacts)
Bash
wget https://archive.apache.org/dist/maven/maven-3/3.5.4/binaries/apache-maven-3.5.4-bin.tar.gz
tar xvf apache-maven-3.5.4-bin.tar.gz -C /usr/local/
ln -s /usr/local/apache-maven-3.5.4 /usr/local/maven
echo "export MAVEN_HOME=/usr/local/maven" >> /etc/profile
echo 'export PATH=$MAVEN_HOME/bin:$PATH' >> /etc/profile
source /etc/profile
[root@node-01 tools-package]# mvn --version
Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-18T02:33:14+08:00)
Maven home: /usr/local/maven
Java version: 1.8.0_291, vendor: Oracle Corporation, runtime: /usr/local/jdk1.8.0_291/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-1160.92.1.el7.x86_64", arch: "amd64", family: "unix"
Node.js (node / npm / pnpm / yarn)
Bash
wget https://nodejs.org/download/release/v16.20.2/node-v16.20.2-linux-x64.tar.gz
tar xvf node-v16.20.2-linux-x64.tar.gz -C /usr/local/
ln -s /usr/local/node-v16.20.2-linux-x64 /usr/local/node
echo "export NODE_HOME=/usr/local/node" >> /etc/profile
echo 'export PATH=$NODE_HOME/bin/:$PATH' >> /etc/profile
source /etc/profile
[root@node-01 tools-package]# node --version
v16.20.2
[root@node-01 tools-package]# npm --version
8.19.4
shell
All of the packages above are also archived in micro-service-deploy.tar and can be taken from there directly.
Build and Package
Clone the code
Bash
git clone https://gitee.com/ss190720173/ruo-yi-cloud.git # personal Git repository
Backend build
Bash
cd RuoYi-Cloud
mvn clean install
Frontend build
Bash
cd RuoYi-Cloud/ruoyi-ui/
npm install --registry=https://registry.npmmirror.com
# npm install --registry=https://registry.npm.taobao.org (Taobao mirror)
npm run build:prod # production build
npm run build:dev  # development build
npm run build:test # test build
Build Images
Backend images
Core backend application modules:
- RuoYiGatewayApplication (gateway module, required)
- RuoYiAuthApplication (auth module, required)
- RuoYiSystemApplication (system module, required)
Bash
[root@node-01 RuoYi-Cloud]# ll ruoyi-gateway/target/ruoyi-gateway.jar
-rw-r--r-- 1 root root 88623811 Dec 7 11:09 ruoyi-gateway/target/ruoyi-gateway.jar
[root@node-01 RuoYi-Cloud]# ll ruoyi-auth/target/ruoyi-auth.jar
-rw-r--r-- 1 root root 80499767 Dec 7 11:09 ruoyi-auth/target/ruoyi-auth.jar
[root@node-01 RuoYi-Cloud]# ll ruoyi-modules/ruoyi-system/target/ruoyi-modules-system.jar
-rw-r--r-- 1 root root 96844598 Dec 7 11:09 ruoyi-modules/ruoyi-system/target/ruoyi-modules-system.jar
Backend base image Dockerfile
Dockerfile
# Base image
FROM centos:7
# Maintainer
LABEL maintainer chijinjing<chijinjing@xinxianghf.com>
RUN mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup # switch to the Aliyun mirror
RUN curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
RUN sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
RUN curl -o /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo
# Install tools
RUN yum -y install telnet lrzsz && yum clean all
ENV JAVA_VERSION=1.8.0_291
COPY jre${JAVA_VERSION} /usr/local/jre${JAVA_VERSION}
ENV JAVA_HOME=/usr/local/jre${JAVA_VERSION}
ENV PATH=${JAVA_HOME}/bin:$PATH
CMD ["/bin/bash"]
[root@node-01 centos-jre1.8]# docker build -t centos-jre:1.8 .
[root@node-01 centos-jre1.8]# docker images centos-jre:1.8
REPOSITORY TAG IMAGE ID CREATED SIZE
centos-jre 1.8 7513b16211b1 13 seconds ago 481MB
Application Dockerfiles
ruoyi-gateway
Dockerfile
cd RuoYi-Cloud/docker/ruoyi/gateway/
# the paths below are relative to /root, so cd back to /root after copying and editing
# write the dockerfile (replace the existing one if present)
vim dockerfile
FROM centos-jre:1.8
LABEL maintainer chijinjing<chijinjing@xinxianghf.com>
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
RUN echo 'Asia/Shanghai' >/etc/timezone
ENV jar ruoyi-gateway.jar
ENV workdir /data/app/
COPY ${jar} ${workdir}
WORKDIR ${workdir}
ENTRYPOINT ["sh", "-ec", "exec java ${JAVA_OPTS} -jar ${jar} ${PARAMS} "]
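The `sh -ec "exec java ${JAVA_OPTS} -jar ${jar} ${PARAMS}"` wrapper exists so that `JAVA_OPTS` and `PARAMS` supplied at `docker run -e` time are expanded when the container starts, rather than baked in at build time. The expansion can be checked locally, with `echo` standing in for `java`:

```shell
# Mimic the container environment: jar comes from the Dockerfile ENV,
# JAVA_OPTS/PARAMS would be injected via "docker run -e".
export jar=ruoyi-gateway.jar
export JAVA_OPTS="-Xms1024M -Xmx1024M"
export PARAMS="--spring.profiles.active=prod"

# Same form as the ENTRYPOINT, with echo standing in for java:
cmd=$(sh -ec 'echo java ${JAVA_OPTS} -jar ${jar} ${PARAMS}')
echo "$cmd"   # java -Xms1024M -Xmx1024M -jar ruoyi-gateway.jar --spring.profiles.active=prod
```

The `exec` in the real ENTRYPOINT additionally replaces the shell with the java process, so the JVM becomes PID 1 and receives stop signals directly.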
ruoyi-auth
Dockerfile
cd RuoYi-Cloud/docker/ruoyi/gateway/
# the paths below are relative to /root, so cd back to /root after copying and editing
# write the dockerfile (replace the existing one if present)
vim dockerfile
FROM centos-jre:1.8
LABEL maintainer chijinjing<chijinjing@xinxianghf.com>
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
RUN echo 'Asia/Shanghai' >/etc/timezone
ENV jar ruoyi-auth.jar
ENV workdir /data/app/
COPY ${jar} ${workdir}
WORKDIR ${workdir}
ENTRYPOINT ["sh", "-ec", "exec java ${JAVA_OPTS} -jar ${jar} ${PARAMS} "]
ruoyi-modules-system
Dockerfile
cd RuoYi-Cloud/docker/ruoyi/modules/system
# the paths below are relative to /root, so cd back to /root after copying and editing
# write the dockerfile (replace the existing one if present)
vim dockerfile
FROM centos-jre:1.8
LABEL maintainer chijinjing<chijinjing@xinxianghf.com>
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
RUN echo 'Asia/Shanghai' >/etc/timezone
ENV jar ruoyi-modules-system.jar
ENV workdir /data/app/
COPY ${jar} ${workdir}
WORKDIR ${workdir}
ENTRYPOINT ["sh", "-ec", "exec java ${JAVA_OPTS} -jar ${jar} ${PARAMS} "]
Copy the build artifacts into the docker directories
Bash
# the paths below are relative to /root, so cd back to /root between steps
cp RuoYi-Cloud/ruoyi-gateway/target/ruoyi-gateway.jar RuoYi-Cloud/docker/ruoyi/gateway/
cp RuoYi-Cloud/ruoyi-auth/target/ruoyi-auth.jar RuoYi-Cloud/docker/ruoyi/auth/
cp RuoYi-Cloud/ruoyi-modules/ruoyi-system/target/ruoyi-modules-system.jar RuoYi-Cloud/docker/ruoyi/modules/system
Build the application images
Bash
# the paths below are relative to /root, so cd back to /root between builds
cd RuoYi-Cloud/docker/ruoyi/gateway
docker build -t ruoyi-gateway:1.0.1 .
cd RuoYi-Cloud/docker/ruoyi/auth
docker build -t ruoyi-auth:1.0.1 .
cd RuoYi-Cloud/docker/ruoyi/modules/system
docker build -t ruoyi-modules-system:1.0.1 .
List the application images
Bash
[root@jenkins system]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ruoyi-modules-system 1.0.1 b5dbf08a49ab 6 seconds ago 577MB
ruoyi-auth 1.0.1 4434bf72b713 2 minutes ago 561MB
ruoyi-gateway 1.0.1 b729e620a4a6 3 minutes ago 569MB
Frontend image
Bash
# copy the frontend build output
cd /root
cp -rf RuoYi-Cloud/ruoyi-ui/dist/ RuoYi-Cloud/docker/nginx/dist
Dockerfile
Bash
cd RuoYi-Cloud/docker/nginx/
# the paths are relative to /root, so cd back to /root after copying and editing
# write the dockerfile (replace the existing one if present)
vim dockerfile
FROM nginx:1.22.1-alpine
COPY /dist /usr/local/web/
COPY nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]
cd RuoYi-Cloud/docker/nginx/
docker build -t ruoyi-ui:1.0.1 .
Base Environment and Middleware
Deploy a Kubernetes Cluster
Install with KubeKey
Download KubeKey
Bash
# download kubekey
mkdir ~/kubekey
cd ~/kubekey/
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | sh -
# list the supported Kubernetes versions
./kk version --show-supported-k8s
# generate the config file
./kk create config --name opsxlab -f ksp-v341-v1288.yaml --with-kubernetes v1.23.15 --with-kubesphere v3.4.1
# install on both servers
yum -y install conntrack socat
Edit the config file
Bash
vim ksp-v341-v1288.yaml
# contents:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 172.17.0.114, user: root, password: "sR00tgemUKB"}
  - {name: node2, address: 172.17.0.115, user: root, password: "sRgemUKB"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy  # uncomment this line to enable the internal load balancer
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.15
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  registry:
    privateRegistry: "registry.cn-beijing.aliyuncs.com" # use the Aliyun mirror
    namespaceOverride: "kubesphereio" # KubeSphere's official namespace on the Aliyun mirror
    registryMirrors: []
    insecureRegistries: []
  # not present by default; optional, used for the default PV path
  storage:
    openebs:
      basePath: /data/openebs/local
Install
Bash
export KKZONE=cn
./kk create cluster -f ksp-v341-v1288.yaml
Verify
Bash
kubectl get pods -A
Deploy Middleware
Install Helm
shell
cd /root
wget https://get.helm.sh/helm-v3.12.3-linux-amd64.tar.gz
tar -xvzf helm-v3.12.3-linux-amd64.tar.gz
mv linux-amd64/helm /usr/bin/
Set up the NFS provisioner
shell
# set up the NFS server on 192.168.90.80
yum install -y nfs-utils rpcbind
mkdir -p /data/nfs
echo "/data/nfs 192.168.90.0/24(rw,sync,no_subtree_check,no_root_squash,insecure)" >> /etc/exports
systemctl restart nfs
# back on the k8s worker nodes
yum install -y nfs-utils
cd nfs-provisioner/
# deploy the provisioner
kubectl apply -f 01-rbac.yaml -f 02-deployment.yaml -f nfs-client-retained-sc.yaml
# after deploying mysql and redis, check the PVs
kubectl get pv
MySQL
Deploy a MySQL instance with Helm
Bash
cd /root
helm repo add bitnami https://charts.bitnami.com/bitnami
helm pull bitnami/mysql --version=9.14.4
Edit the values
vim mysql-values.yaml
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []
  storageClass: "nfs-client-retained"
image:
  registry: docker.io
  repository: bitnami/mysql
  tag: 8.0.35-debian-11-r0
  digest: ""
## @param architecture MySQL architecture (`standalone` or `replication`)
##
architecture: standalone
## MySQL Authentication parameters
##
auth:
  ## @param auth.rootPassword Password for the `root` user. Ignored if existing secret is provided
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mysql#setting-the-root-password-on-first-run
  ##
  rootPassword: "eSV3nliDVirLyR[esybiH93pYLH"
  ## @param auth.createDatabase Whether to create the .Values.auth.database or not
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mysql#creating-a-database-on-first-run
  ##
  createDatabase: false
  ## @param auth.database Name for a custom database to create
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mysql#creating-a-database-on-first-run
  ##
  database: "my_database"
  ## @param auth.username Name for a custom user to create
  ## ref: https://github.com/bitnami/containers/tree/main/bitnami/mysql#creating-a-database-user-on-first-run
  ##
  username: ""
  ## @param auth.password Password for the new user. Ignored if existing secret is provided
  ##
  password: ""
Deploy
Bash
# deploy mysql
cd /root/micro-service-deploy/helm-package/
# deployment command
helm upgrade --install mysql -f mysql/mysql-values.yaml ./mysql/ --create-namespace --namespace middleware
[root@jenkins helm-package]# helm upgrade --install mysql -f mysql/mysql-values.yaml ./mysql/ --create-namespace --namespace middleware
Release "mysql" does not exist. Installing it now.
NAME: mysql
LAST DEPLOYED: Thu Dec 7 17:29:44 2023
NAMESPACE: middleware
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: mysql
CHART VERSION: 9.14.4
APP VERSION: 8.0.35
** Please be patient while the chart is being deployed **
Tip:
Watch the deployment status using the command: kubectl get pods -w --namespace middleware
Services:
echo Primary: mysql.middleware.svc.cluster.local:3306
Execute the following to get the administrator credentials:
echo Username: root
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace middleware mysql -o jsonpath="{.data.mysql-root-password}" | base64 -d) # fetch the root password
To connect to your database:
1. Run a pod that you can use as a client:
# run a mysql client pod
kubectl run mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.35-debian-11-r0 --namespace middleware --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash
2. To connect to primary service (read/write):
mysql -h mysql.middleware.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"
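The NOTES above pipe the secret through `base64 -d` because Kubernetes stores Secret data base64-encoded. The round trip can be checked locally; the password string is the `rootPassword` from the values file above:

```shell
# Kubernetes keeps Secret values base64-encoded, which is why the
# jsonpath output must be piped through "base64 -d".
plaintext='eSV3nliDVirLyR[esybiH93pYLH'       # rootPassword from mysql-values.yaml
encoded=$(printf '%s' "$plaintext" | base64)   # what .data.mysql-root-password holds
decoded=$(printf '%s' "$encoded" | base64 -d)  # what the NOTES command prints
echo "$decoded"
```

Note that base64 is encoding, not encryption: anyone with read access to the Secret can recover the password the same way.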
Redis
Bash
cd /root
helm pull bitnami/redis --version=16.13.2
tar xvf redis-16.13.2.tgz
Edit the values
vim redis-values.yaml
global:
  imageRegistry: ""
  ## E.g.
  ## imagePullSecrets:
  ##   - myRegistryKeySecretName
  ##
  imagePullSecrets: []
  storageClass: "nfs-client-retained"
  redis:
    password: "iR+YlGOReK0H0duqGywmivum$is"
image:
  registry: docker.io
  repository: bitnami/redis
  tag: 6.2.7-debian-11-r11
  ## Specify a imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
## @param architecture Redis® architecture. Allowed values: `standalone` or `replication`
##
architecture: standalone
Deploy
Bash
# deploy redis
cd /root/micro-service-deploy/helm-package/
# deployment command
helm upgrade --install redis -f redis/redis-values.yaml ./redis/ --create-namespace --namespace middleware
[root@jenkins helm-package]# helm upgrade --install redis -f redis/redis-values.yaml ./redis/ --create-namespace --namespace middleware
Release "redis" does not exist. Installing it now.
NAME: redis
LAST DEPLOYED: Thu Dec 7 17:41:05 2023
NAMESPACE: middleware
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: redis
CHART VERSION: 16.13.2
APP VERSION: 6.2.7
** Please be patient while the chart is being deployed **
Redis® can be accessed via port 6379 on the following DNS name from within your cluster:
redis-master.middleware.svc.cluster.local
To get your password run:
# fetch the redis password
export REDIS_PASSWORD=$(kubectl get secret --namespace middleware redis -o jsonpath="{.data.redis-password}" | base64 -d)
To connect to your Redis® server:
1. Run a Redis® pod that you can use as a client:
kubectl run --namespace middleware redis-client --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:6.2.7-debian-11-r11 --command -- sleep infinity
Use the following command to attach to the pod:
kubectl exec --tty -i redis-client \
--namespace middleware -- bash
2. Connect using the Redis® CLI:
REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h redis-master
To connect to your database from outside the cluster execute the following commands:
kubectl port-forward --namespace middleware svc/redis-master 6379:6379 &
REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h 127.0.0.1 -p 6379
Nacos
Nacos is dedicated to helping you discover, configure, and manage microservices. It provides an easy-to-use set of features for dynamic service discovery, service configuration, service metadata, and traffic management.
- Create the database:
Bash
# create the database
CREATE DATABASE `db_nacos` CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
CREATE USER 'nacos'@'%' IDENTIFIED BY 'oweBohC0wzePaNoGUDqYzel#ond';
GRANT ALL PRIVILEGES ON `db_nacos`.* TO 'nacos'@'%';
FLUSH PRIVILEGES;
- Import the table schema
- Deploy:
Bash
git clone https://github.com/nacos-group/nacos-k8s.git
Edit the values
YAML
# Default values for nacos.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
global:
  mode: standalone
  # mode: cluster

############################ nacos ###########################
namespace: middleware
nacos:
  image:
    repository: nacos/nacos-server
    tag: v2.2.3
    pullPolicy: IfNotPresent
  plugin:
    enable: true
    image:
      repository: nacos/nacos-peer-finder-plugin
      tag: 1.1
      pullPolicy: IfNotPresent
  replicaCount: 1
  podManagementPolicy: Parallel
  domainName: cluster.local
  preferhostmode: hostname
  serverPort: 8848
  health:
    enabled: false
  storage:
    # type: embedded
    type: mysql
    db:
      host: mysql.middleware.svc.cluster.local
      name: ry-cloud
      port: 3306
      username: ruoyi
      password: oweBohC0wzePaNoGUDqYzel#ond
      param: characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useSSL=false
persistence:
  enabled: true
  data:
    accessModes:
      - ReadWriteOnce
    storageClassName: nfs-client-retained
    resources:
      requests:
        storage: 5Gi
service:
  # type: ClusterIP
  type: NodePort
  port: 8848
  nodePort: 30000
ingress:
  enabled: false
  # apiVersion: extensions/v1beta1
  apiVersion: networking.k8s.io/v1
  annotations: { }
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: "true"
  # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName
  # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress
  # ingressClassName: nginx
  ingressClassName: "nginx"
  hosts:
    - host: nacos.example.com
      # paths: [ ]
  tls: [ ]
  # - secretName: chart-example-tls
  #   hosts:
  #     - chart-example.local
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 1000m
    memory: 2Gi
  requests:
    cpu: 500m
    memory: 1Gi
annotations: { }
nodeSelector: { }
tolerations: [ ]
affinity: { }
Deploy
Bash
helm upgrade --install nacos -f helm/orig-values.yaml ./helm/ --create-namespace --namespace middleware
[root@jenkins nacos-k8s]# helm upgrade --install nacos -f helm/values.yaml ./helm/ --create-namespace --namespace middleware
Release "nacos" does not exist. Installing it now.
NAME: nacos
LAST DEPLOYED: Thu Dec 7 18:22:45 2023
NAMESPACE: middleware
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace middleware -o jsonpath="{.spec.ports[0].nodePort}" services nacos-cs)
export NODE_IP=$(kubectl get nodes --namespace middleware -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT/nacos
2. MODE:
standalone: you need to modify replicaCount in the values.yaml, .Values.replicaCount=1
cluster: kubectl scale sts middleware-nacos --replicas=3
[root@node-01 helm]# kubectl -n middleware get svc nacos-cs
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nacos-cs NodePort 10.104.92.105 <none> 8848:30044/TCP,9848:32241/TCP,9849:32448/TCP,7848:30000/TCP 3h22m
Open the NodePort that maps to port 8848.
Connect to the MySQL and Redis Pods
Bash
# Redis
export REDIS_PASSWORD=$(kubectl get secret --namespace middleware redis -o jsonpath="{.data.redis-password}" | base64 -d)
kubectl run --namespace middleware redis-client --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:6.2.7-debian-11-r11 --command -- sleep infinity
kubectl exec --tty -i redis-client \
--namespace middleware -- bash
redis-cli -h redis-master.middleware.svc.cluster.local -a 'iR+YlGOReK0H0duqGywmivum$is'
# MySQL
MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace middleware mysql -o jsonpath="{.data.mysql-root-password}" | base64 -d)
kubectl run mysql-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.35-debian-11-r0 --namespace middleware --env MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD --command -- bash
mysql -h mysql.middleware.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"
Deploy the Application to Kubernetes
Create and initialize the databases
1. Create database ry-cloud and import ry_2021xxxx.sql (required) and quartz.sql (optional).
2. Create database ry-config and import ry_config_2021xxxx.sql (required).
Bash
[root@node-01 sql]# kubectl -n middleware cp ry_20231130.sql mysql-0:/tmp/
[root@node-01 sql]#
[root@node-01 sql]# kubectl -n middleware cp ry_config_20231204.sql mysql-0:/tmp/
[root@node-01 sql]# MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace middleware mysql -o jsonpath="{.data.mysql-root-password}" | base64 -d)
[root@node-01 sql]# kubectl -n middleware exec -it mysql-0 -- bash
I have no name!@mysql-client:/$ cd /tmp/
I have no name!@mysql-client:/tmp$ ls
bitnami ry_20231130.sql ry_config_20231204.sql
I have no name!@mysql-client:/$ mysql -h mysql.middleware.svc.cluster.local -uroot -p"$MYSQL_ROOT_PASSWORD"
# create the RuoYi database ry-cloud
CREATE DATABASE `ry-cloud` CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
CREATE USER 'ruoyi'@'%' IDENTIFIED BY 'oweBohC0wzePaNoGUDqYzel#ond';
GRANT ALL PRIVILEGES ON `ry-cloud`.* TO 'ruoyi'@'%';
mysql> use ry-cloud;
Database changed
mysql>
mysql> source ry_20231130.sql
# import the RuoYi Nacos configuration data
mysql> source ry_config_20231204.sql
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| nacos |
| performance_schema |
| ry-cloud |
| ry-config |
| sys |
+--------------------+
7 rows in set (0.00 sec)
GRANT ALL PRIVILEGES ON `ry-config`.* TO 'ruoyi'@'%';
Update Nacos's database connection settings
Bash
helm -n middleware uninstall nacos
helm upgrade --install nacos -f helm/ruoyi-values.yaml ./helm/ --create-namespace --namespace middleware
Edit the application config files
Bash
ruoyi-gateway: /data/project/RuoYi-Cloud/ruoyi-gateway/src/main/resources/bootstrap.yml
ruoyi-auth: /data/project/RuoYi-Cloud/ruoyi-auth/src/main/resources/bootstrap.yml
ruoyi-system: /data/project/RuoYi-Cloud/ruoyi-modules/ruoyi-system/src/main/resources/bootstrap.yml
nacos:
  discovery:
    # service registry address
    server-addr: nacos-cs.middleware.svc.cluster.local:8848
  config:
    # configuration center address
    server-addr: nacos-cs.middleware.svc.cluster.local:8848
    # configuration file format
    file-extension: yml
    # shared configuration
    shared-configs:
      - application-${spring.profiles.active}.${spring.cloud.nacos.config.file-extension}
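The `shared-configs` entry resolves its placeholders before the data id is requested from Nacos. A small sketch of that substitution (the profile value `dev` is illustrative):

```shell
# How the shared-configs placeholder resolves: the active profile and the
# file extension are substituted into the data id requested from Nacos.
active_profile=dev   # illustrative value of ${spring.profiles.active}
file_extension=yml   # the file-extension setting above
data_id="application-${active_profile}.${file_extension}"
echo "$data_id"      # application-dev.yml
```

So all services sharing that profile pull the same `application-<profile>.yml` document from the config center, on top of their own service-specific config.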
Middleware connection details
Redis: redis-master.middleware.svc.cluster.local (password iR+YlGOReK0H0duqGywmivum$is)
MySQL: mysql-headless.middleware.svc.cluster.local (user ruoyi, password oweBohC0wzePaNoGUDqYzel#ond)
Write the application deployment YAML
Modules to deploy and their images:
- RuoYiGatewayApplication (gateway module, required) → ruoyi-gateway
- RuoYiAuthApplication (auth module, required) → ruoyi-auth
- RuoYiSystemApplication (system module, required) → ruoyi-modules-system
- frontend → ruoyi-ui
Taking ruoyi-gateway as the example:
YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-gateway
  labels:
    app: ruoyi-gateway
spec:
  replicas: 1
  minReadySeconds: 30
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  selector:
    matchLabels:
      app: ruoyi-gateway
  template:
    metadata:
      labels:
        app: ruoyi-gateway
    spec:
      containers:
      - name: ruoyi-gateway
        image: ruoyi-gateway:1.0.2
        env:
        - name: JAVA_OPTS
          value: -Xms1024M -Xmx1024M
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 2
            memory: 2048Mi
          requests:
            cpu: 500m
            memory: 1024Mi
        startupProbe:
          periodSeconds: 5
          tcpSocket:
            port: 8080
        livenessProbe:
          periodSeconds: 10
          tcpSocket:
            port: 8080
        readinessProbe:
          periodSeconds: 15
          tcpSocket:
            port: 8080
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-gateway-svc
  labels:
    app: ruoyi-gateway
spec:
  selector:
    app: ruoyi-gateway
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: http
[root@node-01 k8s]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ruoyi-auth-7794bdfff4-g7hrj 1/1 Running 0 2d
ruoyi-gateway-5894d46cfb-lpmkb 1/1 Running 1 (2d ago) 2d
ruoyi-modules-system-55b597b4b-4ndqk 1/1 Running 0 2d
ruoyi-ui-84f465796c-69cmp 1/1 Running 0 2d
[root@node-01 k8s]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ruoyi-auth-svc ClusterIP 10.98.224.181 <none> 8080/TCP 3d5h
ruoyi-gateway-svc ClusterIP 10.97.230.242 <none> 8080/TCP 3d5h
ruoyi-modules-system-svc ClusterIP 10.108.171.173 <none> 8080/TCP 3d4h
ruoyi-ui-svc ClusterIP 10.105.39.43 <none> 80/TCP 2d
Change the ruoyi-ui-svc Service type to NodePort
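One way to make that change (a sketch; `kubectl edit svc ruoyi-ui-svc` works too) is a strategic merge patch. Only the patch string is built here; the `kubectl` call itself must run against the cluster:

```shell
# Strategic-merge patch that flips a Service to NodePort.
PATCH='{"spec":{"type":"NodePort"}}'
echo "$PATCH"

# Against the cluster (not runnable here):
#   kubectl patch svc ruoyi-ui-svc -p "$PATCH"
#   kubectl get svc ruoyi-ui-svc   # should now show a nodePort in the 30000-32767 range
```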
Open ruoyi-ui
Username: admin  Password: admin123
Deploying Applications with Jenkins
https://www.jenkins.io/download/
Bash
wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo --no-check-certificate
rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
# Add required dependencies for the jenkins package
sudo yum install fontconfig
sudo yum install jenkins-2.361.4-1.1.noarch
sudo systemctl daemon-reload
https://www.jenkins-zh.cn/tutorial/management/plugin/update-center/
Pushing your own source code with Git
shell
# set your Git username and email
git config --global user.name "ss190720173"
git config --global user.email "3397316724@qq.com"
# cd into the source directory to upload, then initialize it
cd ruoyi-cloud
git init
# stage and commit the code
git add .
git commit -m "Initial commit"
# create the repository on Gitee
shell
# back on the server, add the remote repository
git remote add origin https://gitee.com/ss190720173/ruo-yi-cloud.git
# push the code (note: the first push prompts for username and password)
git push -u origin master
# upload complete
# in day-to-day work:
git push  # developers push changes
git clone # clone the repository
git pull  # ops pull the latest code