Problem
I need to deploy Spring Boot services in a Kubernetes cluster and expose them through a Kubernetes Ingress. How do I do that?
This assumes the Kubernetes cluster is already up and the container images for the Spring Boot services have already been built; the focus here is on the Kubernetes manifests and service orchestration.
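As a quick optional sanity check, confirm that kubectl is pointed at the right cluster and the nodes are ready before going further:
bash
kubectl config current-context
kubectl get nodes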
A picture is worth a thousand words
(Figure omitted: the Ingress fan-out diagram from the Kubernetes Ingress tutorial.)
Following that layout, we will deploy one Ingress plus two services.
service1
First use kubectl to generate a deployment.yaml and a service.yaml, then merge the two into a single file, service1.yaml. The commands are as follows:
Generate deployment.yaml:
bash
kubectl create deployment service1 --image xxx.dkr.ecr.us-east-1.amazonaws.com/service1:latest -o yaml --dry-run=client > k8s/deployment.yaml
Generate service.yaml:
bash
kubectl create service clusterip service1 --tcp 8080:8080 -o yaml --dry-run=client > k8s/service.yaml
Remove whatever you don't need, adjust the configuration to your requirements, and merge the two files into the following:
service1.yaml
yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: service1
  name: service1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service1
  template:
    metadata:
      labels:
        app: service1
    spec:
      containers:
        - image: xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/service1:latest
          name: service1
          resources:
            requests:
              memory: "2Gi"
              cpu: "2"
            limits:
              memory: "2Gi"
              cpu: "2"
          # readiness probe: the pod only receives traffic once this passes
          readinessProbe:
            httpGet:
              path: /foo/actuator/health
              port: 8080
          # liveness probe: the container is restarted when this fails
          livenessProbe:
            httpGet:
              path: /foo/actuator/health
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: service1
  name: service1
spec:
  ports:
    - name: http
      port: 4200
      targetPort: 4200
  selector:
    app: service1
  type: ClusterIP
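One detail worth noting: the probes hit /foo/actuator/health on port 8080 while the Service exposes targetPort 4200, which matches a Spring Boot app that serves traffic on 4200 and the Actuator on a separate management port. A minimal sketch of such a configuration (standard Spring Boot properties; the real application may be set up differently):
yaml
# hypothetical application.yaml matching the manifest above
server:
  port: 4200                # application traffic, matches the Service targetPort
  servlet:
    context-path: /foo      # matches the Ingress path prefix used later
management:
  server:
    port: 8080              # separate management port, matches the probe port
    base-path: /foo         # health URL becomes /foo/actuator/health on 8080
  endpoints:
    web:
      exposure:
        include: health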
service2
Following the same steps as service1, we end up with:
service2.yaml
yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: service2
  name: service2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service2
  template:
    metadata:
      labels:
        app: service2
    spec:
      containers:
        - image: xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/service2:latest
          name: service2
          resources:
            requests:
              memory: "2Gi"
              cpu: "2"
            limits:
              memory: "2Gi"
              cpu: "2"
          # readiness probe: the pod only receives traffic once this passes
          readinessProbe:
            httpGet:
              path: /bar/actuator/health
              port: 8080
          # liveness probe: the container is restarted when this fails
          livenessProbe:
            httpGet:
              path: /bar/actuator/health
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: service2
  name: service2
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: service2
  type: ClusterIP
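Once both manifests are applied, it is worth checking that the Deployments become ready and that a Service actually answers on its health path, for example:
bash
kubectl get deploy service1 service2
kubectl get pods -l 'app in (service1, service2)'
# port-forward service2 and hit its health endpoint locally
kubectl port-forward svc/service2 8080:8080 &
curl -s http://localhost:8080/bar/actuator/health
kill %1   # stop the port-forward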
ingress
Use kubectl to generate a basic Ingress manifest:
bash
kubectl create ingress ingress --rule="/path=service1:8080" -o yaml --dry-run=client > k8s/ingress.yaml
After adjusting it to our needs, it looks like this:
ingress.yaml
yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
    - http:
        paths:
          - backend:
              service:
                name: service1
                port:
                  number: 4200
            path: /foo
            pathType: Prefix
          - backend:
              service:
                name: service2
                port:
                  number: 8080
            path: /bar
            pathType: Prefix
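You can apply this manifest already, but without a controller behind it the Ingress will not get an address; inspecting it is still useful for catching typos in the rules:
bash
kubectl apply -f k8s/ingress.yaml
kubectl get ingress ingress        # ADDRESS stays empty until a controller picks it up
kubectl describe ingress ingress   # shows the rules and backend services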
There is a catch: I am running on AWS, so for this Ingress to work in an AWS environment, some AWS-specific configuration is needed.
Configuring the AWS Load Balancer Controller on AWS EKS
Create an IAM OIDC provider for the cluster
Find the OpenID Connect provider URL of the existing cluster in the EKS console and copy it (screenshot omitted).
Then go to the IAM console and create an IAM OIDC identity provider for the cluster using that URL (screenshots omitted).
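If you prefer the command line to the console screenshots, the same can be done with the AWS CLI and eksctl (cluster name and region are placeholders):
bash
# show the cluster's OIDC issuer URL
aws eks describe-cluster --name <cluster-name> --region <region-code> \
  --query "cluster.identity.oidc.issuer" --output text
# create (associate) the IAM OIDC provider for the cluster
eksctl utils associate-iam-oidc-provider --cluster <cluster-name> --region <region-code> --approve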
Deploy the AWS Load Balancer Controller to EKS
Create the AWSLoadBalancerControllerIAMPolicy policy
My cluster is in the standard (commercial) AWS partition, so the aws-load-balancer-controller IAM policy document to use is:
https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
Download it:
bash
curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json
Create the policy:
bash
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam-policy.json
Create a ServiceAccount for the cluster
bash
eksctl create iamserviceaccount \
--cluster=<cluster-name> \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
--override-existing-serviceaccounts \
--region <region-code> \
--approve
This uses the eksctl command to create a ServiceAccount named aws-load-balancer-controller in the cluster and attach the IAM policy created above. See the official documentation for installing eksctl; it is not covered here.
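To confirm the ServiceAccount exists and is bound to the IAM role, check the annotation eksctl adds:
bash
kubectl get serviceaccount aws-load-balancer-controller -n kube-system -o yaml
# look for the eks.amazonaws.com/role-arn annotation pointing at the created role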
Install aws-load-balancer-controller with Helm
This assumes you are already comfortable with Helm, the Kubernetes package manager.
Add the EKS chart repository to Helm:
bash
helm repo add eks https://aws.github.io/eks-charts
Update the local repository index:
bash
helm repo update eks
Install aws-load-balancer-controller:
bash
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller
After a short wait, output like the following indicates that aws-load-balancer-controller was installed successfully:
bash
NAME: aws-load-balancer-controller
LAST DEPLOYED: Thu Mar 7 15:11:01 2024
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
AWS Load Balancer controller installed!
If you instead get the following error:
Error: INSTALLATION FAILED: cannot re-use a name that is still in use
it means the release already exists; uninstall it first and then install again:
bash
helm delete aws-load-balancer-controller -n kube-system
Check that aws-load-balancer-controller is running in the cluster:
bash
kubectl get deployment -n kube-system aws-load-balancer-controller
A successful installation looks like this:
bash
NAME READY UP-TO-DATE AVAILABLE AGE
aws-load-balancer-controller 2/2 2 2 10m
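The chart also registers an IngressClass named alb by default, which the adjusted Ingress below refers to via ingressClassName; you can confirm it exists:
bash
kubectl get ingressclass
# expected output, roughly:
# NAME   CONTROLLER            PARAMETERS   AGE
# alb    ingress.k8s.aws/alb   <none>       1m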
Adjust the Ingress configuration
yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # name of the ALB
    alb.ingress.kubernetes.io/load-balancer-name: apg2
    # internal (VPC-only) load balancer
    alb.ingress.kubernetes.io/scheme: internal
    # route traffic directly to the pods
    alb.ingress.kubernetes.io/target-type: ip
spec:
  # use the alb IngressClass
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              service:
                name: service1
                port:
                  number: 4200
            path: /foo
            pathType: Prefix
          - backend:
              service:
                name: service2
                port:
                  number: 8080
            path: /bar
            pathType: Prefix
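After applying this, the controller should provision an internal ALB. Its DNS name shows up in the ADDRESS column of the Ingress, and provisioning problems show up as events:
bash
kubectl get ingress ingress        # ADDRESS should show the internal ALB DNS name
kubectl describe ingress ingress   # check the Events section for controller errors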
Adjust the Service configuration
Besides the LBC annotations on the Ingress, each Service also needs an LBC annotation that tells the ALB target group where to run its health check:
yaml
annotations:
  alb.ingress.kubernetes.io/healthcheck-path: /api/demo/actuator/health
service1.yaml
yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: service1
  name: service1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service1
  template:
    metadata:
      labels:
        app: service1
    spec:
      containers:
        - image: xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/service1:latest
          name: service1
          resources:
            requests:
              memory: "2Gi"
              cpu: "2"
            limits:
              memory: "2Gi"
              cpu: "2"
          # readiness probe: the pod only receives traffic once this passes
          readinessProbe:
            httpGet:
              path: /foo/actuator/health
              port: 8080
          # liveness probe: the container is restarted when this fails
          livenessProbe:
            httpGet:
              path: /foo/actuator/health
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: service1
  name: service1
  annotations:
    # health-check path for the AWS target group
    alb.ingress.kubernetes.io/healthcheck-path: /foo/actuator/health
spec:
  ports:
    - name: http
      port: 4200
      targetPort: 4200
  selector:
    app: service1
  type: ClusterIP
service2.yaml
yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: service2
  name: service2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: service2
  template:
    metadata:
      labels:
        app: service2
    spec:
      containers:
        - image: xxxxxxxxxxxx.dkr.ecr.us-east-1.amazonaws.com/service2:latest
          name: service2
          resources:
            requests:
              memory: "2Gi"
              cpu: "2"
            limits:
              memory: "2Gi"
              cpu: "2"
          # readiness probe: the pod only receives traffic once this passes
          readinessProbe:
            httpGet:
              path: /bar/actuator/health
              port: 8080
          # liveness probe: the container is restarted when this fails
          livenessProbe:
            httpGet:
              path: /bar/actuator/health
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: service2
  name: service2
  annotations:
    # health-check path for the AWS target group
    alb.ingress.kubernetes.io/healthcheck-path: /bar/actuator/health
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: service2
  type: ClusterIP
Deploy
bash
kubectl apply -f ./k8s
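Since the ALB is internal, it can only be reached from inside the VPC (for example from a bastion host or another pod). A simple smoke test, assuming such a host:
bash
# grab the ALB DNS name from the Ingress status
ALB=$(kubectl get ingress ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
# from a host inside the VPC:
curl -s "http://$ALB/foo/actuator/health"
curl -s "http://$ALB/bar/actuator/health"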
To clean up the resources:
bash
kubectl delete -f ./k8s
Summary
The AWS Load Balancer Controller has no path-rewrite feature, so plan your paths with security in mind. This post only covers creating an ALB from EKS and accessing it inside a private VPC. It does not cover putting a CDN in front of the API; generally, when the budget allows, you would put a CDN in front of the API endpoints, and note that AWS CloudFront (the CDN service) only supports internet-facing load balancers as origins. For whatever reason, the team maintaining the AWS Load Balancer Controller (LBC) simply refuses to add path rewriting. Service monitoring is not covered here either; perhaps in a future post.
Below is the Ingress configuration for creating a public (internet-facing) ALB:
yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # name of the ALB
    alb.ingress.kubernetes.io/load-balancer-name: apg2
    # only allow the CDN (CloudFront) to reach the load balancer
    alb.ingress.kubernetes.io/security-groups: cloudfront-only
    # let the controller manage the pod/node security group rules automatically
    alb.ingress.kubernetes.io/manage-backend-security-group-rules: "true"
    # internet-facing
    alb.ingress.kubernetes.io/scheme: internet-facing
    # route traffic directly to the pods
    alb.ingress.kubernetes.io/target-type: ip
spec:
  # use the alb IngressClass
  ingressClassName: alb
  rules:
    - http:
        paths:
          - backend:
              service:
                name: service1
                port:
                  number: 4200
            path: /foo
            pathType: Prefix
          - backend:
              service:
                name: service2
                port:
                  number: 8080
            path: /bar
            pathType: Prefix
That's it: the Ingress listens on plain HTTP, and the ALB only accepts traffic from CloudFront edge nodes, which makes the public ALB reasonably safe; with a CDN in front, practically nobody knows the real ALB address. A sketch of how that cloudfront-only security group could be created follows.
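The cloudfront-only security group referenced by alb.ingress.kubernetes.io/security-groups is not created by the controller. One way to build it is with the AWS-managed CloudFront origin-facing prefix list; this is a sketch with placeholder IDs, not the only option:
bash
# look up the managed prefix list containing CloudFront's origin-facing IP ranges
PL_ID=$(aws ec2 describe-managed-prefix-lists \
  --filters Name=prefix-list-name,Values=com.amazonaws.global.cloudfront.origin-facing \
  --query "PrefixLists[0].PrefixListId" --output text)
# create the security group and allow HTTP in only from CloudFront
SG_ID=$(aws ec2 create-security-group --group-name cloudfront-only \
  --description "Allow HTTP from CloudFront origin-facing ranges only" \
  --vpc-id <vpc-id> --query GroupId --output text)
# tag it so the LBC can resolve the group by its Name tag
aws ec2 create-tags --resources "$SG_ID" --tags Key=Name,Value=cloudfront-only
aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
  --ip-permissions "IpProtocol=tcp,FromPort=80,ToPort=80,PrefixListIds=[{PrefixListId=$PL_ID}]"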