Using a Service and Endpoints in one Kubernetes cluster to proxy a Service in another cluster, so in-cluster calls work normally

Prerequisite: the networks of the two clusters can reach each other.

Background: the customer provided two clusters in different network environments, but services in the two clusters need to call each other, hence this approach.

Cluster A (service a is not deployed; services b, c, and d are)

Cluster B (only service a is deployed)

1. Cluster A uses a Service to proxy cluster B's services a and a1

Use YAML like the following (in the Endpoints, fill in cluster B's physical-machine address and port; in the Service, use whatever port you like):

idaas-nginx-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: idaas-nginx
  namespace: common
spec:
  clusterIP: 10.233.31.230
  clusterIPs:
  - 10.233.31.230
  ports:
  - name: nginx
    port: 8676   # Service port for idaas-nginx; choose any port you like
    protocol: TCP
    targetPort: nginx
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
idaas-nginx-ep.yaml

apiVersion: v1
kind: Endpoints
metadata:
  name: idaas-nginx
  namespace: common
subsets:
- addresses:
  - ip: 172.16.0.2  # physical-machine (node) address in the idaas cluster
  ports:
  - name: nginx
    port: 28082   # NodePort of idaas-nginx in the idaas cluster
    protocol: TCP

Then just kubectl apply both files.
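The apply-and-verify steps can be sketched as follows (names and the namespace follow the YAML above; the curl test pod and its image are assumptions, and these commands naturally require a live cluster):

```shell
# Create the selector-less Service and the hand-written Endpoints in cluster A.
kubectl apply -f idaas-nginx-svc.yaml
kubectl apply -f idaas-nginx-ep.yaml

# The Endpoints should list the remote node address, not pod IPs.
kubectl get endpoints idaas-nginx -n common

# Smoke test from inside cluster A: the ClusterIP port should reach cluster B.
kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -sS http://idaas-nginx.common.svc:8676/
```

Because the Service has no selector, kube-proxy forwards traffic straight to the addresses listed in the hand-written Endpoints object.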

Creating a new Service from an existing Service

kubectl get svc -n xxx xxx -oyaml > svc-bak.yaml
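Before re-applying the dump, the server-managed fields (metadata.managedFields, creationTimestamp, resourceVersion, uid, ownerReferences, spec.clusterIP/clusterIPs, and status) have to be stripped, as in the examples below. A minimal sketch of that cleanup, using a plain dict in place of the parsed YAML (the helper and the trimmed sample are illustrative, not a real library API):

```python
# Fields the API server manages; they must not be carried into a new manifest.
SERVER_FIELDS = {"creationTimestamp", "managedFields", "ownerReferences",
                 "resourceVersion", "uid"}

def strip_server_fields(manifest: dict) -> dict:
    """Return a copy of a dumped Service manifest that is safe to re-apply."""
    clean = {k: v for k, v in manifest.items() if k != "status"}
    clean["metadata"] = {k: v for k, v in manifest.get("metadata", {}).items()
                        if k not in SERVER_FIELDS}
    spec = dict(manifest.get("spec", {}))
    spec.pop("clusterIP", None)    # drop so a fresh ClusterIP gets assigned
    spec.pop("clusterIPs", None)
    clean["spec"] = spec
    return clean

# Trimmed-down stand-in for the rabbitmq Service dump below.
svc = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "rabbitmq", "namespace": "rabbitmq-system",
                 "resourceVersion": "6871272", "managedFields": []},
    "spec": {"clusterIP": "10.233.21.224",
             "ports": [{"name": "amqp", "port": 5672}]},
    "status": {"loadBalancer": {}},
}
print(strip_server_fields(svc))
```

The same cleanup is what the hand-edited "new Service" below does by deleting those fields in the editor.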

The old Service:

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-04-24T08:38:46Z"
  labels:
    app.kubernetes.io/component: rabbitmq
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/part-of: rabbitmq
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:externalTrafficPolicy: {}
    manager: kubectl-edit
    operation: Update
    time: "2024-05-13T06:29:57Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app.kubernetes.io/component: {}
          f:app.kubernetes.io/name: {}
          f:app.kubernetes.io/part-of: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"8c6deebb-d6ec-4fce-8810-f04c08746a58"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":5552,"protocol":"TCP"}:
            .: {}
            f:appProtocol: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":5672,"protocol":"TCP"}:
            .: {}
            f:appProtocol: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":8552,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":8672,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":15672,"protocol":"TCP"}:
            .: {}
            f:appProtocol: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":15692,"protocol":"TCP"}:
            .: {}
            f:appProtocol: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app.kubernetes.io/name: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: manager
    operation: Update
    time: "2024-05-13T06:29:57Z"
  name: rabbitmq
  namespace: rabbitmq-system
  ownerReferences:
  - apiVersion: rabbitmq.com/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: RabbitmqCluster
    name: rabbitmq
    uid: 8c6deebb-d6ec-4fce-8810-f04c08746a58
  resourceVersion: "6871272"
  uid: 5b6416e9-fc46-43ac-9315-d9bb58f79082
spec:
  clusterIP: 10.233.21.224
  clusterIPs:
  - 10.233.21.224
  ports:
  - name: other-port
    port: 8672
    protocol: TCP
    targetPort: 8672
  - name: stream-port
    port: 8552
    protocol: TCP
    targetPort: 8552
  - appProtocol: amqp
    name: amqp
    port: 5672
    protocol: TCP
    targetPort: 5672
  - appProtocol: http
    name: management
    port: 15672
    protocol: TCP
    targetPort: 15672
  - appProtocol: rabbitmq.com/stream
    name: stream
    port: 5552
    protocol: TCP
    targetPort: 5552
  - appProtocol: prometheus.io/metrics
    name: prometheus
    port: 15692
    protocol: TCP
    targetPort: 15692
  selector:
    app.kubernetes.io/name: rabbitmq
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
The new Service; just modify it from the old one:
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: rabbitmq
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/part-of: rabbitmq
  name: rabbitmq-new
  namespace: rabbitmq-system
spec:
  # clusterIP is left out so the API server assigns a new one
  ports:
  - name: other-port
    port: 8672
    protocol: TCP
    targetPort: 8672
  - name: stream-port
    port: 8552
    protocol: TCP
    targetPort: 8552
  - appProtocol: amqp
    name: amqp
    port: 5672
    protocol: TCP
    targetPort: 5672
  - appProtocol: http
    name: management
    port: 15672
    protocol: TCP
    targetPort: 15672
  - appProtocol: rabbitmq.com/stream
    name: stream
    port: 5552
    protocol: TCP
    targetPort: 5552
  - appProtocol: prometheus.io/metrics
    name: prometheus
    port: 15692
    protocol: TCP
    targetPort: 15692
  selector:
    app.kubernetes.io/name: rabbitmq
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}

After editing, just run kubectl apply -f xxx.yaml.
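Since the new Service is type NodePort and no nodePort values are set, the API server allocates them; they can be read back with jsonpath (the manifest filename is an assumption; requires a live cluster):

```shell
kubectl apply -f rabbitmq-new-svc.yaml

# Print each port name with the NodePort the API server assigned to it.
kubectl get svc rabbitmq-new -n rabbitmq-system \
  -o jsonpath='{range .spec.ports[*]}{.name}{"\t"}{.nodePort}{"\n"}{end}'
```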
