Using a svc and ep (Endpoints) in one k8s cluster to proxy another cluster's svc, so in-cluster services can call it normally

Prerequisite: the networks of the two clusters can reach each other (node IPs are routable between them).

Background: the customer provided two separate clusters in different network environments, but services in the two clusters need to call each other, hence this approach.

Cluster A (service a is not deployed here, but services b, c, and d are)

Cluster B (only service a is deployed)

1. Cluster A uses a svc to proxy cluster B's services a and a1.

Use YAML like the following: in the Endpoints, write the node IP and port of the target service in cluster B; in the Service, you can pick whatever port you like. Because the Service has no selector, Kubernetes will not create or manage Endpoints for it, so we supply an Endpoints object with the same name by hand.

idaas-nginx-svc.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: idaas-nginx
  namespace: common
spec:
  clusterIP: 10.233.31.230   # optional: pin a ClusterIP from the service CIDR, or omit to auto-assign
  clusterIPs:
  - 10.233.31.230
  ports:
  - name: nginx
    port: 8676   # svc port for idaas-nginx; pick any port you like
    protocol: TCP
    targetPort: nginx
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```

idaas-nginx-ep.yaml:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: idaas-nginx
  namespace: common
subsets:
- addresses:
  - ip: 172.16.0.2  # node (physical machine) IP of the idaas cluster (cluster B)
  ports:
  - name: nginx
    port: 28082   # NodePort of idaas-nginx in the idaas cluster (cluster B)
    protocol: TCP
```

Then just kubectl apply both files.
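
A minimal sketch of applying and verifying the proxy from inside cluster A; the throwaway pod, image, and test URL are illustrative and assume the names and ports from the manifests above:

```bash
# Apply the selector-less Service and its manually maintained Endpoints
kubectl apply -f idaas-nginx-svc.yaml
kubectl apply -f idaas-nginx-ep.yaml

# The Endpoints should list the cluster-B node address 172.16.0.2:28082
kubectl get svc,ep -n common idaas-nginx

# Optional: call the proxied service through the new Service from inside cluster A
kubectl run svc-test --rm -it --restart=Never -n common --image=busybox -- \
  wget -qO- http://idaas-nginx.common.svc.cluster.local:8676
```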

Creating a new svc from an existing svc

kubectl get svc -n xxx xxx -o yaml > svc-bak.yaml
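
As a shortcut, if the yq tool (mikefarah v4) happens to be installed, the server-managed fields can be stripped from the backup in one pass instead of deleting them by hand; this is only a sketch of that option, and svc-new.yaml is an illustrative file name:

```bash
# Drop fields owned by the apiserver, plus the old ClusterIP, so the copy can be re-created
yq 'del(.metadata.creationTimestamp, .metadata.resourceVersion, .metadata.uid,
        .metadata.managedFields, .metadata.ownerReferences,
        .spec.clusterIP, .spec.clusterIPs, .status)' svc-bak.yaml > svc-new.yaml

# Then edit svc-new.yaml: change metadata.name (and spec.type, if needed) before applying it
```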

The old svc (raw kubectl get -o yaml output; note the server-managed fields such as managedFields, resourceVersion, uid, ownerReferences, and status, which must be removed from the copy):

```yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-04-24T08:38:46Z"
  labels:
    app.kubernetes.io/component: rabbitmq
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/part-of: rabbitmq
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:spec:
        f:externalTrafficPolicy: {}
    manager: kubectl-edit
    operation: Update
    time: "2024-05-13T06:29:57Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app.kubernetes.io/component: {}
          f:app.kubernetes.io/name: {}
          f:app.kubernetes.io/part-of: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"8c6deebb-d6ec-4fce-8810-f04c08746a58"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":5552,"protocol":"TCP"}:
            .: {}
            f:appProtocol: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":5672,"protocol":"TCP"}:
            .: {}
            f:appProtocol: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":8552,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":8672,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":15672,"protocol":"TCP"}:
            .: {}
            f:appProtocol: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":15692,"protocol":"TCP"}:
            .: {}
            f:appProtocol: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app.kubernetes.io/name: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: manager
    operation: Update
    time: "2024-05-13T06:29:57Z"
  name: rabbitmq
  namespace: rabbitmq-system
  ownerReferences:
  - apiVersion: rabbitmq.com/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: RabbitmqCluster
    name: rabbitmq
    uid: 8c6deebb-d6ec-4fce-8810-f04c08746a58
  resourceVersion: "6871272"
  uid: 5b6416e9-fc46-43ac-9315-d9bb58f79082
spec:
  clusterIP: 10.233.21.224
  clusterIPs:
  - 10.233.21.224
  ports:
  - name: other-port
    port: 8672
    protocol: TCP
    targetPort: 8672
  - name: stream-port
    port: 8552
    protocol: TCP
    targetPort: 8552
  - appProtocol: amqp
    name: amqp
    port: 5672
    protocol: TCP
    targetPort: 5672
  - appProtocol: http
    name: management
    port: 15672
    protocol: TCP
    targetPort: 15672
  - appProtocol: rabbitmq.com/stream
    name: stream
    port: 5552
    protocol: TCP
    targetPort: 5552
  - appProtocol: prometheus.io/metrics
    name: prometheus
    port: 15692
    protocol: TCP
    targetPort: 15692
  selector:
    app.kubernetes.io/name: rabbitmq
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```

The new svc, created by modifying the old one: delete creationTimestamp, resourceVersion, uid, managedFields, ownerReferences, status, and the clusterIP/clusterIPs values, then give it a new name.

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: rabbitmq
    app.kubernetes.io/name: rabbitmq
    app.kubernetes.io/part-of: rabbitmq
  name: rabbitmq-new   # renamed so it does not clash with the original Service
  namespace: rabbitmq-system
spec:
  clusterIP: ""        # cleared so the apiserver assigns a new ClusterIP
  ports:
  - name: other-port
    port: 8672
    protocol: TCP
    targetPort: 8672
  - name: stream-port
    port: 8552
    protocol: TCP
    targetPort: 8552
  - appProtocol: amqp
    name: amqp
    port: 5672
    protocol: TCP
    targetPort: 5672
  - appProtocol: http
    name: management
    port: 15672
    protocol: TCP
    targetPort: 15672
  - appProtocol: rabbitmq.com/stream
    name: stream
    port: 5552
    protocol: TCP
    targetPort: 5552
  - appProtocol: prometheus.io/metrics
    name: prometheus
    port: 15692
    protocol: TCP
    targetPort: 15692
  selector:
    app.kubernetes.io/name: rabbitmq
  sessionAffinity: None
  type: NodePort       # changed from ClusterIP so the new Service is reachable from outside the cluster
status:
  loadBalancer: {}
```

After editing, just run kubectl apply -f xxx.yaml.
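
A quick sanity check for the example above (a sketch; the NodePorts shown will be whatever your cluster assigns):

```bash
# The new Service should get its own ClusterIP and auto-assigned NodePorts
kubectl get svc rabbitmq-new -n rabbitmq-system

# Because the selector was kept, its endpoints should match the original rabbitmq Service
kubectl get endpoints rabbitmq-new -n rabbitmq-system
```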
