Using Karmada and vcluster Together

vcluster

Install the vcluster CLI

curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/v0.24.0-rc.3/vcluster-linux-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

vcluster --version

Create a virtual cluster

vcluster create kaka-k3 -f vcluster.yaml

cat vcluster.yaml

sync:
  toHost:
    persistentVolumes: # sync PVs to the host cluster
      enabled: true
    persistentVolumeClaims: # sync PVCs to the host cluster
      enabled: true
  fromHost:
    storageClasses: # sync the host cluster's StorageClasses
      enabled: true
controlPlane:
  distro:
    k3s: # use k3s as the virtual cluster's control plane
      enabled: true

Create the second cluster, kaka-k3-1, with the same command.

Cluster list:

We will use kaka-k3 and kaka-k3-1 to simulate the member clusters managed by Karmada.

You can also switch between vcluster contexts with the following commands:

vcluster ls
vcluster connect kaka-k3
vcluster disconnect

karmada

Installation

Here we use the karmadactl CLI to initialize the Karmada control plane.

First, download karmadactl:

wget https://github.com/karmada-io/karmada/releases/download/v1.13.0/karmadactl-linux-arm64.tgz
tar xvf karmadactl-linux-arm64.tgz
chmod +x karmadactl
mv karmadactl /usr/local/bin/
karmadactl version

karmadactl init --config karmada-init.yaml

cat karmada-init.yaml
apiVersion: config.karmada.io/v1alpha1
kind: KarmadaInitConfig
spec:
  karmadaCrds: "https://github.com/karmada-io/karmada/releases/download/v1.10.3/crds.tar.gz"
  etcd:
    local:
      imageRepository: "registry.k8s.io/etcd"
      imageTag: "3.5.13-0"
      initImage:
        imageRepository: "docker.io/library/alpine"
        imageTag: "3.19.1"
  components:
    karmadaAPIServer:
      imageRepository: "registry.k8s.io/kube-apiserver"
      imageTag: "v1.30.0"
    karmadaAggregatedAPIServer:
      imageRepository: "docker.io/karmada/karmada-aggregated-apiserver"
      imageTag: "v1.10.3"
    kubeControllerManager:
      imageRepository: "registry.k8s.io/kube-controller-manager"
      imageTag: "v1.30.0"
    karmadaControllerManager:
      imageRepository: "docker.io/karmada/karmada-controller-manager"
      imageTag: "v1.10.3"
    karmadaScheduler:
      imageRepository: "docker.io/karmada/karmada-scheduler"
      imageTag: "v1.10.3"
    karmadaWebhook:
      imageRepository: "docker.io/karmada/karmada-webhook"
      imageTag: "v1.10.3"

The installation process:

You can see that Karmada creates its own api-server, scheduler, controller-manager, etcd, and other components.

Joining member clusters

Karmada supports two join modes: push and pull. Because our members are vcluster virtual clusters, we can only join in pull mode, which requires running the karmada-agent inside each virtual cluster.

# switch to the virtual cluster's context
vcluster connect kaka-k3

# create a registration token
karmadactl token create --print-register-command --kubeconfig /etc/karmada/karmada-apiserver.config

# register with Karmada
karmadactl register 192.168.64.5:32443 --token 6mmyse.v6upph4subbqmsva --discovery-token-ca-cert-hash sha256:c9f6b3ebd53b786231a1c8d37744e6749696619b65e60d71762446deb8f6de0b --cluster-name=vcluster-1
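The --discovery-token-ca-cert-hash value follows the kubeadm convention: it is the SHA-256 digest of the cluster CA's public key in DER form. As a sketch (using a throwaway self-signed CA generated on the spot, not the real cluster CA), the hash can be reproduced with openssl:

```shell
# Generate a throwaway CA certificate purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -days 1 -subj "/CN=demo-ca" 2>/dev/null

# The hash is sha256 over the DER-encoded public key of the CA certificate.
hash=$(openssl x509 -in ca.crt -pubkey -noout \
  | openssl pkey -pubin -outform DER 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "sha256:$hash"
```

Against the real cluster you would run the same pipeline on the CA certificate embedded in the member cluster's kubeconfig.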

Repeat the same steps for kaka-k3-1. Afterwards, from the Karmada control plane you can see the two newly joined clusters: vcluster-1 and vcluster-2.

The mapping is as follows:

vcluster name    karmada name    host context
kaka-k3          vcluster-1      kubernetes-admin@kubernetes
kaka-k3-1        vcluster-2      kubernetes-admin@kubernetes

PropagationPolicy

Create the propagation policy:

kubectl apply -f propagationpolicy.yaml --kubeconfig /etc/karmada/karmada-apiserver.config

cat propagationpolicy.yaml

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: example-policy # The default namespace is `default`.
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx # If no namespace is specified, the namespace is inherited from the parent object scope.
  placement:
    clusterAffinity:
      clusterNames:
        - vcluster-1
        - vcluster-2
    replicaScheduling:
      replicaSchedulingType: Divided  # split replicas across clusters
      replicaDivisionPreference: Weighted  # distribute replicas by weight
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - vcluster-1
            weight: 1
          - targetCluster:
              clusterNames:
                - vcluster-2
            weight: 1 
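With replicaSchedulingType: Divided, Karmada splits the desired replica count across the selected clusters in proportion to the static weights (integer division, with any remainder handed out one replica at a time). A rough sketch of the arithmetic for the policy above, assuming 2 replicas and 1:1 weights (this is illustrative shell arithmetic, not Karmada's actual scheduler code):

```shell
# Proportional replica split for two clusters with static weights.
replicas=2
w1=1; w2=1
total=$((w1 + w2))

# Base allocation: floor(replicas * weight / total_weight) per cluster.
a1=$((replicas * w1 / total))
a2=$((replicas * w2 / total))

# Hand out any remainder one replica at a time.
rem=$((replicas - a1 - a2))
i=0
while [ "$rem" -gt 0 ]; do
  if [ $((i % 2)) -eq 0 ]; then a1=$((a1 + 1)); else a2=$((a2 + 1)); fi
  rem=$((rem - 1)); i=$((i + 1))
done

echo "vcluster-1=$a1 vcluster-2=$a2"   # each cluster gets 1 replica
```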

nginx-deployment

Create a test Deployment:

kubectl apply -f nginx-deployment.yaml --kubeconfig /etc/karmada/karmada-apiserver.config
cat nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
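The percentage-based rolling-update bounds in the strategy section resolve to absolute pod counts at rollout time: Kubernetes rounds maxSurge up and maxUnavailable down. For 2 replicas and 25%, that means at most one extra pod and zero pods allowed to be unavailable. A quick sketch of the arithmetic:

```shell
# How Kubernetes resolves percentage rolling-update bounds for this Deployment.
replicas=2
pct=25

# maxSurge: ceiling of replicas * 25%
surge=$(( (replicas * pct + 99) / 100 ))
# maxUnavailable: floor of replicas * 25%
unavail=$(( replicas * pct / 100 ))

echo "maxSurge=$surge maxUnavailable=$unavail"   # maxSurge=1 maxUnavailable=0
```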

Check the deployment distribution

You can see that the nginx Deployment has 2 replicas in total: one was scheduled to cluster vcluster-1 and the other to vcluster-2.
