Karmada and vcluster: Joint Operations

vcluster

Install the vcluster CLI

```bash
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/v0.24.0-rc.3/vcluster-linux-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

vcluster --version
```

Initialize the virtual clusters

```bash
vcluster create kaka-k3 -f vcluster.yaml

cat vcluster.yaml
```

```yaml
sync:
  toHost:
    persistentVolumes: # sync PVs to the host cluster
      enabled: true
    persistentVolumeClaims: # sync PVCs to the host cluster
      enabled: true
  fromHost:
    storageClasses: # sync the host cluster's StorageClasses
      enabled: true
controlPlane:
  distro:
    k3s: # use k3s as the virtual cluster's control plane
      enabled: true
```

Create a second cluster (kaka-k3-1) with the same command.

Cluster list:

We will use kaka-k3 and kaka-k3-1 to simulate the member clusters managed by Karmada.

You can also switch between vcluster contexts with the following commands:

```bash
vcluster ls
vcluster connect kaka-k3
vcluster disconnect
```

karmada

Installation

Here we use the karmadactl command line to initialize the Karmada control plane.

First, download karmadactl:

```bash
wget https://github.com/karmada-io/karmada/releases/download/v1.13.0/karmadactl-linux-arm64.tgz
tar xvf karmadactl-linux-arm64.tgz
chmod +x karmadactl
mv karmadactl /usr/local/bin/
karmadactl version
```

```bash
karmadactl init --config karmada-init.yaml
```

```yaml
apiVersion: config.karmada.io/v1alpha1
kind: KarmadaInitConfig
spec:
  karmadaCrds: "https://github.com/karmada-io/karmada/releases/download/v1.10.3/crds.tar.gz"
  etcd:
    local:
      imageRepository: "registry.k8s.io/etcd"
      imageTag: "3.5.13-0"
      initImage:
        imageRepository: "docker.io/library/alpine"
        imageTag: "3.19.1"
  components:
    karmadaAPIServer:
      imageRepository: "registry.k8s.io/kube-apiserver"
      imageTag: "v1.30.0"
    karmadaAggregatedAPIServer:
      imageRepository: "docker.io/karmada/karmada-aggregated-apiserver"
      imageTag: "v1.10.3"
    kubeControllerManager:
      imageRepository: "registry.k8s.io/kube-controller-manager"
      imageTag: "v1.30.0"
    karmadaControllerManager:
      imageRepository: "docker.io/karmada/karmada-controller-manager"
      imageTag: "v1.10.3"
    karmadaScheduler:
      imageRepository: "docker.io/karmada/karmada-scheduler"
      imageTag: "v1.10.3"
    karmadaWebhook:
      imageRepository: "docker.io/karmada/karmada-webhook"
      imageTag: "v1.10.3"
```

Installation process:

As you can see, Karmada creates its own api-server, scheduler, controller-manager, etcd, and other components.

Joining clusters

Clusters can join in either push mode or pull mode. Because our members are vcluster virtual clusters, we can only join in pull mode, which requires starting the karmada-agent inside each virtual cluster.

```bash
# switch to the virtual cluster's context
vcluster connect kaka-k3

# create a registration token
karmadactl token create --print-register-command --kubeconfig /etc/karmada/karmada-apiserver.config

# register with Karmada
karmadactl register 192.168.64.5:32443 --token 6mmyse.v6upph4subbqmsva --discovery-token-ca-cert-hash sha256:c9f6b3ebd53b786231a1c8d37744e6749696619b65e60d71762446deb8f6de0b --cluster-name=vcluster-1
```
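For reference, the `--discovery-token-ca-cert-hash` appears to follow the same convention as kubeadm (an assumption worth checking against the Karmada docs): a SHA-256 digest of the DER-encoded public key of the member cluster's CA certificate. A sketch with openssl, using a throwaway CA generated on the spot purely for illustration:

```shell
# generate a throwaway CA certificate for illustration; on a real member
# cluster you would hash its existing ca.crt instead
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=demo-ca" -days 1 2>/dev/null

# SHA-256 over the DER-encoded public key of the CA certificate
openssl x509 -pubkey -noout -in ca.crt \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -hex \
  | awk '{print "sha256:" $NF}'
```

The resulting `sha256:<64 hex chars>` string is what the register command expects.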

Repeat the same steps for kaka-k3-1. Afterwards, from the Karmada perspective, you can see the two newly joined clusters: vcluster-1 and vcluster-2.

The mapping is as follows:

| vcluster name | Karmada name | context on host |
| --- | --- | --- |
| kaka-k3 | vcluster-1 | kubernetes-admin@kubernetes |
| kaka-k3-1 | vcluster-2 | kubernetes-admin@kubernetes |

PropagationPolicy

Create the propagation policy:

```bash
kubectl apply -f propagationpolicy.yaml --kubeconfig /etc/karmada/karmada-apiserver.config

cat propagationpolicy.yaml
```

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: example-policy # the default namespace is `default`
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx # if no namespace is specified, it is inherited from the parent object scope
  placement:
    clusterAffinity:
      clusterNames:
        - vcluster-1
        - vcluster-2
    replicaScheduling:
      replicaSchedulingType: Divided  # split replicas across clusters
      replicaDivisionPreference: Weighted  # divide by weight
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - vcluster-1
            weight: 1
          - targetCluster:
              clusterNames:
                - vcluster-2
            weight: 1
```
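As a rough illustration of what `Divided` plus `Weighted` means (a sketch of proportional integer division, not Karmada's actual scheduler code): with static weights 1:1, the Deployment's replicas split evenly, and any remainder left over by integer division has to be handed out somewhere (here, arbitrarily, to the first cluster):

```shell
total=2           # replicas in the Deployment
w1=1; w2=1        # static weights for vcluster-1 / vcluster-2
sum=$((w1 + w2))

# each cluster gets floor(total * weight / sum)
r1=$(( total * w1 / sum ))
r2=$(( total * w2 / sum ))
# hand any remainder from integer division to the first cluster
r1=$(( total - r2 ))

echo "vcluster-1=$r1 vcluster-2=$r2"
```

With 2 replicas and weights 1:1 this prints `vcluster-1=1 vcluster-2=1`, matching the even split we see later.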

nginx-deployment

Create a test Deployment:

```bash
kubectl apply -f nginx-deployment.yaml --kubeconfig /etc/karmada/karmada-apiserver.config
```

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: Always
```

Check the distribution of the Deployment

As we can see, the nginx Deployment has 2 replicas in total: one was scheduled to cluster vcluster-1 (kaka-k3) and the other to cluster vcluster-2 (kaka-k3-1).
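One quick way to eyeball the spread is to collect the pod listings from each member cluster into one file (cluster name prefixed per line) and tally them with awk. The listing below is a hypothetical sample, not real cluster output:

```shell
# hypothetical combined pod listing: "<cluster> <pod> <status>" per line
cat <<'EOF' > pods.txt
vcluster-1 nginx-6b7f675f8-abcde Running
vcluster-2 nginx-6b7f675f8-fghij Running
EOF

# tally replicas per cluster
awk '{count[$1]++} END {for (c in count) print c, count[c]}' pods.txt | sort
```

For the sample above this prints one replica per cluster, matching the 1:1 static weights.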
