Karmada and vcluster, working together

vcluster

Install the vcluster CLI

bash
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/v0.24.0-rc.3/vcluster-linux-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

vcluster --version

Initialize the virtual clusters

vcluster create kaka-k3 -f vcluster.yaml

cat vcluster.yaml

yaml
sync:
  toHost:
    persistentVolumes: # sync PVs to the host cluster
      enabled: true
    persistentVolumeClaims: # sync PVCs to the host cluster
      enabled: true
  fromHost:
    storageClasses: # sync the host cluster's StorageClasses
      enabled: true
controlPlane:
  distro:
    k3s: # use k3s as the virtual cluster's control plane
      enabled: true

Create a second cluster with the same command.
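Concretely, the second member cluster can be created from the same config file; the name kaka-k3-1 here matches the mapping table later in this article:

```shell
# create the second virtual cluster, reusing vcluster.yaml
vcluster create kaka-k3-1 -f vcluster.yaml
```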

Cluster list

We will use kaka-k3 and kaka-k3-1 to simulate the member clusters managed by Karmada.

You can also use the following commands to switch between vcluster contexts:

bash
vcluster ls
vcluster connect kaka-k3
vcluster disconnect

karmada

Installation

Here we use the karmadactl CLI to initialize the Karmada control plane.

First, download karmadactl:

bash
wget https://github.com/karmada-io/karmada/releases/download/v1.13.0/karmadactl-linux-arm64.tgz
tar xvf karmadactl-linux-arm64.tgz
chmod +x karmadactl
mv karmadactl /usr/local/bin/
karmadactl --version

karmadactl init --config karmada-init.yaml

yaml
apiVersion: config.karmada.io/v1alpha1
kind: KarmadaInitConfig
spec:
  karmadaCrds: "https://github.com/karmada-io/karmada/releases/download/v1.10.3/crds.tar.gz"
  etcd:
    local:
      imageRepository: "registry.k8s.io/etcd"
      imageTag: "3.5.13-0"
      initImage:
        imageRepository: "docker.io/library/alpine"
        imageTag: "3.19.1"
  components:
    karmadaAPIServer:
      imageRepository: "registry.k8s.io/kube-apiserver"
      imageTag: "v1.30.0"
    karmadaAggregatedAPIServer:
      imageRepository: "docker.io/karmada/karmada-aggregated-apiserver"
      imageTag: "v1.10.3"
    kubeControllerManager:
      imageRepository: "registry.k8s.io/kube-controller-manager"
      imageTag: "v1.30.0"
    karmadaControllerManager:
      imageRepository: "docker.io/karmada/karmada-controller-manager"
      imageTag: "v1.10.3"
    karmadaScheduler:
      imageRepository: "docker.io/karmada/karmada-scheduler"
      imageTag: "v1.10.3"
    karmadaWebhook:
      imageRepository: "docker.io/karmada/karmada-webhook"
      imageTag: "v1.10.3"

Installation output:

As you can see, Karmada creates its own api-server, scheduler, controller-manager, etcd, and other components.
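These components can be inspected on the host cluster; a minimal check, assuming karmadactl init used its default karmada-system namespace:

```shell
# list the Karmada control-plane pods on the host cluster
kubectl get pods -n karmada-system
```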

Joining the clusters

There are two join modes: push and pull. Because our member clusters are vcluster virtual clusters, we can only join in pull mode, which requires running the karmada-agent inside each virtual cluster.

shell
# switch to the virtual cluster's context
vcluster connect kaka-k3

# create a registration token
karmadactl token create --print-register-command --kubeconfig /etc/karmada/karmada-apiserver.config

# register with Karmada
karmadactl register 192.168.64.5:32443 --token 6mmyse.v6upph4subbqmsva --discovery-token-ca-cert-hash sha256:c9f6b3ebd53b786231a1c8d37744e6749696619b65e60d71762446deb8f6de0b --cluster-name=vcluster-1

Repeat the same steps for kaka-k3-1. Afterwards, from the Karmada control plane's perspective, you can see the two newly joined clusters: vcluster-1 and vcluster-2.
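The joined member clusters can be listed against the Karmada API server (using the kubeconfig that karmadactl init wrote earlier):

```shell
# list member clusters registered with Karmada
kubectl get clusters --kubeconfig /etc/karmada/karmada-apiserver.config
```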

The mapping is as follows:

vcluster name    karmada name    on host
kaka-k3          vcluster-1      kubernetes-admin@kubernetes
kaka-k3-1        vcluster-2      kubernetes-admin@kubernetes

propagationpolicy

Create the propagation policy:

bash
kubectl apply -f propagationpolicy.yaml --kubeconfig /etc/karmada/karmada-apiserver.config

cat propagationpolicy.yaml

yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: example-policy # The default namespace is `default`.
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx # If no namespace is specified, the namespace is inherited from the parent object scope.
  placement:
    clusterAffinity:
      clusterNames:
        - vcluster-1
        - vcluster-2
    replicaScheduling:
      replicaSchedulingType: Divided  # split replicas across member clusters
      replicaDivisionPreference: Weighted  # distribute replicas by weight
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - vcluster-1
            weight: 1
          - targetCluster:
              clusterNames:
                - vcluster-2
            weight: 1 
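After applying, it is worth confirming that the Karmada API server accepted the policy; a quick check against the same kubeconfig:

```shell
# confirm the PropagationPolicy exists in the default namespace
kubectl get propagationpolicy example-policy --kubeconfig /etc/karmada/karmada-apiserver.config
```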

nginx-deployment

Create a test Deployment:

bash
kubectl apply -f nginx-deployment.yaml --kubeconfig /etc/karmada/karmada-apiserver.config
yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: Always

Check the Deployment's distribution

As you can see, the nginx Deployment has 2 replicas in total: one was scheduled to cluster vcluster-1 and the other to cluster vcluster-2.
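One way to confirm this is to list the pods inside each member cluster; a minimal check, assuming the vcluster names used above:

```shell
# each virtual cluster should be running one of the two replicas
vcluster connect kaka-k3 -- kubectl get pods -l app=nginx
vcluster connect kaka-k3-1 -- kubectl get pods -l app=nginx
```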
