Karmada and vcluster working together

vcluster

Install the vcluster CLI

```bash
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/v0.24.0-rc.3/vcluster-linux-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster

vcluster --version
```

Create the virtual clusters

vcluster create kaka-k3 -f vcluster.yaml

cat vcluster.yaml

```yaml
sync:
  toHost:
    persistentVolumes: # sync PVs to the host cluster
      enabled: true
    persistentVolumeClaims: # sync PVCs to the host cluster
      enabled: true
  fromHost:
    storageClasses: # sync the host cluster's StorageClasses
      enabled: true
controlPlane:
  distro:
    k3s: # use k3s as the virtual cluster's control plane
      enabled: true
```
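Once the cluster is up, the sync settings above can be sanity-checked from inside the virtual cluster; a minimal check (run after `vcluster connect kaka-k3` has switched the context):

```bash
# Inside the virtual cluster: StorageClasses synced from the host should appear
kubectl get storageclass

# PVCs created here are mirrored into the host's vcluster namespace by the toHost sync
kubectl get pvc -A
```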

Run the same command to create a second cluster.
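Concretely, the second member would be created like this (the name kaka-k3-1 matches the mapping table later in this post):

```bash
vcluster create kaka-k3-1 -f vcluster.yaml

# vcluster create connects to the new cluster by default; go back to the host context
vcluster disconnect
```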

Cluster list:

We will use kaka-k3 and kaka-k3-1 to simulate the member clusters managed by Karmada.

You can also switch between vcluster contexts with the following commands:

```bash
vcluster ls
vcluster connect kaka-k3
vcluster disconnect
```

karmada

Installation

Here we use the karmadactl CLI to initialize the Karmada control plane.

First, download karmadactl:

```bash
wget https://github.com/karmada-io/karmada/releases/download/v1.13.0/karmadactl-linux-arm64.tgz
tar xvf karmadactl-linux-arm64.tgz
chmod +x karmadactl
mv karmadactl /usr/local/bin/
karmadactl version
```

karmadactl init --config karmada-init.yaml

```yaml
apiVersion: config.karmada.io/v1alpha1
kind: KarmadaInitConfig
spec:
  karmadaCrds: "https://github.com/karmada-io/karmada/releases/download/v1.10.3/crds.tar.gz"
  etcd:
    local:
      imageRepository: "registry.k8s.io/etcd"
      imageTag: "3.5.13-0"
      initImage:
        imageRepository: "docker.io/library/alpine"
        imageTag: "3.19.1"
  components:
    karmadaAPIServer:
      imageRepository: "registry.k8s.io/kube-apiserver"
      imageTag: "v1.30.0"
    karmadaAggregatedAPIServer:
      imageRepository: "docker.io/karmada/karmada-aggregated-apiserver"
      imageTag: "v1.10.3"
    kubeControllerManager:
      imageRepository: "registry.k8s.io/kube-controller-manager"
      imageTag: "v1.30.0"
    karmadaControllerManager:
      imageRepository: "docker.io/karmada/karmada-controller-manager"
      imageTag: "v1.10.3"
    karmadaScheduler:
      imageRepository: "docker.io/karmada/karmada-scheduler"
      imageTag: "v1.10.3"
    karmadaWebhook:
      imageRepository: "docker.io/karmada/karmada-webhook"
      imageTag: "v1.10.3"
```

The installation process:

As shown, Karmada creates its own api-server, scheduler, controller-manager, etcd, and other components.
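A quick way to confirm this on the host (karmadactl init installs the control-plane components into the karmada-system namespace by default):

```bash
# Karmada control-plane pods running on the host cluster
kubectl get pods -n karmada-system
```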

Joining member clusters

There are two join modes: push and pull. Since our members are vcluster virtual clusters, we can only join in pull mode, which requires running the karmada-agent inside each virtual cluster.

```bash
# Switch to the virtual cluster's context
vcluster connect kaka-k3

# Create a registration token
karmadactl token create --print-register-command --kubeconfig /etc/karmada/karmada-apiserver.config

# Register with Karmada
karmadactl register 192.168.64.5:32443 --token 6mmyse.v6upph4subbqmsva --discovery-token-ca-cert-hash sha256:c9f6b3ebd53b786231a1c8d37744e6749696619b65e60d71762446deb8f6de0b --cluster-name=vcluster-1
```
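If registration succeeds, the agent should come up inside the member; karmadactl register deploys it into the member's karmada-system namespace by default:

```bash
# Run in the virtual cluster's context
kubectl get pods -n karmada-system
```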

Repeat the same steps for kaka-k3-1. Afterwards, from the Karmada view, you can see the two newly joined clusters: vcluster-1 and vcluster-2.
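Since Cluster is a cluster-scoped resource in the Karmada API, the members can also be listed from the command line:

```bash
# MODE should be Pull and READY should be True for both members
kubectl get clusters --kubeconfig /etc/karmada/karmada-apiserver.config
```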

The mapping is as follows:

| vcluster name | Karmada name | on host |
| --- | --- | --- |
| kaka-k3 | vcluster-1 | kubernetes-admin@kubernetes |
| kaka-k3-1 | vcluster-2 | kubernetes-admin@kubernetes |

PropagationPolicy

Create the propagation policy:

```bash
kubectl apply -f propagationpolicy.yaml --kubeconfig /etc/karmada/karmada-apiserver.config
```

cat propagationpolicy.yaml

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: example-policy # The default namespace is `default`.
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx # If no namespace is specified, the namespace is inherited from the parent object scope.
  placement:
    clusterAffinity:
      clusterNames:
        - vcluster-1
        - vcluster-2
    replicaScheduling:
      replicaSchedulingType: Divided  # split replicas across clusters
      replicaDivisionPreference: Weighted  # distribute replicas by weight
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - vcluster-1
            weight: 1
          - targetCluster:
              clusterNames:
                - vcluster-2
            weight: 1
```
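To see how the scheduler actually split the replicas, you can inspect the ResourceBinding that Karmada derives from the matched Deployment (the binding name used below follows the usual `<name>-<kind>` convention and is an assumption):

```bash
kubectl get resourcebinding -n default --kubeconfig /etc/karmada/karmada-apiserver.config

# spec.clusters lists the per-cluster replica assignment
kubectl get resourcebinding nginx-deployment -n default -o yaml --kubeconfig /etc/karmada/karmada-apiserver.config
```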

nginx-deployment

Create a test Deployment:

```bash
kubectl apply -f nginx-deployment.yaml --kubeconfig /etc/karmada/karmada-apiserver.config
```
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
```

Check the distribution

As expected, the nginx Deployment has 2 replicas in total: one was scheduled to cluster vcluster-1 (kaka-k3) and the other to cluster vcluster-2 (kaka-k3-1).
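The split can also be verified from each member's own point of view; `vcluster connect` can run a one-off command in a virtual cluster's context:

```bash
# Each member should report one ready replica of its share
vcluster connect kaka-k3 -- kubectl get deploy nginx
vcluster connect kaka-k3-1 -- kubectl get deploy nginx
```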
