A Handy Tool for Cloud-Native Data Backup and Restore --- [velero]

  1. Velero background: Velero is a Go-based tool for backing up and migrating cloud-native data, aimed mainly at k8s. I've used it, for example, to migrate one cluster's business data to another cluster without redeploying anything; all the historical business data is still there. Data involving PVCs and StatefulSets needs extra handling depending on your actual situation~

Prerequisites:

1. Two k8s clusters that can reach each other over the network

2. Object storage (MinIO here); both clusters must point at the same storage

3. Velero installed on both sides

Procedure

Step 1: Set up MinIO

I copied this straight off the web; the setup is easy, and any deployment method will do.

Create the MinIO working directory

 mkdir -p /home/application/minio

Write the docker-compose.yaml file

 vim /home/application/minio/docker-compose.yaml

version: '3'
services:
  minio:
    image: minio/minio:latest
    container_name: minio
    networks:
      - srebro
    ports:
      - 9000:9000   # S3 API
      - 9001:9001   # web console
    volumes:
      - /home/application/minio/data:/data
      - /etc/localtime:/etc/localtime
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: 'admin123'
    command: server /data --console-address :9001
    restart: unless-stopped

networks:
  srebro:
    driver: bridge

Start MinIO and create a bucket

docker-compose up -d

[root@localhost minio]# docker-compose ps
NAME                COMMAND                  SERVICE             STATUS              PORTS
minio               "/usr/bin/docker-ent..."   minio               running

Older MinIO versions have no button for creating a bucket in the web console; you can create one with the mc client instead.

mc config host add minio http://<minio-address>:9000 <ACCESS_KEY> <SECRET_KEY>
# then
mc mb minio/test   ## create the bucket this way
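
The velero install command below expects a bucket named velero; if it doesn't already exist, you can create it the same way (the minio alias comes from the mc config command above):

mc mb minio/velero   ## the name must match the --bucket flag passed to velero install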

Step 2: Install Velero and set up the backup service

# deploy on both cluster A and cluster B
 wget https://github.com/vmware-tanzu/velero/releases/download/v1.14.0/velero-v1.14.0-linux-amd64.tar.gz

 tar -xf velero-v1.14.0-linux-amd64.tar.gz

 mv velero-v1.14.0-linux-amd64/velero /usr/local/bin

# check the velero version
[root@openeuler ~]#  velero version
Client:
 Version: v1.14.0
 Git commit: 2fc6300f2239f250b40b0488c35feae59520f2d3
<error getting server version: namespaces "velero" not found>
The server-version error above is expected; the Velero server hasn't been deployed to the cluster yet.

Create the credentials file for the remote object storage (on both cluster A and cluster B; aws_access_key_id is the MinIO username, aws_secret_access_key is the MinIO password):

$ mkdir -p /home/application/velero/


 cat > /home/application/velero/velero-auth.txt << EOF
[default]
aws_access_key_id = admin
aws_secret_access_key = admin123
EOF
Deploy the server side to cluster A

Use --kubeconfig to choose the cluster to deploy into;

Use --namespace to choose the namespace to deploy into;

Use s3Url to set the remote storage URL used for backups; here I point it at the MinIO address.

velero install \
  --kubeconfig /root/.kube/config \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.5.5 \
  --bucket velero \
  --secret-file /home/application/velero/velero-auth.txt \
  --use-volume-snapshots=false \
  --namespace velero-system \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://<minio-address>:9000
  

Note that the installation needs two images, pulled from docker.io by default; that often fails from mainland China, so pull them through a proxy or on an overseas machine.
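
A minimal sketch of working around that: pull and export the two images on a machine that can reach docker.io, then import them on each node (assuming the nodes run containerd; the tags match the install command above):

# on a machine that can reach docker.io
docker pull velero/velero:v1.14.0
docker pull velero/velero-plugin-for-aws:v1.5.5
docker save velero/velero:v1.14.0 velero/velero-plugin-for-aws:v1.5.5 -o velero-images.tar

# copy velero-images.tar to each node, then import it (containerd assumed)
ctr -n k8s.io images import velero-images.tar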

Deploy the server side to cluster B

velero install \
  --kubeconfig /root/.kube/config \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.5.5 \
  --bucket velero \
  --secret-file /home/application/velero/velero-auth.txt \
  --use-volume-snapshots=false \
  --namespace velero-system \
  --backup-location-config region=minio,s3ForcePathStyle="true",s3Url=http://<minio-address>:9000
  
  
  
.......................... output below .........................................................
CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource
CustomResourceDefinition/backuprepositories.velero.io: attempting to create resource client
CustomResourceDefinition/backuprepositories.velero.io: created
CustomResourceDefinition/backups.velero.io: attempting to create resource
CustomResourceDefinition/backups.velero.io: attempting to create resource client
CustomResourceDefinition/backups.velero.io: created
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource
CustomResourceDefinition/backupstoragelocations.velero.io: attempting to create resource client
CustomResourceDefinition/backupstoragelocations.velero.io: created
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource
CustomResourceDefinition/deletebackuprequests.velero.io: attempting to create resource client
CustomResourceDefinition/deletebackuprequests.velero.io: created
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource
CustomResourceDefinition/downloadrequests.velero.io: attempting to create resource client
CustomResourceDefinition/downloadrequests.velero.io: created
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource
CustomResourceDefinition/podvolumebackups.velero.io: attempting to create resource client
CustomResourceDefinition/podvolumebackups.velero.io: created
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource
CustomResourceDefinition/podvolumerestores.velero.io: attempting to create resource client
CustomResourceDefinition/podvolumerestores.velero.io: created
CustomResourceDefinition/restores.velero.io: attempting to create resource
CustomResourceDefinition/restores.velero.io: attempting to create resource client
CustomResourceDefinition/restores.velero.io: created
CustomResourceDefinition/schedules.velero.io: attempting to create resource
CustomResourceDefinition/schedules.velero.io: attempting to create resource client
CustomResourceDefinition/schedules.velero.io: created
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource
CustomResourceDefinition/serverstatusrequests.velero.io: attempting to create resource client
CustomResourceDefinition/serverstatusrequests.velero.io: created
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource
CustomResourceDefinition/volumesnapshotlocations.velero.io: attempting to create resource client
CustomResourceDefinition/volumesnapshotlocations.velero.io: created
CustomResourceDefinition/datadownloads.velero.io: attempting to create resource
CustomResourceDefinition/datadownloads.velero.io: attempting to create resource client
CustomResourceDefinition/datadownloads.velero.io: created
CustomResourceDefinition/datauploads.velero.io: attempting to create resource
CustomResourceDefinition/datauploads.velero.io: attempting to create resource client
CustomResourceDefinition/datauploads.velero.io: created
Waiting for resources to be ready in cluster...
Namespace/velero-system: attempting to create resource
Namespace/velero-system: attempting to create resource client
Namespace/velero-system: created
ClusterRoleBinding/velero-velero-system: attempting to create resource
ClusterRoleBinding/velero-velero-system: attempting to create resource client
ClusterRoleBinding/velero-velero-system: created
ServiceAccount/velero: attempting to create resource
ServiceAccount/velero: attempting to create resource client
ServiceAccount/velero: created
Secret/cloud-credentials: attempting to create resource
Secret/cloud-credentials: attempting to create resource client
Secret/cloud-credentials: created
BackupStorageLocation/default: attempting to create resource
BackupStorageLocation/default: attempting to create resource client
BackupStorageLocation/default: created
Deployment/velero: attempting to create resource
Deployment/velero: attempting to create resource client
Deployment/velero: created
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero-system' to view the status.

Confirm on clusters A and B that the Velero server has started and is ready:

# cluster A
[root@k8s-master ~]# kubectl get pods -n velero-system 
NAME                      READY   STATUS    RESTARTS   AGE
velero-6cc6986575-h6r2k   1/1     Running   0          2m
# cluster B
[root@openeuler ~]# kubectl get pods -n velero-system 
NAME                      READY   STATUS    RESTARTS   AGE
velero-6cc6986575-hk6tc   1/1     Running   0          2m
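
It's also worth confirming that the BackupStorageLocation can actually reach MinIO; on either cluster, the phase should show Available:

velero backup-location get -n velero-system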
First, look at the resources in the namespace on cluster A that you want to back up:
kubectl get all -n default 

NAME                                READY   STATUS    RESTARTS   AGE
pod/pig-auth-66f5bcfd74-9qhlz       1/1     Running   0          4d
pod/pig-codegen-5865cd994b-g4rkd    1/1     Running   0          4d15h
pod/pig-gateway-7f754ffdbc-dhf72    1/1     Running   0          4d
pod/pig-monitor-5c5d67f57c-5gnwp    1/1     Running   0          4d15h
pod/pig-mysql-6c665c56c7-6jdq4      1/1     Running   0          4d15h
pod/pig-quartz-76fdbdf497-w9f6g     1/1     Running   0          4d15h
pod/pig-redis-554cfcc5cc-kfmv8      1/1     Running   0          4d15h
pod/pig-register-777df8f59b-lh7pt   1/1     Running   0          4d15h
pod/pig-ui-f48d64f76-wnpcx          1/1     Running   0          4d14h
pod/pig-upms-58d6f8448f-8njxd       1/1     Running   0          4d15h

NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
service/kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP                         5d2m
service/pig-auth       ClusterIP   10.96.16.113     <none>        3000/TCP                        4d15h
service/pig-codegen    ClusterIP   10.108.2.9       <none>        5002/TCP                        4d15h
service/pig-gateway    NodePort    10.110.236.0     <none>        9999:32750/TCP                  4d15h
service/pig-monitor    ClusterIP   10.106.84.163    <none>        5001/TCP                        4d15h
service/pig-mysql      NodePort    10.106.57.25     <none>        3306:30406/TCP                  4d15h
service/pig-quartz     ClusterIP   10.104.94.147    <none>        5007/TCP                        4d15h
service/pig-redis      ClusterIP   10.101.95.155    <none>        6379/TCP                        4d15h
service/pig-register   NodePort    10.108.162.125   <none>        8848:31458/TCP,9848:32186/TCP   4d15h
service/pig-ui         NodePort    10.97.53.70      <none>        80:32545/TCP                    4d14h
service/pig-upms       ClusterIP   10.100.129.94    <none>        4000/TCP                        4d15h

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pig-auth       1/1     1            1           4d15h
deployment.apps/pig-codegen    1/1     1            1           4d15h
deployment.apps/pig-gateway    1/1     1            1           4d15h
deployment.apps/pig-monitor    1/1     1            1           4d15h
deployment.apps/pig-mysql      1/1     1            1           4d15h
deployment.apps/pig-quartz     1/1     1            1           4d15h
deployment.apps/pig-redis      1/1     1            1           4d15h
deployment.apps/pig-register   1/1     1            1           4d15h
deployment.apps/pig-ui         1/1     1            1           4d14h
deployment.apps/pig-upms       1/1     1            1           4d15h

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/pig-auth-66f5bcfd74       1         1         1       4d15h
replicaset.apps/pig-codegen-5865cd994b    1         1         1       4d15h
replicaset.apps/pig-gateway-7f754ffdbc    1         1         1       4d15h
replicaset.apps/pig-monitor-5c5d67f57c    1         1         1       4d15h
replicaset.apps/pig-mysql-6c665c56c7      1         1         1       4d15h
replicaset.apps/pig-quartz-76fdbdf497     1         1         1       4d15h
replicaset.apps/pig-redis-554cfcc5cc      1         1         1       4d15h
replicaset.apps/pig-register-777df8f59b   1         1         1       4d15h
replicaset.apps/pig-ui-f48d64f76          1         1         1       4d14h
replicaset.apps/pig-upms-58d6f8448f       1         1         1       4d15h

The output above was lifted straight from someone else's blog; I can't post the real production details, but the principle is the same, so treat it purely as a reference.

Use the velero binary to create a backup request; --namespace specifies the namespace the Velero server runs in, and --include-namespaces specifies the namespaces to back up:

cat velero-bak.sh 
#!/bin/bash

DATE=$(date +%Y%m%d%H%M%S)
velero backup create default-${DATE} \
--include-cluster-resources=true \
--include-namespaces default \
--kubeconfig=/root/.kube/config \
--namespace velero-system
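
If you want recurring backups instead of one-off runs, velero also has a schedule command; a sketch with an illustrative name and a daily 3 AM cron expression:

velero schedule create default-daily \
--schedule="0 3 * * *" \
--include-cluster-resources=true \
--include-namespaces default \
--namespace velero-system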
Check the backup:

After the backup completes, verify that it succeeded.

Option 1:

velero backup get -n velero-system

Option 2:

Log in to MinIO and check whether backup data has appeared under the velero bucket.
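
If you'd rather check from the command line, the mc alias configured earlier works too; Velero stores backup objects under a backups/ prefix inside the bucket:

mc ls minio/velero/backups/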

$ kubectl get backups.velero.io -n velero-system 
NAME                     AGE
default-2025xxxxx   22s
View the backup logs:
 velero -n velero-system backup logs default-20240813102355
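
velero backup describe gives a richer summary than the logs (backup name taken from the log example above):

 velero -n velero-system backup describe default-20240813102355 --details
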
Run the restore on cluster B

First confirm that cluster B can see the backup you just created:

$ kubectl -n velero-system get backups.velero.io
NAME                     AGE
default-20240813102355   4m4s

Use the velero binary to create a restore request; --namespace specifies the namespace the Velero server runs in, and --from-backup specifies the backup to restore from:

velero restore create \
  --namespace velero-system \
  --kubeconfig /root/.kube/config \
  --from-backup default-2025xxxxxxx --wait
  

If you installed the Velero server into a custom namespace when backing up, the same namespace must be passed on restore:
-n velero-system

Restore request "default-2025xxxxxx" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
....
Restore completed with status: Completed. You may check for more information using the commands `velero restore describe default-20240813102355-20240813103010` and `velero restore logs default-20240813102355-20240813103010`.
View the restore logs:
$ velero -n velero-system restore logs default-20240813102355-20240813103010
Check the restored resources:
$  kubectl get all -n default 
NAME                                READY   STATUS    RESTARTS   AGE
pod/pig-auth-66f5bcfd74-9qhlz       1/1     Running   0          20s
pod/pig-codegen-5865cd994b-g4rkd    1/1     Running   0          20s
pod/pig-gateway-7f754ffdbc-dhf72    1/1     Running   0          20s
pod/pig-monitor-5c5d67f57c-5gnwp    1/1     Running   0          20s
pod/pig-mysql-6c665c56c7-6jdq4      1/1     Running   0          20s
pod/pig-quartz-76fdbdf497-w9f6g     1/1     Running   0          20s
pod/pig-redis-554cfcc5cc-kfmv8      1/1     Running   0          20s
pod/pig-register-777df8f59b-lh7pt   1/1     Running   0          20s
pod/pig-ui-f48d64f76-wnpcx          1/1     Running   0          20s
pod/pig-upms-58d6f8448f-8njxd       1/1     Running   0          20s
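
If the workloads should land in a different namespace on cluster B, velero restore create supports remapping; a sketch with an illustrative target namespace:

velero restore create \
  --namespace velero-system \
  --from-backup default-20240813102355 \
  --namespace-mappings default:default-restored --wait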

That's the data backup and restore done.

As mentioned earlier, business-linked data such as PVs or StatefulSet-backed databases needs extra handling,

and that has to be worked out case by case.
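
For the PV side specifically, one option Velero itself offers is file-system backup of pod volumes; a minimal sketch, assuming both clusters were installed with --use-node-agent (not included in the install commands above) and reusing the DATE variable from the backup script:

velero backup create default-fsb-${DATE} \
--include-namespaces default \
--default-volumes-to-fs-backup \
--namespace velero-system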
