Implementing Persistent Data Storage in K8s

I. Ways to create a ConfigMap (key/value data): from command-line literals, from a YAML file, from a specified file, or from a specified directory. Once created, a ConfigMap can be consumed as environment variables or mounted into a Pod as a volume.

1. Create from command-line literals

kubectl create configmap test1 --from-literal=hostname=10.244.255.254 --from-literal=port=80

kubectl get configmap

kubectl get configmap test1 -o yaml
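
The output should look roughly like this (UID, resourceVersion, creationTimestamp and other metadata omitted):

apiVersion: v1
data:
  hostname: 10.244.255.254
  port: "80"
kind: ConfigMap
metadata:
  name: test1
  namespace: default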

Create a test Pod:

vim nginx-deploy-cmd.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-cmd-test
  labels:
    app: nginx-cmd-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-cmd-test
  template:
    metadata:
      labels:
        app: nginx-cmd-test
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        env:
        - name: nginx_port
          valueFrom:
            configMapKeyRef:
              name: test1
              key: port
        - name: nginx_hostname
          valueFrom:
            configMapKeyRef:
              name: test1
              key: hostname

kubectl create -f nginx-deploy-cmd.yaml
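
To verify that both values were injected, the environment of the Deployment's Pod can be checked roughly like this (the deploy/ shorthand resolves to one of its Pods):

kubectl exec deploy/nginx-cmd-test -- env | grep nginx_

This should print nginx_port=80 and nginx_hostname=10.244.255.254.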

2. Create from a YAML file

vim configmap-test2.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: test2
data:
  mysql_host: "10.244.255.253"
  mysql_port: "3306"
  mysql_user: "root"

kubectl create -f configmap-test2.yaml

kubectl get configmap

kubectl get configmap test2 -o yaml

3. Create from a specified file

vim /etc/mongodb.conf

bind_ip=0.0.0.0
dbpath=/data/mongodb/db
logpath=/data/mongodb/log/mongodb.log
port=27017
logappend=true
fork=true
noauth=true

kubectl create configmap test3 --from-file=/etc/mongodb.conf

kubectl get configmap

kubectl get configmap test3 -o yaml

The key name (instead of the default file name) can be customized when creating from a file:

kubectl create configmap <configmap-name> --from-file=<key-name>=<file-path>
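
For example, the following sketch stores /etc/mongodb.conf under a custom key (the ConfigMap name test3-1 and key mongo.conf are illustrative):

kubectl create configmap test3-1 --from-file=mongo.conf=/etc/mongodb.conf
kubectl get configmap test3-1 -o yaml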

4. Create from a specified directory

mkdir configfile

cd configfile

vim a.conf

name = zhangsan
age = 18
cellphone = 18100001111

vim b.conf

name = lisi
age = 25
cellphone = 18922221111

kubectl create configmap test4 --from-file=configfile

kubectl get configmap

kubectl get configmap test4 -o yaml
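
Each file in the directory becomes a key named after the file, so the output should roughly contain:

apiVersion: v1
data:
  a.conf: |
    name = zhangsan
    age = 18
    cellphone = 18100001111
  b.conf: |
    name = lisi
    age = 25
    cellphone = 18922221111
kind: ConfigMap
metadata:
  name: test4
  namespace: default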

II. Using a ConfigMap after it has been created

1. Passing values through environment variables

vim appvar.yaml // define the ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: appvar
data:
  loglevel: warning
  logdir: /var/log/nginx

vim app-test.yaml // define the Pod

apiVersion: v1
kind: Pod
metadata:
  name: app-test
spec:
  containers:
  - name: app-test
    image: nginx:latest
    env:
    - name: APPLOG
      valueFrom:
        configMapKeyRef:
          name: appvar
          key: loglevel
    - name: APPDIR
      valueFrom:
        configMapKeyRef:
          name: appvar
          key: logdir

kubectl create -f appvar.yaml

kubectl create -f app-test.yaml

kubectl get pods

kubectl exec -it app-test -- bash

echo $APPLOG

echo $APPDIR

The newer envFrom field imports all keys of a ConfigMap as environment variables at once:

vim app-test-1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: app-test-1
spec:
  containers:
  - name: app-test-1
    image: nginx:latest
    envFrom:
    - configMapRef:
        name: appvar

kubectl create -f app-test-1.yaml

kubectl get pods

kubectl exec -it app-test-1 -- bash

echo $loglevel

echo $logdir

2. Mounting a ConfigMap into a Pod as a volume

vim app-config-files.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-conf-files
data:
  aconf: "name = zhangsan\nage = 18\ncellphone = 18100001111\n"
  bconf: "name = lisi\nage = 25\ncellphone = 18922221111\n"

vim app-config-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: app-config-pod
spec:
  containers:
  - name: app-config-pod
    image: nginx:latest
    volumeMounts:
    - name: abconf              # name of the volume to reference
      mountPath: /data          # directory to mount it to inside the Pod
  volumes:
  - name: abconf                # define the volume name
    configMap:
      name: config-conf-files   # reference the ConfigMap
      items:
      - key: aconf
        path: a.conf            # expose the aconf key as file a.conf
      - key: bconf
        path: b.conf            # expose the bconf key as file b.conf

kubectl create -f app-config-files.yaml

kubectl create -f app-config-pod.yaml

kubectl exec -it app-config-pod -- bash

cd /data

cat a.conf

cat b.conf
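
Based on the aconf and bconf values stored above, the expected file contents are:

a.conf:
name = zhangsan
age = 18
cellphone = 18100001111

b.conf:
name = lisi
age = 25
cellphone = 18922221111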

Updating the ConfigMap

vim appvar01.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: app-test
        image: nginx:latest
        env:
        - name: APPLOG
          valueFrom:
            configMapKeyRef:
              name: appvar
              key: loglevel
        - name: APPDIR
          valueFrom:
            configMapKeyRef:
              name: appvar
              key: logdir

vim appvar.yaml // define the ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: appvar
data:
  loglevel: warning
  logdir: /var/log/nginx

kubectl create -f appvar01.yaml
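
A minimal sketch of the update flow, assuming the Deployment above is running: environment variables are read only when a container starts, so after the ConfigMap changes the Pods have to be restarted before they see the new values.

kubectl edit configmap appvar                    # change a value, e.g. loglevel: info
kubectl rollout restart deployment/app-test      # restart the Pods so the env vars are re-read
kubectl exec deploy/app-test -- env | grep APP   # APPLOG/APPDIR should now show the new value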

III. Data Persistence

1. emptyDir

The most basic volume type is emptyDir. As the name implies, an emptyDir volume is an empty directory on the host, created when the Pod is assigned to a Node; no host directory or file needs to be specified, as it is allocated automatically. An emptyDir volume shares its lifecycle with the Pod: when the Pod is removed from the Node, the data in the emptyDir volume is permanently deleted as well.

2. hostPath

A hostPath volume mounts a file or directory from the host into a Pod. A type field can also be set on a hostPath volume; it supports several kinds, including files, directories, sockets, and so on. In effect, hostPath is the Kubernetes counterpart of Docker's directory mapping, except that Pods in K8s can be rescheduled: when a Pod moves to another node, it does not read the original directory across nodes, so its data can no longer be guaranteed.

3. Other volume types

1) gcePersistentDisk: stores the volume's data on a Persistent Disk (PD) provided by Google's public cloud. The contents of the PD are kept permanently; when the Pod is deleted the PD is only unmounted. A PD must be created before gcePersistentDisk can be used.

2) awsElasticBlockStore: similar to GCE, this volume type stores data on an Amazon EBS volume. The EBS volume must be created before awsElasticBlockStore can be used.

3) NFS: stores data in a shared directory exported by an NFS network file system (a direct-mount sketch follows this list).

4) iSCSI: mounts a directory on an iSCSI storage device into the Pod.

5) GlusterFS: mounts a directory from the open-source GlusterFS network file system into the Pod.
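
As a quick sketch of the NFS volume type from item 3), an NFS export can also be mounted into a Pod directly, without a PV/PVC. The Pod name here is illustrative; the server and path are the ones set up in the next section:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-direct-test          # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: nfs-vol
      mountPath: /usr/share/nginx/html
  volumes:
  - name: nfs-vol
    nfs:
      server: 192.168.180.210    # NFS server from section IV
      path: /nfsdata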

IV. Deploying NFS

1) NFS-based

Run on the k8s-master node:

yum -y install nfs-utils rpcbind

mkdir /nfsdata

vim /etc/exports

/nfsdata *(rw,sync,no_root_squash)

systemctl start rpcbind && systemctl enable rpcbind

systemctl start nfs-server && systemctl enable nfs-server

showmount -e

Run on k8s-node1 and k8s-node2:

yum -y install nfs-utils rpcbind

showmount -e 192.168.180.210
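
If the export is visible from the nodes, showmount should print something like:

Export list for 192.168.180.210:
/nfsdata *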

Create a PV (on the k8s-master / NFS server node):

mkdir /nfsdata/pv

Create the YAML file for the PV:

vim pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv
    server: 192.168.180.210

kubectl create -f pv.yaml

kubectl get pv

Create a PVC:

vim pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs

kubectl create -f pvc.yaml

kubectl get pvc -o wide

kubectl get pv,pvc
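
Once the claim has bound to pv1, both objects should show STATUS Bound, roughly (REASON/AGE columns trimmed):

NAME                   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS
persistentvolume/pv1   1Gi        RWO            Recycle          Bound    default/pvc1   nfs

NAME                         STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS
persistentvolumeclaim/pvc1   Bound    pv1      1Gi        RWO            nfs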

vim pod.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod1
  labels:
    app: pod1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod1
  template:
    metadata:
      labels:
        app: pod1
    spec:
      containers:
      - name: pod1
        image: nginx:latest
        volumeMounts:
        - name: mydata
          mountPath: /usr/share/nginx/html
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: pvc1
---
apiVersion: v1
kind: Service
metadata:
  name: svc-pod1
  labels:
    app: svc-pod1
spec:
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
  selector:
    app: pod1

kubectl create -f pod.yaml

kubectl get pods,svc

cd /nfsdata/pv

ls

vim index.html
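
Any test content will do; for example (the text itself is only illustrative):

echo "hello from the NFS-backed PV" > /nfsdata/pv/index.html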

kubectl get pods

kubectl exec -it pod/pod1-97d8949d8-58n6p -- bash

cat /usr/share/nginx/html/index.html
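
The same page should also be reachable from outside the cluster through the NodePort (any node IP works; 192.168.180.210 is the master address used above):

curl 192.168.180.210:30080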

2)emptyDir

vim emptydir.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-emptydir
spec:
  containers:
  - image: nginx
    name: emptydir-container
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: htdocs-volume
  volumes:
  - name: htdocs-volume
    emptyDir: {}

kubectl create -f emptydir.yaml

Check which node the Pod was scheduled on:

kubectl get pod -o wide

Find the container ID of the Pod:

kubectl describe pod test-pod-emptydir

On that node, inspect the container to see where the emptyDir volume is mounted from:

docker inspect 6dc61858aa5f

...(output truncated)...

"Mounts": [
    {
        "Type": "bind",
        "Source": "/var/lib/kubelet/pods/fca55719-e8ab-4a45-b7c7-7a58947780d3/volumes/kubernetes.io~empty-dir/htdocs-volume",
        "Destination": "/usr/share/nginx/html",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },

3)hostPath

vim hostpath.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-hostpath
spec:
  containers:
  - image: nginx
    name: hostpath-container
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: htdocs-volume
  volumes:
  - name: htdocs-volume
    hostPath:
      path: /data

kubectl create -f hostpath.yaml

Check which node the Pod was scheduled to:

kubectl get pods -o wide

Check the current response:

curl 10.244.169.134

Write a test page into the Pod (this lands in /data on the corresponding node):

kubectl exec -it test-pod-hostpath -- bash

echo "test page " > /usr/share/nginx/html/index.html

curl 10.244.169.134

Deploying a MySQL Service Based on PV and PVC

1. Create the PV and PVC

cd /nfsdata

mkdir mysql-pv

mkdir /mysql

cd /mysql

vim mysql-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfsdata/mysql-pv
    server: 192.168.180.210

kubectl create -f mysql-pv.yaml

kubectl get pv

vim mysql-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs

kubectl create -f mysql-pvc.yaml

kubectl get pvc,pv

vim mysql.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456.com"
        volumeMounts:
        - name: mysql-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-storage
        persistentVolumeClaim:
          claimName: mysql-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  ports:
  - name: mysql
    protocol: TCP
    port: 3306
    targetPort: 3306    # port MySQL listens on inside the container
    nodePort: 31306
  selector:
    app: mysql

kubectl create -f mysql.yaml

kubectl get pods,svc -o wide

kubectl exec -it pod/mysql-88558bbdf-hltxh -- bash

mysql -uroot -p123456.com

show databases;

create database test;

use test;

create table my_id(id int(4));

insert my_id value(99);

select * from my_id;

Simulate a MySQL failure

kubectl get pods -o wide

Shut down the corresponding node so that the Pod is recreated on another node.
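
After the Pod comes up on another node (the Pod name below is whatever kubectl get pods shows at that point), the data should still be there because it lives on the NFS-backed PV; a rough check:

kubectl get pods -o wide                    # note the new Pod name and node
kubectl exec -it <new-mysql-pod> -- mysql -uroot -p123456.com -e 'select * from test.my_id;'

The query should still return the row containing 99.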
