5. K8S: deploy, ds, nfs, and LoadBalancer

Kubernetes Storage Volumes in Depth, and Controllers Explained

Volumes: nfs

NFS use cases

  • 1. Share data between Pods running on different nodes
  • 2. Store data off-node, so it survives Pod rescheduling

Deploying the nfs-server

2.1 Install NFS support on every node of the K8S cluster
bash
apt -y install nfs-kernel-server

(Strictly speaking, only master231 needs the server package; the workers only need the NFS client bits, but on Ubuntu nfs-kernel-server pulls those in as a dependency, so installing it everywhere is the simplest option.)
2.2 Create the shared directory on the server
bash
[root@master231 pods]# mkdir -pv /yinzhengjie/data/nfs-server
2.3 Configure the nfs-server
bash
[root@master231 pods]# tail -1 /etc/exports
/yinzhengjie/data/nfs-server *(rw,no_root_squash)
2.4 Restart the service so the configuration takes effect
bash
[root@master231 ~]# systemctl enable --now nfs-server

[root@master231 ~]# systemctl restart nfs-server

[root@master231 ~]# exportfs
/yinzhengjie/data/nfs-server
        <world>
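Before mounting the export from Pods, it is worth confirming that a worker node can actually reach it; a quick sketch of that check (the server address 10.0.0.231 follows the examples above):

```shell
# On any worker node: list the exports published by the NFS server.
# If this hangs or errors, fix networking/rpcbind before creating Pods.
showmount -e 10.0.0.231

# Optionally test-mount the share by hand, write a file, then clean up.
mount -t nfs 10.0.0.231:/yinzhengjie/data/nfs-server /mnt
echo ok > /mnt/healthcheck.txt
umount /mnt
```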

Example: using an NFS volume in K8S

3.1 Write the resource manifest
yaml
[root@master231 volumes]# cat 06-pods-nfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-apps-v1
  labels:
    apps: xiuxian
spec:
  volumes:
  - name: data
    # Volume type: nfs
    nfs:
      # NFS server address
      server: 10.0.0.231
      # Path exported by the NFS server
      path: /yinzhengjie/data/nfs-server
  nodeName: worker232

  containers:
  - name: c1
    image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v1

    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html/

---

apiVersion: v1
kind: Pod
metadata:
  name: xiuxian-apps-v2
  labels:
    apps: xiuxian
spec:
  volumes:
  - name: data
    nfs:
      server: 10.0.0.231
      path: /yinzhengjie/data/nfs-server
  nodeName: worker233 # force the Pod to run on worker233
  containers:
  - name: c1
    image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2

    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html/

At this point, /usr/share/nginx/html/ inside both Pods' c1 containers and /yinzhengjie/data/nfs-server on the master231 node all point at the same shared directory.
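The sharing can be confirmed end to end with a quick sketch like the following (Pod names follow the manifest above; the Pod IP placeholders are whatever `kubectl get pods -o wide` reports):

```shell
# Write a page into the share on the NFS server (master231) ...
echo 'hello from nfs' > /yinzhengjie/data/nfs-server/index.html

# ... then both Pods should serve the same content, since they
# mount the same export.
kubectl get pods -o wide            # note the two Pod IPs
curl <IP-of-xiuxian-apps-v1>        # expected: hello from nfs
curl <IP-of-xiuxian-apps-v2>        # expected: hello from nfs
```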

WordPress persistence example

Requirements

  • Deploy WordPress on K8S, with the following constraints:
  • MySQL is pinned to the worker232 node (matching the manifest below, which also uses the host network there);
  • WordPress is pinned to the worker233 node, and must be reachable from a Windows host;
  • Verify that after deleting the MySQL and WordPress Pods, no data is lost and recovery takes only seconds. NFS works best for this.

Implementation steps

1. Create the data directories
bash
[root@master231 case-demo]# mkdir -pv /yinzhengjie/data/nfs-server/casedemo/wordpress/{wp,db}
mkdir: created directory '/yinzhengjie/data/nfs-server/casedemo'
mkdir: created directory '/yinzhengjie/data/nfs-server/casedemo/wordpress'
mkdir: created directory '/yinzhengjie/data/nfs-server/casedemo/wordpress/wp'
mkdir: created directory '/yinzhengjie/data/nfs-server/casedemo/wordpress/db'
2. Write the resource manifest
yaml
[root@master231 case-demo]# cat 02-po-svc-volumes-wordpress.yaml
apiVersion: v1
kind: Pod
metadata:
  name: db
  namespace: default
  labels:
    app: db
spec:
  volumes:
  - name: data
    nfs:
      server: 10.0.0.231
      path: /yinzhengjie/data/nfs-server/casedemo/wordpress/db

  # Pin the db to worker232 and use host networking
  nodeName: worker232
  hostNetwork: true
  containers:
  - name: db
    image: harbor250.oldboyedu.com/oldboyedu-db/mysql:8.0.36-oracle
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
    ports:
    - containerPort: 3306
      name: mysql-server
    args:
    - --character-set-server=utf8
    - --collation-server=utf8_bin
    - --default-authentication-plugin=mysql_native_password
    env:
    - name: MYSQL_ALLOW_EMPTY_PASSWORD
      value: "yes"
    - name: MYSQL_DATABASE
      value: "wordpress"
    - name: MYSQL_USER
      value: linux99
    - name: MYSQL_PASSWORD
      value: oldboyedu

---

apiVersion: v1
kind: Pod
metadata:
  name: wp
  namespace: default
  labels:
    app: wp
spec:
  volumes:
  - name: data
    nfs:
      server: 10.0.0.231
      path: /yinzhengjie/data/nfs-server/casedemo/wordpress/wp
  nodeName: worker233

  containers:
  - name: wp
    image: harbor250.oldboyedu.com/oldboyedu-wp/wordpress:6.7.1-php8.1-apache

    volumeMounts:
    - name: data
      mountPath: /var/www/html

    ports:
    - containerPort: 80
      name: web
    env:
    - name: WORDPRESS_DB_HOST
      value: "10.0.0.232"
    - name: WORDPRESS_DB_NAME
      value: "wordpress"
    - name: WORDPRESS_DB_USER
      value: linux99
    - name: WORDPRESS_DB_PASSWORD
      value: oldboyedu

---

apiVersion: v1
kind: Service
metadata:
  name: svc-wp
  namespace: default
spec:
  type: NodePort
  selector:
    app: wp
  ports:
  - port: 80
    targetPort: web
    nodePort: 30080
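The recovery requirement can be verified with a simple drill: delete both Pods and re-apply the manifest. Because all state lives on the NFS export, the new Pods come back with the same data (the manifest filename and ports follow the listing above):

```shell
# Install WordPress through the browser first, then simulate a failure:
kubectl delete pods db wp

# Recreate the Pods from the same manifest; the NFS-backed
# /var/lib/mysql and /var/www/html directories survive the deletion.
kubectl apply -f 02-po-svc-volumes-wordpress.yaml
kubectl get pods -o wide

# The site should come back with all posts and settings intact:
curl -I http://10.0.0.232:30080     # any node IP + the NodePort works
```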

The rs controller concept (background)

Like rc, rs controls the number of Pod replicas. rs is short for "replicasets". Compared with rc, rs is more lightweight and more capable (for example, it supports set-based label selectors). Deployment is implemented on top of rs.

The deploy controller (deploy shares most of its syntax with rc and rs)

deploy concept

deploy is short for "deployments". This controller does not manage Pods directly; under the hood it drives an rs controller, which in turn controls the Pods. Compared with rs, deploy supports declarative updates: change the manifest and run kubectl apply again, and the Pods are rolled over automatically, with no need to delete and re-apply. With rc and rs, editing the Pod template only affects newly created Pods, so you have to delete the old Pods yourself before the change shows up.

Hands-on example

2.1 Write the resource manifest: compared with rs it is just a matter of changing one field, so in practice there is no longer any reason to use rs or rc directly
yaml
[root@master231 deployments]# cat 01-deploy-xiuxian.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian
spec:
  replicas: 3
  selector:
    matchLabels:
      app: xiuxian

  template:
    metadata:
      labels:
        app: xiuxian
        version: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
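The declarative-update behavior described above can be seen in action by changing the image tag in the manifest (say, apps:v2 to apps:v1) between two applies; the commands below are the standard rollout workflow:

```shell
kubectl apply -f 01-deploy-xiuxian.yaml

# Edit the image: field in the manifest, then apply again -- the
# Deployment creates a new rs and rolls the Pods over automatically.
kubectl apply -f 01-deploy-xiuxian.yaml
kubectl rollout status deployment/deploy-xiuxian

# Revision history and rollback come for free with Deployments:
kubectl rollout history deployment/deploy-xiuxian
kubectl rollout undo deployment/deploy-xiuxian
```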

The ds controller (ds shares most of its syntax with deploy)

ds controller concept

ds is short for "daemonsets". This controller ensures that each worker node runs exactly one copy of the Pod. The typical use case is an agent that must be present on every node, such as zabbix-agent, node-exporter, kube-proxy, ...

Hands-on example

yaml
[root@master231 daemonsets]# cat 01-ds-xiuxian.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-xiuxian
  namespace: kube-public
spec:
  selector:
    matchLabels:
      app: xiuxian
  template:
    metadata:
      labels:
        app: xiuxian
        version: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
bash
[root@master231 daemonsets]# kubectl apply -f 01-ds-xiuxian.yaml
daemonset.apps/ds-xiuxian created

[root@master231 daemonsets]# kubectl get ds,pods -o wide -n kube-public
NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES                                                      SELECTOR
daemonset.apps/ds-xiuxian   2         2         2       2            2           <none>          17s   c1           registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2   app=xiuxian

NAME                   READY   STATUS    RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
pod/ds-xiuxian-6d9v8   1/1     Running   0          17s   10.100.2.121   worker233   <none>           <none>
pod/ds-xiuxian-mtnrf   1/1     Running   0          17s   10.100.1.63    worker232   <none>           <none>

Switching the kube-proxy mode to ipvs: a must-do optimization for production

1. Modify the kube-proxy configuration

bash
[root@master231 ~]# kubectl get configmap kube-proxy -n kube-system -o yaml | \
sed -e "s/strictARP: false/strictARP: true/" | \
sed -e 's#mode: ""#mode: "ipvs"#' | \
kubectl apply -f - -n kube-system

2. Delete the Pods so the configuration takes effect

bash
[root@master231 ~]# kubectl get pods -A -l k8s-app=kube-proxy -o wide
[root@master231 ~]# kubectl -n kube-system delete pods -l k8s-app=kube-proxy
[root@master231 ~]# kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
[root@master231 ~]# kubectl -n kube-system logs -f kube-proxy-g5sfd
I0715 08:54:25.173976       1 node.go:163] Successfully retrieved node IP: 10.0.0.233
I0715 08:54:25.174278       1 server_others.go:138] "Detected node IP" address="10.0.0.233"
I0715 08:54:25.194880       1 server_others.go:269] "Using ipvs Proxier"   # Clearly, the Service proxy mode is now ipvs.
I0715 08:54:25.194912       1 server_others.go:271] "Creating dualStackProxier for ipvs"

3. Inspect how ipvs implements Services (the rules are readable, and performance is better than iptables)

bash
[root@master231 ~]# apt -y install ipvsadm
[root@master231 ~]# kubectl get svc -A
[root@master231 ~]# kubectl -n kube-system describe svc kube-dns
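The ipvs rule set itself can be inspected with ipvsadm: each Service port becomes a virtual server and each Pod endpoint a real server underneath it. A sketch (the kube-dns ClusterIP depends on your cluster, so treat it as a placeholder):

```shell
# Dump the full ipvs table; every Service port shows up as a virtual
# server (rr = round-robin by default) with the Pod endpoints listed
# underneath as real servers.
ipvsadm -ln

# Narrow it down to a single Service, e.g. kube-dns on port 53:
ipvsadm -ln -t <cluster-ip-of-kube-dns>:53
```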

service: deploying MetalLB and using LoadBalancer

2. Deploy MetalLB

2.1 Set the kube-proxy mode to ipvs (done above)
2.2 Import the images on every node of the K8S cluster
bash
wget http://192.168.16.253/Resources/Kubernetes/Add-ons/metallb/v0.15.2/oldboyedu-metallb-controller-v0.15.2.tar.gz
wget http://192.168.16.253/Resources/Kubernetes/Add-ons/metallb/v0.15.2/oldboyedu-metallb-speaker-v0.15.2.tar.gz
for i in `ls -1 oldboyedu-metallb-*`;do docker load -i $i;done
2.3 Download the MetalLB manifest
bash
[root@master231 metallb]# wget https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml
2.4 Deploy MetalLB
bash
[root@master231 metallb]# kubectl apply -f metallb-native.yaml
[root@master231 metallb]# kubectl get deploy,rs,ds,pods -n metallb-system -o wide
2.5 Create the IP address pool
bash
[root@master231 metallb]# cat > metallb-ip-pool.yaml <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: jasonyin2020
  namespace: metallb-system
spec:
  addresses:
  # Change this to the range you assign to MetalLB; pick addresses your Windows host can reach (typically your VMware VMnet8 subnet).
  - 10.0.0.150-10.0.0.180
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: yinzhengjie
  namespace: metallb-system
spec:
  ipAddressPools:
  - jasonyin2020
EOF

[root@master231 metallb]# kubectl apply -f metallb-ip-pool.yaml

[root@master231 metallb]# kubectl get ipaddresspools.metallb.io -A
NAMESPACE        NAME           AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
metallb-system   jasonyin2020   true          false             ["10.0.0.150-10.0.0.180"]
2.6 Create a LoadBalancer Service to verify
yaml
[root@master231 metallb]# cat deploy-svc-xiuxian.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-xiuxian
spec:
  replicas: 3
  selector:
    matchLabels:
      apps: xiuxian
  template:
    metadata:
      labels:
        apps: xiuxian
        version: v1
    spec:
      containers:
      - name: c1
        image: registry.cn-hangzhou.aliyuncs.com/yinzhengjie-k8s/apps:v2
        ports:
        - containerPort: 80
          name: web
---
apiVersion: v1
kind: Service
metadata:
  name: svc-xiuxian
spec:
  type: LoadBalancer
  selector:
    apps: xiuxian
  ports:
  - port: 80
# A nodePort and an address from the pool are allocated automatically
bash
[root@master231 metallb]# kubectl apply -f deploy-svc-xiuxian.yaml

[root@master231 metallb]# kubectl get deploy,rs,po,svc -o wide --show-labels
NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE    SELECTOR       LABELS
service/kubernetes    ClusterIP      10.200.0.1       <none>        443/TCP        5d4h   <none>         component=apiserver,provider=kubernetes
service/svc-xiuxian   LoadBalancer   10.200.154.190   10.0.0.150    80:30848/TCP   3s     apps=xiuxian   <none>

[root@master231 metallb]# curl 10.0.0.150

Summary: a LoadBalancer Service is reachable three ways: via the ClusterIP (inside the cluster), via the NodePort (any node IP), and via the external IP handed out from the MetalLB address pool. All three paths load-balance across the backing Pods.

Changing the Service NodePort port range

1. By default the NodePort range of a svc is 30000-32767

2. Modify the api-server configuration file

bash
[root@master231 ~]# cat /etc/kubernetes/manifests/kube-apiserver.yaml  # edit this file
apiVersion: v1
kind: Pod
metadata:
  ...
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=3000-50000
    ...

3. Make kubelet hot-reload the static Pod manifest directory

bash
[root@master231 ~]# mv /etc/kubernetes/manifests/kube-apiserver.yaml /opt/
[root@master231 ~]# mv /opt/kube-apiserver.yaml /etc/kubernetes/manifests/
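Moving the manifest out of the static-Pod directory and back in forces kubelet to recreate the kube-apiserver Pod. Whether the new flag actually took effect can be checked directly:

```shell
# Wait for the api-server Pod to come back ...
kubectl -n kube-system get pods -l component=kube-apiserver

# ... then confirm the flag is present in the running Pod spec
# (the mirror-Pod name suffix is the node name, master231 here):
kubectl -n kube-system get pod kube-apiserver-master231 -o yaml \
  | grep service-node-port-range
```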

4. Create a test case

yaml
[root@master231 services]# cat 03-svc-LoadBalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-xiuxian
spec:
  type: LoadBalancer
  selector:
    apps: xiuxian
  ports:
  - port: 80
    nodePort: 9090
bash
[root@master231 services]# kubectl apply -f 03-svc-LoadBalancer.yaml

[root@master231 services]# kubectl get svc
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)       AGE
kubernetes    ClusterIP      10.200.0.1       <none>        443/TCP       5d5h
svc-xiuxian   LoadBalancer   10.200.171.112   10.0.0.150    80:9090/TCP   6s
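With the widened range in place, 9090 is now accepted as a nodePort, and the Service answers on it at any node IP as well as on port 80 at the MetalLB external IP (addresses follow the examples above):

```shell
# NodePort 9090 on any cluster node, or port 80 on the MetalLB IP:
curl -I http://10.0.0.231:9090
curl -I http://10.0.0.150
```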

Command-line reference

bash
# Change the number of Pod replicas
kubectl scale deployment xixi --replicas=1

# List Deployments
kubectl get deploy -o wide

# List DaemonSets
kubectl get ds -o wide