k8s Cluster Deployment
k8s environment overview
Hostname | IP | Role |
---|---|---|
harbor.rin.org | 172.25.254.200 | harbor registry |
k8s-master.rin.org | 172.25.254.100 | master, k8s control-plane node |
k8s-node1.rin.org | 172.25.254.10 | worker, k8s worker node |
k8s-node2.rin.org | 172.25.254.20 | worker, k8s worker node |
Configuring the harbor registry
Install docker:

bash
dnf install *.rpm -y --allowerasing
Edit the docker unit file:
bash
vim /lib/systemd/system/docker.service
Add the parameter:
bash
--iptables=true
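For reference, the flag belongs on the ExecStart line of the unit; a sketch of the relevant section (the dockerd path and existing flags below are from the stock unit file and may differ for this rpm set):

```ini
[Service]
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --iptables=true
```

After editing, run `systemctl daemon-reload` before restarting docker.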

Stop the firewall:
bash
systemctl stop firewalld.service
Start docker:
bash
systemctl start docker
Unpack the harbor bundle:

bash
tar xzf harbor-offline-installer-v2.5.4.tgz
Create a directory for the certificates:
bash
mkdir -p /data/certs
Generate a self-signed certificate:
bash
openssl req -newkey rsa:4096 \
-nodes -sha256 -keyout /data/certs/rin.key \
-addext "subjectAltName = DNS:www.rin.com" \
-x509 -days 365 -out /data/certs/rin.crt
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:guangxi
Locality Name (eg, city) [Default City]:nanning
Organization Name (eg, company) [Default Company Ltd]:k8s
Organizational Unit Name (eg, section) []:harbor
Common Name (eg, your name or your server's hostname) []:www.rin.com
Email Address []:rin@rin.com
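The same certificate can also be generated non-interactively by supplying the answers above through -subj; a sketch, writing to a scratch directory instead of /data/certs (-addext requires OpenSSL 1.1.1 or newer):

```shell
mkdir -p /tmp/certs-demo    # stand-in for /data/certs
openssl req -newkey rsa:4096 -nodes -sha256 \
    -keyout /tmp/certs-demo/rin.key \
    -subj "/C=CN/ST=guangxi/L=nanning/O=k8s/OU=harbor/CN=www.rin.com/emailAddress=rin@rin.com" \
    -addext "subjectAltName = DNS:www.rin.com" \
    -x509 -days 365 -out /tmp/certs-demo/rin.crt
# confirm the subject made it into the certificate
openssl x509 -in /tmp/certs-demo/rin.crt -noout -subject
```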
Edit the harbor configuration file:
bash
cp harbor.yml.tmpl harbor.yml
vim harbor.yml
bash
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: www.rin.com
# https related config
https:
# https port for harbor, default is 443
port: 443
# The path of cert and key files for nginx
certificate: /data/certs/rin.crt
private_key: /data/certs/rin.key
# The initial password of Harbor admin
# It only works in first time to install harbor
# Remember Change the admin password from UI after launching Harbor.
harbor_admin_password: rin
# The default data volume
data_volume: /data
Run the installer script:
bash
./install.sh --with-chartmuseum

Access the harbor UI in a browser:
Create the project directories:
Disable swap on all nodes
Configure this on the master and every node:
bash
systemctl mask swap.target
bash
vim /etc/fstab

Reload the systemd configuration:
bash
systemctl daemon-reload
Check again after a reboot:
No output from swapon means swap is fully disabled
bash
swapon -s
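The /etc/fstab edit itself is just commenting out the swap entry; the logic can be sketched against sample content without touching a real system (the device path below is an assumption for illustration):

```shell
# sample /etc/fstab content
fstab='UUID=abcd-1234 /    xfs  defaults 0 0
/dev/mapper/rhel-swap none swap defaults 0 0'

# on a real node the equivalent one-liner would be:
#   sed -i '/[[:space:]]swap[[:space:]]/s/^/#/' /etc/fstab && swapoff -a
disabled=$(printf '%s\n' "$fstab" | sed '/[[:space:]]swap[[:space:]]/s/^/#/')
printf '%s\n' "$disabled"
```

Only the swap line gains a leading #; other filesystems stay untouched.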
Configuring the remaining nodes (master and workers)
Stop the firewall:
bash
systemctl stop firewalld.service
From the harbor host (172.25.254.200), copy the packages to every node:

bash
for i in 10 20 100;
do scp -r /mnt/packages/rpm/docker root@172.25.254.$i:/mnt/;
done
Install docker, letting dnf remove conflicting packages:
bash
dnf install *.rpm -y --allowerasing
Edit the docker unit file:
bash
vim /lib/systemd/system/docker.service
Add the parameter:
bash
--iptables=true

After editing, push the unit file to all nodes:
bash
for i in 10 20 100;
do scp -r /lib/systemd/system/docker.service root@172.25.254.$i:/lib/systemd/system/docker.service;
done
Distribute the certificate:
From the harbor node where the certificate was generated, hand it out to every node:
bash
for i in 10 20 100;
do ssh root@172.25.254.$i "mkdir -p /etc/docker/certs.d/www.rin.com";
scp /data/certs/rin.crt root@172.25.254.$i:/etc/docker/certs.d/www.rin.com/ca.crt;
done
Set the default docker registry:
bash
vim /etc/docker/daemon.json
bash
{
"registry-mirrors": ["https://www.rin.com"]
}
Push the daemon.json to every node:
bash
for i in 10 20 100;
do scp -r /etc/docker/daemon.json root@172.25.254.$i:/etc/docker/daemon.json;
done
Start docker on every node:
bash
for i in 10 20 100;
do ssh root@172.25.254.$i "systemctl start docker";
done
Verify on each node that the registry mirror is configured:
bash
docker info

Configure name resolution:
bash
vim /etc/hosts
bash
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.100 master.rin.com
172.25.254.200 harbor.rin.com
172.25.254.10 node1.rin.com
172.25.254.20 node2.rin.com
Distribute /etc/hosts to all nodes:
bash
for i in 10 20 100;
do scp -r /etc/hosts root@172.25.254.$i:/etc/hosts;
done
Log in to the harbor registry:
bash
docker login www.rin.com

Installing the K8S deployment tools
Install cri-docker on all nodes

Copy the tools directory to every node (master and workers):
bash
for i in 10 20 100;
do scp -r /mnt/k8s-tools root@172.25.254.$i:/mnt;
done
Install the two rpm packages it contains:
bash
dnf install *.rpm -y
Edit the cri-docker unit file:
bash
vim /lib/systemd/system/cri-docker.service
bash
--network-plugin=cni --pod-infra-container-image=www.rin.com/k8s/pause:3.9
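The two flags are appended to the unit's ExecStart line; a sketch of the resulting section (the cri-dockerd path and base flag are from the stock unit file and may differ for this rpm):

```ini
[Service]
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --pod-infra-container-image=www.rin.com/k8s/pause:3.9
```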

Push the edited unit file to every node:
bash
for i in 10 20 100;
do scp -r /lib/systemd/system/cri-docker.service root@172.25.254.$i:/lib/systemd/system/cri-docker.service;
done
Start the cri-docker service:
bash
systemctl start docker
bash
systemctl enable --now cri-docker.service
bash
systemctl start cri-docker.socket
systemctl start cri-docker
systemctl status cri-docker
Unpack the k8s-1.30.tar.gz bundle:
Install the dependency:
bash
dnf install libnetfilter_conntrack -y
Then install all of the unpacked rpm packages
cpp
[root@master k8s-1.30]# ls
k8s-1.30.tar.gz
[root@master k8s-1.30]# tar -xzf k8s-1.30.tar.gz
[root@master k8s-1.30]# ls
conntrack-tools-1.4.7-2.el9.x86_64.rpm kubernetes-cni-1.4.0-150500.1.1.x86_64.rpm
cri-dockerd-0.3.14-3.el8.x86_64.rpm libcgroup-0.41-19.el8.x86_64.rpm
cri-tools-1.30.1-150500.1.1.x86_64.rpm libnetfilter_conntrack-1.0.9-1.el9.x86_64.rpm
k8s-1.30.tar.gz libnetfilter_cthelper-1.0.0-22.el9.x86_64.rpm
kubeadm-1.30.0-150500.1.1.x86_64.rpm libnetfilter_cttimeout-1.0.0-19.el9.x86_64.rpm
kubectl-1.30.0-150500.1.1.x86_64.rpm libnetfilter_queue-1.0.5-1.el9.x86_64.rpm
kubelet-1.30.0-150500.1.1.x86_64.rpm socat-1.7.4.1-5.el9.x86_64.rpm
Copy this directory to the other nodes:
bash
for i in 10 20 100;
do scp -r /mnt/k8s-tools root@172.25.254.$i:/mnt/;
done
Install every rpm in the directory:
bash
dnf install *.rpm -y
Enable kubectl command completion
bash
dnf install bash-completion -y
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
Load the images:
bash
docker load -i k8s_docker_images-1.30.tar
Tag the images:
bash
docker images | awk '/google/{ print $1":"$2}' \
| awk -F "/" '{system("docker tag "$0" www.rin.com/k8s/"$3)}'
Push the images:
bash
docker images | awk '/k8s/{system("docker push "$1":"$2)}'
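The tag pipeline can be dry-run before touching the registry by swapping system() for print and feeding it sample `docker images` lines (the registry prefix below is an assumption for illustration):

```shell
# simulate the retag step on two sample image rows
cmds=$(printf '%s\n' \
    'registry.aliyuncs.com/google_containers/kube-apiserver   v1.30.0   abc1   1y   117MB' \
    'registry.aliyuncs.com/google_containers/pause            3.9       abc2   1y   744kB' |
  awk '/google/{print $1":"$2}' |
  awk -F "/" '{print "docker tag "$0" www.rin.com/k8s/"$3}')
printf '%s\n' "$cmds"
```

Each output line is exactly the docker tag command that system() would have executed.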
Cluster initialization
#Start the kubelet service
bash
systemctl enable --now kubelet.service
Initialize:
bash
kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--image-repository www.rin.com/k8s \
--kubernetes-version v1.30.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock
On success the output looks like this:
cpp
[root@master mnt]# kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository www.rin.com/k8s --kubernetes-version v1.30.0 --cri-socket=unix:///var/run/cri-dockerd.sock
[init] Using Kubernetes version: v1.30.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master.rin.com] and IPs [10.96.0.1 172.25.254.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master.rin.com] and IPs [172.25.254.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master.rin.com] and IPs [172.25.254.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
[kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.50323907s
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 7.007327396s
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master.rin.com as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master.rin.com as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: a4l2fc.ahmfiubi738p3iup
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.25.254.100:6443 --token a4l2fc.ahmfiubi738p3iup \
--discovery-token-ca-cert-hash sha256:b99fd92564901bbe4c29b376a8616224ebc05279bf4212f775aaa985592cf169
#Point KUBECONFIG at the cluster admin config
bash
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
bash
export KUBECONFIG=/etc/kubernetes/admin.conf
echo $KUBECONFIG
kubectl get nodes

bash
kubectl get pod -A

If initialization fails, clear out the residual data and try again:
bash
[root@master mnt]# rm -rf /etc/cni/net.d
[root@master mnt]# rm -rf /var/lib/etcd/*
[root@master mnt]# systemctl restart docker cri-docker
[root@master mnt]# kubeadm reset -f --cri-socket=unix:///var/run/cri-dockerd.sock
[root@master mnt]# lsof -i :6443 | grep -v "PID" | awk '{print $2}' | xargs -r kill -9
[root@master mnt]# lsof -i :10259 | grep -v "PID" | awk '{print $2}' | xargs -r kill -9
[root@master mnt]# lsof -i :10257 | grep -v "PID" | awk '{print $2}' | xargs -r kill -9
[root@master mnt]# lsof -i :10250 | grep -v "PID" | awk '{print $2}' | xargs -r kill -9
[root@master mnt]# lsof -i :2379 | grep -v "PID" | awk '{print $2}' | xargs -r kill -9
[root@master mnt]# lsof -i :2380 | grep -v "PID" | awk '{print $2}' | xargs -r kill -9
[root@master mnt]# rm -rf /etc/kubernetes/manifests/*
[root@master mnt]# rm -rf /etc/kubernetes/pki/*
[root@master mnt]# rm -rf /var/lib/etcd/*
[root@master mnt]# rm -rf /var/lib/kubelet/*
[root@master mnt]# rm -rf /var/lib/kube-proxy/*
[root@master mnt]# rm -rf /var/run/kubernetes/*
[root@master mnt]# systemctl restart kubelet
[root@master mnt]# systemctl restart docker cri-docker
Re-initialize:
bash
kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--image-repository www.rin.com/k8s \
--kubernetes-version v1.30.0 \
--cri-socket=unix:///var/run/cri-dockerd.sock
Installing the flannel network plugin
#Download the flannel yaml deployment manifest
bash
wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
Load the image:
bash
docker load -i flannel-0.25.5.tag.gz
cpp
[root@master k8s-tools]# ls
cri-dockerd-0.3.14-3.el8.x86_64.rpm k8s k8s_docker_images-1.30.tar libcgroup-0.41-19.el8.x86_64.rpm
flannel-0.25.5.tag.gz k8s-1.30 kube-flannel.yml
[root@master k8s-tools]# docker load -i flannel-0.25.5.tag.gz
ef7a14b43c43: Loading layer [==================================================>] 8.079MB/8.079MB
1d9375ff0a15: Loading layer [==================================================>] 9.222MB/9.222MB
4af63c5dc42d: Loading layer [==================================================>] 16.61MB/16.61MB
2b1d26302574: Loading layer [==================================================>] 1.544MB/1.544MB
d3dd49a2e686: Loading layer [==================================================>] 42.11MB/42.11MB
7278dc615b95: Loading layer [==================================================>] 5.632kB/5.632kB
c09744fc6e92: Loading layer [==================================================>] 6.144kB/6.144kB
0a2b46a5555f: Loading layer [==================================================>] 1.923MB/1.923MB
5f70bf18a086: Loading layer [==================================================>] 1.024kB/1.024kB
601effcb7aab: Loading layer [==================================================>] 1.928MB/1.928MB
Loaded image: flannel/flannel:v0.25.5
21692b7dc30c: Loading layer [==================================================>] 2.634MB/2.634MB
Loaded image: flannel/flannel-cni-plugin:v1.5.1-flannel1
Create the project repository:

#Push the images to the registry
bash
docker tag flannel/flannel:v0.25.5 www.rin.com/flannel/flannel:v0.25.5
docker push www.rin.com/flannel/flannel:v0.25.5
docker tag flannel/flannel-cni-plugin:v1.5.1-flannel1 www.rin.com/flannel/flannel-cni-plugin:v1.5.1-flannel1
docker push www.rin.com/flannel/flannel-cni-plugin:v1.5.1-flannel1
The push looks like this:
cpp
[root@master k8s-tools]# docker tag flannel/flannel:v0.25.5 www.rin.com/flannel/flannel:v0.25.5
[root@master k8s-tools]# docker push www.rin.com/flannel/flannel:v0.25.5
The push refers to repository [www.rin.com/flannel/flannel]
601effcb7aab: Pushed
5f70bf18a086: Pushed
0a2b46a5555f: Pushed
c09744fc6e92: Pushed
7278dc615b95: Pushed
d3dd49a2e686: Pushed
2b1d26302574: Pushed
4af63c5dc42d: Pushed
1d9375ff0a15: Pushed
ef7a14b43c43: Pushed
v0.25.5: digest: sha256:89be0f0c323da5f3b9804301c384678b2bf5b1aca27e74813bfe0b5f5005caa7 size: 2414
[root@master k8s-tools]# docker tag flannel/flannel-cni-plugin:v1.5.1-flannel1 www.rin.com/flannel/flannel-cni-plugin:v1.5.1-flannel1
[root@master k8s-tools]# docker push www.rin.com/flannel/flannel-cni-plugin:v1.5.1-flannel1
The push refers to repository [www.rin.com/flannel/flannel-cni-plugin]
21692b7dc30c: Pushed
ef7a14b43c43: Mounted from flannel/flannel
v1.5.1-flannel1: digest: sha256:c026a9ad3956cb1f98fe453f26860b1c3a7200969269ad47a9b5996f25ab0a18 size: 738
#Edit kube-flannel.yml to point the images at the local registry
bash
vim kube-flannel.yml
The following lines need changing
cpp
[root@master k8s-tools]# grep -n image kube-flannel.yml
146: image: www.rin.com/flannel/flannel:v0.25.5
173: image: www.rin.com/flannel/flannel-cni-plugin:v1.5.1-flannel1
184: image: www.rin.com/flannel/flannel:v0.25.5
#Install the flannel network plugin
bash
kubectl apply -f kube-flannel.yml
cpp
[root@master k8s-tools]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
serviceaccount/flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Scaling out with new nodes:
On every worker node:
1 Confirm the following is in place
2 Swap disabled
3 Installed:
kubelet-1.30.0
kubeadm-1.30.0
kubectl-1.30.0
docker-ce
cri-dockerd
4 The cri-dockerd unit file amended with
--network-plugin=cni
--pod-infra-container-image=www.rin.com/k8s/pause:3.9
5 Services started
kubelet.service
cri-docker.service
Once all of the above is confirmed, the node can join the cluster
bash
kubeadm join 172.25.254.100:6443 --token a4l2fc.ahmfiubi738p3iup --discovery-token-ca-cert-hash sha256:b99fd92564901bbe4c29b376a8616224ebc05279bf4212f775aaa985592cf169 --cri-socket=unix:///var/run/cri-dockerd.sock
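If the token hash is ever lost, it can be recomputed from the cluster CA; the recipe below is the standard kubeadm one, demonstrated here against a throwaway CA (on the master you would read /etc/kubernetes/pki/ca.crt instead):

```shell
tmp=$(mktemp -d)
# stand-in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
    -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -days 1 2>/dev/null
# sha256 of the CA public key, as used by --discovery-token-ca-cert-hash
hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```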
Check the cluster:
bash
kubectl get nodes
cpp
[root@master k8s-tools]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master.rin.com Ready control-plane 41m v1.30.0
node1.rin.com Ready <none> 3m7s v1.30.0
node2.rin.com Ready <none> 2m27s v1.30.0

Load the images:
bash
docker load -i busyboxplus.tar.gz
docker load -i myapp.tar.gz
Tag them:
bash
docker tag timinglee/myapp:v1 www.rin.com/library/myapp:v1
docker tag timinglee/myapp:v2 www.rin.com/library/myapp:v2
docker tag busyboxplus:latest www.rin.com/library/busyboxplus:latest
Log in to the registry:
bash
docker login www.rin.com
Push the images to the registry:
bash
docker push www.rin.com/library/busyboxplus:latest
docker push www.rin.com/library/myapp:v1
docker push www.rin.com/library/myapp:v2
Updating application versions
#Create pods with a controller
This creates a Deployment named test1, which manages 2 pod replicas based on
the myapp:v1 image.
bash
kubectl create deployment test1 --image myapp:v1 --replicas 2
Expose the port:
bash
kubectl expose deployment test1 --port 80 --target-port 80
cpp
[root@master docker-images]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h12m
test1 ClusterIP 10.102.5.196 <none> 80/TCP 13s
Access the service:
cpp
[root@master docker-images]# curl 10.102.5.196
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
#View the rollout history:
cpp
[root@master docker-images]# kubectl rollout history deployment test1
deployment.apps/test1
REVISION CHANGE-CAUSE
1 <none>
#Update the controller's image version
bash
kubectl set image deployment/test1 myapp=myapp:v2
View the history again:
cpp
[root@master docker-images]# kubectl rollout history deployment test1
deployment.apps/test1
REVISION CHANGE-CAUSE
1 <none>
2 <none>
Test the served content:
cpp
[root@master docker-images]# curl 10.102.5.196
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Roll back to the earlier version:
cpp
[root@master docker-images]# kubectl rollout undo deployment test1 --to-revision 1
deployment.apps/test1 rolled back
[root@master docker-images]# curl 10.102.5.196
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Deploying applications with yaml files
Deploying with yaml files has the following advantages
Declarative configuration:
- Clear expression of desired state: the deployment requirements of an application, including replica count, container configuration, and network settings, are described declaratively. This keeps the configuration easy to understand and maintain and makes the application's intended state easy to inspect.
- Repeatability and version control: configuration files can be version-controlled, guaranteeing consistent deployments across environments. Rolling back to an earlier version or reusing the same configuration elsewhere is straightforward.
- Team collaboration: configuration files are easy to share, review, and amend within a team, which improves the reliability and stability of deployments.
Flexibility and extensibility:
- Rich configuration options: every Kubernetes resource, such as Deployment, Service, ConfigMap, and Secret, can be configured in detail through YAML and customized to the application's specific needs.
- Composition and extension: the configuration for several resources can be combined in one or more YAML files to build complex deployment architectures, and new resources can be added or existing ones modified as requirements evolve.
Tool integration:
- CI/CD integration: YAML configuration files integrate with continuous integration and continuous deployment (CI/CD) tools to automate application rollout, for example triggering a deployment to different environments after every code commit.
- Command-line support: the Kubernetes command-line tool kubectl has first-class support for YAML files, making it easy to apply, update, and delete configurations; other tools can validate and analyze the files for correctness and safety.
Resource manifest fields
Field | Type | Description |
---|---|---|
apiVersion | String | The K8S API version, currently usually v1; list the available values with kubectl api-versions |
kind | String | The resource type the yaml file defines, e.g. Pod |
metadata | Object | Metadata object; the fixed key is metadata |
metadata.name | String | Name of the object, chosen by us, e.g. the Pod's name |
metadata.namespace | String | Namespace of the object, defined by us |
spec | Object | Detailed definition of the object; the fixed key is spec |
spec.containers[] | list | Container list of the spec object |
spec.containers[].name | String | Name of the container |
spec.containers[].image | String | Image the container uses |
spec.containers[].imagePullPolicy | String | Image pull policy, one of: (1) Always: always try to re-pull the image (2) IfNotPresent: use the local image if one exists (3) Never: only ever use the local image |
spec.containers[].command[] | list | Command run when the container starts; defaults to the command baked into the image |
spec.containers[].args[] | list | Arguments to the startup command; several may be given |
spec.containers[].workingDir | String | Working directory of the container |
spec.containers[].volumeMounts[] | list | Volume mounts inside the container |
spec.containers[].volumeMounts[].name | String | Name of a volume the container may mount |
spec.containers[].volumeMounts[].mountPath | String | Path the volume is mounted at |
spec.containers[].volumeMounts[].readOnly | String | Read/write mode of the mount path, true or false; defaults to read-write |
spec.containers[].ports[] | list | Ports the container uses |
spec.containers[].ports[].name | String | Port name |
spec.containers[].ports[].containerPort | String | Port the container listens on |
spec.containers[].ports[].hostPort | String | Port the container's host listens on; defaults to containerPort. Note that with hostPort set, one host cannot run two replicas of the container (the host port would conflict) |
spec.containers[].ports[].protocol | String | Port protocol, TCP or UDP; defaults to TCP |
spec.containers[].env[] | list | Environment variables set before the container runs |
spec.containers[].env[].name | String | Environment variable name |
spec.containers[].env[].value | String | Environment variable value |
spec.containers[].resources | Object | Resource limits and requests (the container's resource ceiling starts here) |
spec.containers[].resources.limits | Object | Upper limits on the container's runtime resources |
spec.containers[].resources.limits.cpu | String | CPU limit in cores; 1 = 1000m |
spec.containers[].resources.limits.memory | String | Memory limit, in MiB or GiB |
spec.containers[].resources.requests | Object | Resources requested at startup and scheduling time |
spec.containers[].resources.requests.cpu | String | CPU request in cores, the amount available to the container at startup |
spec.containers[].resources.requests.memory | String | Memory request in MiB or GiB, the amount available at startup |
spec.restartPolicy | String | Pod restart policy, default Always: (1) Always: whenever the pod terminates, however the container exited, kubelet restarts it (2) OnFailure: kubelet restarts the container only when it exits non-zero; a clean exit (code 0) is not restarted (3) Never: kubelet reports the exit code to the Master and does not restart the pod |
spec.nodeSelector | Object | Node label selector, given as key: value |
spec.imagePullSecrets | Object | Secret used when pulling the image, given as name: secretkey |
spec.hostNetwork | Boolean | Whether to use host networking; default false. With true the pod uses the host's network instead of the docker bridge, and a second replica cannot start on the same host |
Init container example:
cpp
[root@master ~]# cat pod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
name: initpod
name: initpod
spec:
containers:
- image: myapp:v1
name: myapp
initContainers:
- name: init-myservice
image: busybox
command: ["sh","-c","until test -e /testfile;do echo wating for myservice; sleep 2;done"]
[root@master ~]# kubectl apply -f pod.yml
pod/initpod unchanged
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
initpod 0/1 Init:0/1 0 12m
test1-78bbb8f59d-bt5q2 1/1 Running 0 54m
test1-78bbb8f59d-k79wj 1/1 Running 0 54m
[root@master ~]# kubectl logs pods/initpod init-myservice
wating for myservice
wating for myservice
wating for myservice
...(the same line repeats every 2 seconds until /testfile appears)
[root@master ~]# kubectl exec pods/initpod -c init-myservice -- /bin/sh -c "touch /testfile"
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
initpod 1/1 Running 0 12m
test1-78bbb8f59d-bt5q2 1/1 Running 0 54m
test1-78bbb8f59d-k79wj 1/1 Running 0 54m
[root@master ~]#
Controllers
The ReplicaSet controller
ReplicaSet features
- ReplicaSet is the next generation of the Replication Controller and is the officially recommended choice
- The only difference between ReplicaSet and Replication Controller is selector support: ReplicaSet supports the newer set-based selector requirements
- A ReplicaSet ensures that a specified number of Pod replicas is running at any given time
- Although ReplicaSets can be used on their own, today they are mainly used by Deployments as the mechanism that orchestrates Pod creation, deletion, and updates
ReplicaSet fields
Field | Type | Description |
---|---|---|
spec | Object | Detailed definition of the object; the fixed key is spec |
spec.replicas | integer | Number of pods to maintain |
spec.selector | Object | Label query over pods, matched against the pod count |
spec.selector.matchLabels | string | Label name and value the selector queries, given as key: value |
spec.template | Object | Description of the pod: labels, container details, and so on |
spec.template.metadata | Object | Pod attributes |
spec.template.metadata.labels | string | Pod labels |
spec.template.spec | Object | Detailed definition of the pod |
spec.template.spec.containers | list | Container list of the spec object |
spec.template.spec.containers.name | string | Container name |
spec.template.spec.containers.image | string | Container image |
ReplicaSet example:
cpp
#Generate the yml file
[root@master ~]# kubectl create deployment replicaset --image myapp:v1 --dry-run=client -o yaml > replicaset.yml
#Edit the configuration file
[root@master ~]# cat replicaset.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: replicaset
spec:
replicas: 2
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: myapp:v1
name: myapp
#Create a ReplicaSet resource named replicaset
[root@master ~]# kubectl apply -f replicaset.yml
replicaset.apps/replicaset created
#List the pods
[root@master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-gq8q7 1/1 Running 0 7s app=myapp
replicaset-zpln9 1/1 Running 0 7s app=myapp
#Overwrite the app label of one of the pods with rin
[root@master ~]# kubectl label pod replicaset-gq8q7 app=rin --overwrite
pod/replicaset-gq8q7 labeled
#List the pods again: the ReplicaSet detects the label change and spawns a new pod with app=myapp
[root@master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-gq8q7 1/1 Running 0 73s app=rin
replicaset-zdhwr 1/1 Running 0 1s app=myapp
replicaset-zpln9 1/1 Running 0 73s app=myapp
#Change the rin label back to myapp
[root@master ~]# kubectl label pod replicaset-gq8q7 app=myapp --overwrite
pod/replicaset-gq8q7 labeled
#The surplus pod is deleted automatically
[root@master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-gq8q7 1/1 Running 0 109s app=myapp
replicaset-zpln9 1/1 Running 0 109s app=myapp
#Delete one pod at random
[root@master ~]# kubectl delete pods replicaset-gq8q7
pod "replicaset-gq8q7" deleted
#The controller replaces the pod automatically
[root@master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
replicaset-bwcqc 1/1 Running 0 6s app=myapp
replicaset-zpln9 1/1 Running 0 2m30s app=myapp
#Clean up the resources
[root@master ~]# kubectl delete -f replicaset.yml
replicaset.apps "replicaset" deleted
The Deployment controller

Deployment features
- To better solve service orchestration, kubernetes introduced the Deployment controller in v1.2
- A Deployment does not manage pods directly; it manages them indirectly through ReplicaSets
- Deployment manages ReplicaSet, and ReplicaSet manages Pod
- A Deployment provides a declarative way to define Pods and ReplicaSets
- Within a Deployment, each ReplicaSet corresponds to one version
Typical use cases:
- Creating Pods and ReplicaSets
- Rolling updates and rollbacks
- Scaling out and in
- Pausing and resuming rollouts
Deployment example
cpp
[root@master ~]# kubectl create deployment deployment --image myapp:v1 --dry-run=client -o yaml > deployment.yml
[root@master ~]# cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 4
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: myapp:v1
name: myapp
[root@master ~]# kubectl apply -f deployment.yml
deployment.apps/deployment created
[root@master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
deployment-5d886954d4-f5r2c 1/1 Running 0 5s app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-gnjh4 1/1 Running 0 5s app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-lxfdv 1/1 Running 0 5s app=myapp,pod-template-hash=5d886954d4
deployment-5d886954d4-vbngd 1/1 Running 0 5s app=myapp,pod-template-hash=5d886954d4
Version updates
cpp
[root@master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE ES
deployment-5d886954d4-f5r2c 1/1 Running 0 102s
deployment-5d886954d4-gnjh4 1/1 Running 0 102s
deployment-5d886954d4-lxfdv 1/1 Running 0 102s
deployment-5d886954d4-vbngd 1/1 Running 0 102s
#The pods are running the v1 container
[root@master ~]# curl 10.244.1.8
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a
[root@master ~]# kubectl describe deployments.apps deployment
Name: deployment
Namespace: default
CreationTimestamp: Tue, 12 Aug 2025 23:35:58 -0400
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=myapp
Replicas: 4 desired | 4 updated | 4 total | 4 ava
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=myapp
Containers:
myapp:
Image: myapp:v1
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Node-Selectors: <none>
Tolerations: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: deployment-5d886954d4 (4/4 replicas created)
Events:
Type Reason Age From Mess
---- ------ ---- ---- ----
Normal ScalingReplicaSet 2m44s deployment-controller Scal
#Update the container version
[root@master ~]# vim deployment.yml
[root@master ~]# cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
minReadySeconds: 5 #minimum seconds a new pod must be ready
replicas: 4
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: myapp:v2
name: myapp
[root@master ~]# kubectl apply -f deployment.yml
#Watch the update progress
bash
watch -n5 kubectl get pods -o wide



cpp
[root@master ~]# curl 10.244.1.18
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Version rollback
cpp
[root@master ~]# cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 4
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: myapp:v1 #roll back to the previous version
name: myapp
[root@master ~]# kubectl apply -f deployment.yml
#Verify the rollback
[root@master ~]# curl 10.244.1.8
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Rolling update strategy
cpp
[root@master ~]# cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment
spec:
minReadySeconds: 5
replicas: 4
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- image: myapp:v1
name: myapp
[root@master ~]# kubectl apply -f deployment.yml
Pause and resume
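Pausing a rollout lets several spec changes be applied as a single revision; a hedged sketch against the deployment from this section (command form assumed, not taken from the source):

```shell
kubectl rollout pause deployment deployment
kubectl set image deployment/deployment myapp=myapp:v2    # queued, no pods replaced yet
kubectl rollout resume deployment deployment              # the queued changes roll out together
kubectl rollout status deployment deployment
```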
The DaemonSet controller

DaemonSet features
A DaemonSet ensures that all (or some) nodes run one replica of a Pod. When a node joins the cluster, a Pod is added for it; when a node is removed from the cluster, that Pod is reclaimed. Deleting a DaemonSet deletes all the Pods it created
Typical DaemonSet uses:
- Running a cluster storage daemon on every node, e.g. glusterd or ceph.
- Running a log collector on every node, e.g. fluentd or logstash.
- Running a monitoring agent on every node, e.g. Prometheus Node Exporter or zabbix agent
- The simple case is one DaemonSet covering all nodes for each type of daemon
- A slightly more complex setup uses several DaemonSets per daemon type, with different flags and with different memory and CPU requirements for different hardware types
DaemonSet example
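A minimal sketch of a DaemonSet manifest, reusing this document's myapp:v1 image as the per-node agent (the toleration is an assumption added so the pod can also land on the control-plane node):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      tolerations:                                    # allow scheduling onto the master
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: myapp
        image: myapp:v1
```

There is no replicas field: the controller runs exactly one pod per eligible node.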
The Job controller

Job features
A Job handles batch workloads: a specified number of short-lived, one-off tasks, each of which runs once and finishes
Job characteristics:
- When a pod created by the Job finishes successfully, the Job records the number of successfully completed pods
- When the count of successful pods reaches the specified number, the Job is complete
Job example
cpp
[root@master ~]# cat job.yml
apiVersion: batch/v1
kind: Job
metadata:
name: pi
spec:
completions: 6
parallelism: 2
template:
spec:
containers:
- name: pi
image: perl:5.34.0
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
backoffLimit: 4
[root@master ~]# kubectl apply -f job.yml
job.batch/pi created
# The Job creates 6 Pods in total, 2 at a time (parallelism);
# each Pod exits after computing π, and once all finish the Job turns Complete
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deployment-5d886954d4-4tdr8 1/1 Running 0 36m
deployment-5d886954d4-5ds78 1/1 Running 0 36m
deployment-5d886954d4-hxmh8 1/1 Running 0 36m
deployment-5d886954d4-jhfkq 1/1 Running 0 36m
pi-rmnhh 0/1 ContainerCreating 0 15s
pi-tdpcq 0/1 ContainerCreating 0 15s
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deployment-5d886954d4-4tdr8 1/1 Running 0 38m
deployment-5d886954d4-5ds78 1/1 Running 0 38m
deployment-5d886954d4-hxmh8 1/1 Running 0 37m
deployment-5d886954d4-jhfkq 1/1 Running 0 38m
pi-rmnhh 1/1 Running 0 93s
pi-tdpcq 1/1 Running 0 93s
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
deployment-5d886954d4-4tdr8 1/1 Running 0 40m
deployment-5d886954d4-5ds78 1/1 Running 0 40m
deployment-5d886954d4-hxmh8 1/1 Running 0 39m
deployment-5d886954d4-jhfkq 1/1 Running 0 40m
pi-jf6jt 0/1 Completed 0 110s
pi-jqmwr 0/1 Completed 0 2m
pi-llvsk 0/1 Completed 0 116s
pi-pfswl 0/1 Completed 0 106s
pi-rmnhh 0/1 Completed 0 3m37s
pi-tdpcq 0/1 Completed 0 3m37s
!NOTE
Notes on the restart policy:
If set to OnFailure, the Job restarts the container inside the failing Pod rather than creating a new Pod, and the failed count is unchanged.
If set to Never, the Job creates a new Pod when one fails; the failed Pod is neither removed nor restarted, and the failed count increases by 1.
Always is not allowed here: it would mean restarting forever, so the Job's task would keep re-running.
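To illustrate the note above, the same pi Job with OnFailure would differ only in its restartPolicy (a sketch with a hypothetical name, not applied in this walkthrough):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi-onfailure            # hypothetical name
spec:
  completions: 6
  parallelism: 2
  template:
    spec:
      containers:
      - name: pi
        image: perl:5.34.0
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: OnFailure  # restart the container in place on failure
  backoffLimit: 4
```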
CronJob controller
CronJob controller functions
- A CronJob creates Jobs on a time-based schedule.
- The CronJob controller manages Job resources and, through them, Pod objects.
- A CronJob controls when and how often its workload runs, much like periodic cron jobs on a Linux system.
- A CronJob can run a Job repeatedly at specific points in time.
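The schedule field uses standard five-field cron syntax. A small local helper (hypothetical, purely illustrative) makes the field order explicit:

```shell
# Print the five cron fields of a CronJob spec.schedule value:
# minute, hour, day-of-month, month, day-of-week.
explain_schedule() {
  echo "$1" | awk '{printf "minute=%s hour=%s dom=%s month=%s dow=%s\n", $1, $2, $3, $4, $5}'
}

explain_schedule "* * * * *"    # the every-minute schedule used below
explain_schedule "30 3 * * 1"   # 03:30 every Monday
```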
CronJob controller example
bash
# Run a task every minute that prints the current time and a greeting
[root@master ~]# cat cronjob.yml
apiVersion: batch/v1
kind: CronJob
metadata:
name: hello
spec:
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
imagePullPolicy: IfNotPresent
command:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
[root@master ~]# kubectl apply -f cronjob.yml
cronjob.batch/hello created
[root@master ~]# kubectl get jobs --watch
NAME STATUS COMPLETIONS DURATION AGE
hello-29251145 Complete 1/1 5s 77s
hello-29251146 Complete 1/1 4s 17s
hello-29251147 Running 0/1 0s
hello-29251147 Running 0/1 0s 0s
hello-29251147 Running 0/1 4s 4s
hello-29251147 Complete 1/1 4s 4s
hello-29251148 Running 0/1 0s
hello-29251148 Running 0/1 0s 0s
hello-29251148 Running 0/1 4s 4s
hello-29251148 Complete 1/1 4s 4s
hello-29251145 Complete 1/1 5s 3m4s
# View the log output (tab completion lists the matching Pods):
[root@master ~]# kubectl logs hello-2925114
hello-29251145-6nl57 hello-29251147-4lp5b
hello-29251146-7rjkq
[root@master ~]# kubectl logs hello-29251145-6nl57
Wed Aug 13 07:05:02 UTC 2025
Hello from the Kubernetes cluster
[root@master ~]# kubectl logs hello-29251147-4lp5b
Wed Aug 13 07:07:01 UTC 2025
Hello from the Kubernetes cluster
# Clean up
[root@master ~]# kubectl delete -f cronjob.yml
cronjob.batch "hello" deleted
[root@master ~]# kubectl get cronjobs
No resources found in default namespace.
Microservices
Service types
Service type | Description |
---|---|
ClusterIP | The default: k8s assigns the Service a virtual IP that is reachable only from inside the cluster |
NodePort | Exposes the Service on a port of each Node; any NodeIP:nodePort is routed to the ClusterIP |
LoadBalancer | Builds on NodePort: a cloud provider provisions an external load balancer that forwards requests to NodeIP:NodePort; only usable on cloud platforms |
ExternalName | Forwards the Service to a domain name via a DNS CNAME record (set with spec.externalName) |
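The table mentions ExternalName, which is not demonstrated below; a minimal sketch (the name and target domain are placeholders) would be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-example          # hypothetical name
spec:
  type: ExternalName
  externalName: www.example.com   # DNS CNAME target, placeholder value
```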
bash
[root@master ~]# cat rin.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: rin
name: rin
spec:
replicas: 2
selector:
matchLabels:
app: rin
template:
metadata:
creationTimestamp: null
labels:
app: rin
spec:
containers:
- image: myapp:v1
name: myapp
---
apiVersion: v1
kind: Service
metadata:
labels:
app: rin
name: rin
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rin
[root@master ~]# kubectl apply -f rin.yaml
deployment.apps/rin created
#Services use iptables scheduling by default
[root@master ~]# kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d6h <none>
rin ClusterIP 10.102.246.215 <none> 80/TCP 3m14s app=rin
#The corresponding DNAT rule is visible in the firewall's nat table
[root@master ~]# iptables -t nat -L | grep 80
DNAT tcp -- anywhere anywhere /* default/rin */ tcp to:10.244.2.30:80
ipvs mode
- A Service is implemented jointly by the kube-proxy component and iptables.
- When kube-proxy implements Services through iptables, it must maintain a large number of iptables rules on the host; with many Pods, constantly refreshing those rules consumes significant CPU.
- IPVS-mode Services allow a k8s cluster to support far more Pods.
Configuring ipvs mode
1. Install ipvsadm on all nodes:
bash
for i in 100 10 20 ;
do ssh root@172.25.254.$i 'yum install ipvsadm -y';
done
2. Edit the kube-proxy configuration on the master node:
bash
[root@master ~]# kubectl -n kube-system edit cm kube-proxy
58 metricsBindAddress: ""
59 mode: "ipvs" #make kube-proxy use ipvs mode
60 nftables:
3. Restart the kube-proxy Pods. A Pod reads its configuration when it starts, so Pods that are already running do not pick up the changed ConfigMap and must be recreated:
bash
[root@master ~]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
pod "kube-proxy-2rdcf" deleted
pod "kube-proxy-f9rxr" deleted
pod "kube-proxy-j5l5s" deleted
[root@master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 172.25.254.100:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.0.2:53 Masq 1 0 0
-> 10.244.0.3:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.2:9153 Masq 1 0 0
-> 10.244.0.3:9153 Masq 1 0 0
TCP 10.102.246.215:80 rr
-> 10.244.1.25:80 Masq 1 0 0
-> 10.244.2.30:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.2:53 Masq 1 0 0
-> 10.244.0.3:53 Masq 1 0 0
#After switching to ipvs mode, kube-proxy adds a virtual interface, kube-ipvs0,
#to the host and assigns all Service IPs to it
[root@master ~]# ip a | tail
inet6 fe80::a496:efff:fe3a:17e8/64 scope link
valid_lft forever preferred_lft forever
8: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether 3e:c2:c3:f0:71:f8 brd ff:ff:ff:ff:ff:ff
inet 10.102.246.215/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.10/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 10.96.0.1/32 scope global kube-ipvs0
valid_lft forever preferred_lft forever
Service types in detail
clusterip
Characteristics:
ClusterIP mode is reachable only from inside the cluster, and provides health checking and automatic service discovery for the cluster's Pods.
Example
bash
[root@master ~]# cat myapp.yml
apiVersion: v1
kind: Service
metadata:
labels:
app: rin
name: rin
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: rin
type: ClusterIP
#Look up the CoreDNS Service IP
[root@master ~]# kubectl get svc kube-dns -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 2d7h
#Once the Service exists, the cluster DNS resolves it
[root@master ~]# dig rin.default.svc.cluster.local @10.96.0.10
; <<>> DiG 9.16.23-RH <<>> rin.default.svc.cluster.local @10.96.0.10
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46369
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: fc2abb09975647ac (echoed)
;; QUESTION SECTION:
;rin.default.svc.cluster.local. IN A
;; ANSWER SECTION:
rin.default.svc.cluster.local. 30 IN A 10.102.246.215
;; Query time: 15 msec
;; SERVER: 10.96.0.10#53(10.96.0.10)
;; WHEN: Wed Aug 13 04:55:04 EDT 2025
;; MSG SIZE rcvd: 115
headless, a special ClusterIP mode
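No example follows here; a headless Service is an ordinary ClusterIP Service with clusterIP set to None, so DNS returns the Pod IPs directly instead of a virtual IP. A sketch modeled on the rin Service above (the name is hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rin-headless    # hypothetical name
spec:
  clusterIP: None       # makes the Service headless
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: rin
```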
nodeport
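Likewise, no example follows here; a NodePort variant of the rin Service could look like this sketch (the nodePort value is an assumption within the default range):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rin-nodeport    # hypothetical name
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 31080     # assumed value in the default 30000-32767 range
  selector:
    app: rin
```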
loadbalancer
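A LoadBalancer Service differs only in its type; on bare metal the EXTERNAL-IP stays pending until something like MetalLB assigns one. Sketch (hypothetical name):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rin-lb          # hypothetical name
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: rin
```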
metalLB
Official docs: Installation :: MetalLB, bare metal load-balancer for Kubernetes

MetalLB's job: assign a VIP to LoadBalancer Services
Deployment steps:
1. Enable ipvs mode with strict ARP:
bash
[root@master metallb]# kubectl edit cm -n kube-system kube-proxy
59 mode: "ipvs"
44 strictARP: true
[root@master metallb]# kubectl -n kube-system get pods | awk '/kube-proxy/{system("kubectl -n kube-system delete pods "$1)}'
pod "kube-proxy-2wsl5" deleted
pod "kube-proxy-hbv2n" deleted
pod "kube-proxy-spgcm" deleted
2. Download the deployment manifest (the release should match the v0.14.8 images used below):
bash
wget https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml
3. Edit the image paths in the manifest to match the harbor project layout:
bash
[root@k8s-master ~]# vim metallb-native.yaml
image: metallb/controller:v0.14.8
image: metallb/speaker:v0.14.8
Load the offline images:
bash
[root@master metallb]# docker load -i metalLB.tag.gz
f144bb4c7c7f: Loading layer [==================================================>] 327.7kB/327.7kB
49626df344c9: Loading layer [==================================================>] 40.96kB/40.96kB
945d17be9a3e: Loading layer [==================================================>] 2.396MB/2.396MB
4d049f83d9cf: Loading layer [==================================================>] 1.536kB/1.536kB
af5aa97ebe6c: Loading layer [==================================================>] 2.56kB/2.56kB
ac805962e479: Loading layer [==================================================>] 2.56kB/2.56kB
bbb6cacb8c82: Loading layer [==================================================>] 2.56kB/2.56kB
2a92d6ac9e4f: Loading layer [==================================================>] 1.536kB/1.536kB
1a73b54f556b: Loading layer [==================================================>] 10.24kB/10.24kB
f4aee9e53c42: Loading layer [==================================================>] 3.072kB/3.072kB
b336e209998f: Loading layer [==================================================>] 238.6kB/238.6kB
371134a463a4: Loading layer [==================================================>] 61.38MB/61.38MB
6e64357636e3: Loading layer [==================================================>] 13.31kB/13.31kB
Loaded image: quay.io/metallb/controller:v0.14.8
0b8392a2e3be: Loading layer [==================================================>] 2.137MB/2.137MB
3d5a6e3a17d1: Loading layer [==================================================>] 65.46MB/65.46MB
8311c2bd52ed: Loading layer [==================================================>] 49.76MB/49.76MB
4f4d43efeed6: Loading layer [==================================================>] 3.584kB/3.584kB
881ed6f5069a: Loading layer [==================================================>] 13.31kB/13.31kB
Loaded image: quay.io/metallb/speaker:v0.14.8
Create a metallb project in the harbor registry:

Push the images to harbor:
bash
[root@master metallb]# docker tag quay.io/metallb/speaker:v0.14.8 www.rin.com/metallb/speaker:v0.14.8
[root@master metallb]# docker tag quay.io/metallb/controller:v0.14.8 www.rin.com/metallb/controller:v0.14.8
[root@master metallb]# docker push www.rin.com/metallb/speaker:v0.14.8
[root@master metallb]# docker push www.rin.com/metallb/controller:v0.14.8
Deploy the service:
bash
kubectl apply -f metallb-native.yaml
kubectl -n metallb-system get pods
Output:
bash
[root@master metallb]# kubectl apply -f metallb-native.yaml
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/servicel2statuses.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created
secret/metallb-webhook-cert created
service/metallb-webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
[root@master metallb]# kubectl -n metallb-system get pods
NAME READY STATUS RESTARTS AGE
controller-65957f77c8-gjzvj 0/1 Running 0 6s
speaker-4dnxg 0/1 ContainerCreating 0 6s
speaker-8cctg 0/1 ContainerCreating 0 6s
speaker-jldqf 0/1 ContainerCreating 0 6s
[root@master metallb]# kubectl -n metallb-system get pods
NAME READY STATUS RESTARTS AGE
controller-65957f77c8-gjzvj 1/1 Running 0 22s
speaker-4dnxg 0/1 Running 0 22s
speaker-8cctg 0/1 Running 0 22s
speaker-jldqf 0/1 Running 0 22s
Configure the address pool:
bash
[root@master metallb]# vim configmap.yml
[root@master metallb]# cat configmap.yml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: first-pool #name of the address pool
namespace: metallb-system
spec:
addresses:
- 172.25.254.50-172.25.254.99 #change to your local address range
--- #different kinds must be separated by ---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
name: example
namespace: metallb-system
spec:
ipAddressPools:
- first-pool #reference the pool defined above
# Change the rin Service type to LoadBalancer
kubectl patch svc rin -p '{"spec":{"type":"LoadBalancer"}}'
#Check the IP assignment:
[root@master metallb]# kubectl get svc rin
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rin LoadBalancer 10.102.246.215 172.25.254.50 80:32709/TCP 69m
#Access the service from outside the cluster via the assigned address
[root@master metallb]# curl 172.25.254.50
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
Deploying ingress
Download the deployment manifest (the release should match the v1.13.1 images used below):
bash
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.13.1/deploy/static/provider/baremetal/deploy.yaml
Create the harbor project:
Push the images ingress needs to harbor:
bash
docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.1 www.rin.com/ingress-nginx/kube-webhook-certgen:v1.6.1
docker tag registry.k8s.io/ingress-nginx/controller:v1.13.1 www.rin.com/ingress-nginx/controller:v1.13.1
docker push www.rin.com/ingress-nginx/kube-webhook-certgen:v1.6.1
docker push www.rin.com/ingress-nginx/controller:v1.13.1
Install ingress
bash
[root@master ingress-1.13.1]# vim deploy.yaml
445 image: ingress-nginx/controller:v1.13.1
546 image: ingress-nginx/kube-webhook-certgen:v1.6.1
599 image: ingress-nginx/kube-webhook-certgen:v1.6.1
bash
kubectl apply -f deploy.yaml
bash
[root@master ingress-1.13.1]# kubectl -n ingress-nginx get pods
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-85bc7f665d-n9s4s 1/1 Running 0 4m29s
[root@master ingress-1.13.1]# kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.107.49.212 <none> 80:30405/TCP,443:31020/TCP 5m1s
ingress-nginx-controller-admission ClusterIP 10.96.147.88 <none> 443/TCP 5m1s
#Change the controller Service to type LoadBalancer
bash
[root@master ingress-1.13.1]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
49 type: LoadBalancer
[root@master ingress-1.13.1]# kubectl -n ingress-nginx get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.107.49.212 <pending> 80:30405/TCP,443:31020/TCP 6m32s
ingress-nginx-controller-admission ClusterIP 10.96.147.88 <none> 443/TCP 6m32s
Test ingress
bash
#Generate the yaml file
kubectl create ingress webcluster --rule '*/=rin-svc:80' --dry-run=client -o yaml > rin-ingress.yml
[root@master ingress-1.13.1]# cat rin-ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
creationTimestamp: null
name: webcluster
spec:
ingressClassName: nginx
rules:
- http:
paths:
- backend:
service:
name: rin-svc
port:
number: 80
path: /
pathType: Prefix
status:
loadBalancer: {}
#Create the ingress resource
bash
[root@master ingress-1.13.1]# kubectl apply -f rin-ingress.yml
ingress.networking.k8s.io/webcluster created
[root@master ingress-1.13.1]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
webcluster nginx * 172.25.254.10 80 68s
Path-based routing
1. Create the myapp deployments used for testing
bash
[root@master ingress-1.13.1]# cat myapp-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: myapp-v1
name: myapp-v1
spec:
replicas: 1
selector:
matchLabels:
app: myapp-v1
strategy: {}
template:
metadata:
labels:
app: myapp-v1
spec:
containers:
- image: myapp:v1
name: myapp
---
apiVersion: v1
kind: Service
metadata:
labels:
app: myapp-v1
name: myapp-v1
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: myapp-v1
[root@master ingress-1.13.1]# cat myapp-v2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: myapp-v2
name: myapp-v2
spec:
replicas: 1
selector:
matchLabels:
app: myapp-v2
strategy: {}
template:
metadata:
labels:
app: myapp-v2
spec:
containers:
- image: myapp:v2
name: myapp
---
apiVersion: v1
kind: Service
metadata:
labels:
app: myapp-v2
name: myapp-v2
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: myapp-v2
bash
[root@master ingress-1.13.1]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4h46m
myapp-v1 ClusterIP 10.107.211.82 <none> 80/TCP 3m28s
myapp-v2 ClusterIP 10.96.65.27 <none> 80/TCP 97s
2. Create the ingress yaml
bash
[root@master ingress-1.13.1]# cat ingress1.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
name: ingress1
spec:
ingressClassName: nginx
rules:
- host: www.rin.com
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /v1
pathType: Prefix
- backend:
service:
name: myapp-v2
port:
number: 80
path: /v2
pathType: Prefix
Apply it:
bash
kubectl apply -f ingress1.yml
Test it.
Add a local hosts entry:
bash
echo 172.25.254.50 www.rin.com >> /etc/hosts
bash
[root@master ingress-1.13.1]# curl www.rin.com/v1
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master ingress-1.13.1]# curl www.rin.com/v2
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
bash
#demonstrating what nginx.ingress.kubernetes.io/rewrite-target: / achieves
[root@master ingress-1.13.1]# curl www.rin.com/v2/aaa
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Host-based routing
bash
#Add local hosts entries
[root@master ingress-1.13.1]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.100 master.rin.com
172.25.254.200 harbor.rin.com www.rin.com
172.25.254.10 node1.rin.com
172.25.254.20 node2.rin.com
172.25.254.50 www.rin.com myapp1.rin.com myapp2.rin.com
# Create the host-based yml file
[root@master ingress-1.13.1]# cat ingress2.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
name: ingress2
spec:
ingressClassName: nginx
rules:
- host: myapp1.rin.com
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
- host: myapp2.rin.com
http:
paths:
- backend:
service:
name: myapp-v2
port:
number: 80
path: /
pathType: Prefix
#Create the ingress from the file
kubectl apply -f ingress2.yml
[root@master ingress-1.13.1]# kubectl describe ingress ingress2
Name: ingress2
Labels: <none>
Namespace: default
Address: 172.25.254.10
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
myapp1.rin.com
/ myapp-v1:80 (10.244.1.5:80)
myapp2.rin.com
/ myapp-v2:80 (10.244.2.8:80)
Annotations: nginx.ingress.kubernetes.io/rewrite-target: /
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 2m6s (x3 over 5m36s) nginx-ingress-controller Scheduled for sync
#Test:
[root@master ingress-1.13.1]# curl myapp1.rin.com
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master ingress-1.13.1]# curl myapp2.rin.com
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
TLS encryption
#Create a self-signed certificate
bash
openssl req -newkey rsa:2048 -nodes -keyout tls.key -x509 -days 365 -subj "/CN=nginxsvc/O=nginxsvc" -out tls.crt
#Create a Secret to hold the certificate
In Kubernetes a Secret stores sensitive data; it is not itself a form of encryption.
bash
kubectl create secret tls web-tls-secret --key tls.key --cert tls.crt
secret/web-tls-secret created
[root@master ingress-1.13.1]# kubectl get secrets
NAME TYPE DATA AGE
web-tls-secret kubernetes.io/tls 2 9s
#Create the TLS-enabled ingress3 yaml
bash
[root@master ingress-1.13.1]# cat ingress3.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
name: ingress3
spec:
tls:
- hosts:
- myapp-tls.rin.com
secretName: web-tls-secret
ingressClassName: nginx
rules:
- host: myapp-tls.rin.com
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
[root@master ingress-1.13.1]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.100 master.rin.com
172.25.254.200 harbor.rin.com www.rin.com
172.25.254.10 node1.rin.com
172.25.254.20 node2.rin.com
172.25.254.50 www.rin.com myapp1.rin.com myapp2.rin.com myapp-tls.rin.com
Test:
bash
[root@master ingress-1.13.1]# curl -k https://myapp-tls.rin.com
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master ingress-1.13.1]# curl https://myapp-tls.rin.com
curl: (60) SSL certificate problem: self-signed certificate
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
Basic-auth protection
bash
#Create the auth file
[root@master ingress-1.13.1]# dnf install httpd-tools -y
[root@master ingress-1.13.1]# htpasswd -cm auth rin
New password:
Re-type new password:
Adding password for user rin
[root@master ingress-1.13.1]# cat auth
rin:$apr1$4qWC8T1y$fXRb/aB5m33pLGt5x2JRr0
#Create the secret holding the credentials
[root@master ingress-1.13.1]# kubectl create secret generic auth-web --from-file auth
secret/auth-web created
[root@master ingress-1.13.1]# kubectl describe secrets auth-web
Name: auth-web
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
auth: 42 bytes
#Create the basic-auth ingress4 yaml
[root@master ingress-1.13.1]# vim ingress4.yml
[root@master ingress-1.13.1]# cat ingress4.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
name: ingress4
spec:
tls:
- hosts:
- myapp-tls.rin.com
secretName: web-tls-secret
ingressClassName: nginx
rules:
- host: myapp-tls.rin.com
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
#Apply ingress4
[root@master ingress-1.13.1]# kubectl apply -f ingress4.yml
ingress.networking.k8s.io/ingress4 created
[root@master ingress-1.13.1]# kubectl describe ingress ingress4
Name: ingress4
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
TLS:
web-tls-secret terminates myapp-tls.rin.com
Rules:
Host Path Backends
---- ---- --------
myapp-tls.rin.com
/ myapp-v1:80 (10.244.1.5:80)
Annotations: nginx.ingress.kubernetes.io/auth-realm: Please input username and password
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-type: basic
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 11s nginx-ingress-controller Scheduled for sync
#Test:
[root@master ingress-1.13.1]# curl -k https://myapp-tls.rin.com
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@master ingress-1.13.1]# curl -k https://myapp-tls.rin.com -urin:123
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
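The `$apr1$...` value in the auth file is an Apache MD5 crypt hash. As an aside, if httpd-tools is unavailable, `openssl passwd -apr1` can generate a compatible entry; the user name and password here are placeholders:

```shell
# Generate an htpasswd-compatible basic-auth entry without httpd-tools.
# "rin" / "123" are placeholder credentials.
hash="$(openssl passwd -apr1 123)"
echo "rin:$hash"
```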
rewrite redirection
bash
#Point the default path at /hostname.html
[root@master ingress-1.13.1]# vim ingress5.yml
[root@master ingress-1.13.1]# cat ingress5.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/app-root: /hostname.html
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
name: ingress5
spec:
tls:
- hosts:
- myapp-tls.rin.com
secretName: web-tls-secret
ingressClassName: nginx
rules:
- host: myapp-tls.rin.com
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
[root@master ingress-1.13.1]# kubectl apply -f ingress5.yml
ingress.networking.k8s.io/ingress5 created
[root@master ingress-1.13.1]# kubectl describe ingress ingress5
Name: ingress5
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
TLS:
web-tls-secret terminates myapp-tls.rin.com
Rules:
Host Path Backends
---- ---- --------
myapp-tls.rin.com
/ myapp-v1:80 (10.244.1.5:80)
Annotations: nginx.ingress.kubernetes.io/app-root: /hostname.html
nginx.ingress.kubernetes.io/auth-realm: Please input username and password
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-type: basic
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 9s nginx-ingress-controller Scheduled for sync
#测试:
[root@master ingress-1.13.1]# curl -Lk https://myapp-tls.rin.com -urin:123
myapp-v1-7479d6c54d-4dkgr
[root@master ingress-1.13.1]# curl -Lk https://myapp-tls.rin.com/hostname.html -urin:123
myapp-v1-7479d6c54d-4dkgr
[root@master ingress-1.13.1]# curl -Lk https://myapp-tls.rin.com/rin/hostname.html -urin:123
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.12.2</center>
</body>
</html>
#Fix the rewrite for nested paths
[root@master ingress-1.13.1]# vim ingress5.yml
[root@master ingress-1.13.1]# cat ingress5.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/auth-type: basic
nginx.ingress.kubernetes.io/auth-secret: auth-web
nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
name: ingress5
spec:
tls:
- hosts:
- myapp-tls.rin.com
secretName: web-tls-secret
ingressClassName: nginx
rules:
- host: myapp-tls.rin.com
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /(.*)/(.*)
pathType: ImplementationSpecific
#Reapply the configuration:
[root@master ingress-1.13.1]# kubectl apply -f ingress5.yml
ingress.networking.k8s.io/ingress5 configured
#Test:
[root@master ingress-1.13.1]# curl -Lk https://myapp-tls.rin.com/rin/hostname.html -urin:123
myapp-v1-7479d6c54d-4dkgr
[root@master ingress-1.13.1]# curl -Lk https://myapp-tls.rin.com/rin/hostname.html/inin/ -urin:123
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master ingress-1.13.1]#
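The capture-group behaviour of `path: /(.*)/(.*)` with `rewrite-target: /$2` can be mirrored locally with sed (nginx applies this server-side; the helper below is only an illustration of the regex, not part of the deployment):

```shell
# Mirror the ingress rewrite /(.*)/(.*) -> /$2 with an extended regex.
# The first greedy group swallows everything up to the last slash.
rewrite() { echo "$1" | sed -E 's#^/(.*)/(.*)$#/\2#'; }

rewrite /rin/hostname.html        # -> /hostname.html
rewrite /rin/hostname.html/inin/  # -> / (the root page, as the curl above shows)
```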
Canary releases

Canary release:
A canary release (also called a gray release) is a software release strategy.
Its main goal is to test and validate a new version on a small subset of users or servers before rolling it out to the whole production environment, limiting the impact of any serious problems the new version introduces.
It is also a Pod release strategy: a canary rollout adds new Pods before removing old ones, keeping the total Pod count at or above the desired value; after part of the Pods are updated, the rollout pauses, and only once the new Pods are confirmed healthy are the remaining Pods updated.
Canary mechanisms
Priority: header > cookie > weight
header and weight are the most commonly used
Header-based (HTTP header) canary

- Implemented through annotations
- Create a canary ingress configured with the canary header key and value
- Once the canary traffic is validated, switch the main ingress to the new version
- Previously we upgraded via the controller's rolling update (25% at a time by default); a header-based canary makes the upgrade smoother, using a key and value to test whether the new version behaves correctly.
bash
#Create the ingress for version 1
[root@master ingress-1.13.1]# cat ingress7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
name: myapp-v1-ingress
spec:
ingressClassName: nginx
rules:
- host: myapp.rin.com
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
[root@master ingress-1.13.1]# kubectl describe ingress myapp-v1-ingress
Name: myapp-v1-ingress
Labels: <none>
Namespace: default
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
myapp.rin.com
/ myapp-v1:80 (10.244.1.5:80)
Annotations: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 5s nginx-ingress-controller Scheduled for sync
#Create the header-based canary ingress
[root@master ingress-1.13.1]# cat ingress8.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-header: "version"
nginx.ingress.kubernetes.io/canary-by-header-value: "2"
name: myapp-v2-ingress
spec:
ingressClassName: nginx
rules:
- host: myapp.rin.com
http:
paths:
- backend:
service:
name: myapp-v2
port:
number: 80
path: /
pathType: Prefix
[root@master ingress-1.13.1]# kubectl apply -f ingress8.yml
ingress.networking.k8s.io/myapp-v2-ingress unchanged
[root@master ingress-1.13.1]# kubectl describe ingress myapp-v2-ingress
Name: myapp-v2-ingress
Labels: <none>
Namespace: default
Address: 172.25.254.10
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
myapp.rin.com
/ myapp-v2:80 (10.244.2.8:80)
Annotations: nginx.ingress.kubernetes.io/canary: true
nginx.ingress.kubernetes.io/canary-by-header: version
nginx.ingress.kubernetes.io/canary-by-header-value: 2
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 9m25s (x2 over 9m47s) nginx-ingress-controller Scheduled for sync
#Test:
[root@master ingress-1.13.1]# curl myapp.rin.com
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@master ingress-1.13.1]# curl -H "version: 2" myapp.rin.com
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
Weight-based canary

- Implemented through annotations
- Create a canary ingress configured with the canary weight and the total weight
- Once the canary traffic is validated, switch the main ingress to the new version
bash
#Weight-based canary release
[root@master ingress-1.13.1]# cat ingress8.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "10"
nginx.ingress.kubernetes.io/canary-weight-total: "100"
name: myapp-v2-ingress
spec:
ingressClassName: nginx
rules:
- host: myapp.rin.com
http:
paths:
- backend:
service:
name: myapp-v2
port:
number: 80
path: /
pathType: Prefix
[root@master ingress-1.13.1]# cat ingress7.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
name: myapp-v1-ingress
spec:
ingressClassName: nginx
rules:
- host: myapp.rin.com
http:
paths:
- backend:
service:
name: myapp-v1
port:
number: 80
path: /
pathType: Prefix
#Apply the configuration
[root@master ingress-1.13.1]# kubectl apply -f ingress8.yml
ingress.networking.k8s.io/myapp-v2-ingress created
#Write a test script
[root@master ingress-1.13.1]# cat check_ingress.sh
#!/bin/bash
v1=0
v2=0
for (( i=0; i<100; i++))
do
response=`curl -s myapp.rin.com |grep -c v1`
v1=`expr $v1 + $response`
v2=`expr $v2 + 1 - $response`
done
echo "v1:$v1, v2:$v2"
#Test:
[root@master ingress-1.13.1]# sh check_ingress.sh
v1:83, v2:17
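To sanity-check the split, a tiny helper (hypothetical, local-only) computes the observed canary percentage from the script's counts; with canary-weight 10 out of 100, roughly 10% of requests should land on v2:

```shell
# Print the share of requests that hit the canary (v2) backend.
canary_ratio() {
  awk -v v1="$1" -v v2="$2" 'BEGIN { printf "%.0f%%\n", 100 * v2 / (v1 + v2) }'
}

canary_ratio 83 17   # the run above: 17% went to v2
```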
configmap
ConfigMap functions
- A ConfigMap stores configuration data as key/value pairs.
- The configMap resource provides a way to inject configuration data into Pods.
- It decouples configuration files from images, making images portable and reusable.
- etcd limits object size, so a ConfigMap must not exceed 1M.
ConfigMap use cases
- Populating the values of environment variables
- Setting command-line arguments inside a container
- Populating configuration files in a volume
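As a sketch of the environment-variable use case (the Pod name is hypothetical; rin-config is the ConfigMap created below), a Pod can import every key of a ConfigMap with envFrom:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cm-env-demo             # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh", "-c", "env; sleep 3600"]
    envFrom:
    - configMapRef:
        name: rin-config        # each key becomes an environment variable
  restartPolicy: Never
```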
Ways to create a ConfigMap
From literal values
Create a ConfigMap named rin-config, setting two key/value pairs directly with the --from-literal flag: fname=rin and name=rin.
bash
[root@master ~]# kubectl create cm rin-config --from-literal fname=rin --from-literal name=rin
configmap/rin-config created
[root@master ~]# kubectl describe cm rin-config
Name: rin-config
Namespace: default
Labels: <none>
Annotations: <none>
Data #the stored key/value pairs
====
name:
----
rin
fname:
----
rin
BinaryData
====
Events: <none>
[root@master ~]#
From a file
Create a ConfigMap named rin2-config containing the contents of the /etc/resolv.conf file.
--from-file /etc/resolv.conf reads the given file; the ConfigMap gets a key named after the source file (resolv.conf) whose value is that file's contents.
bash
[root@master ~]# cat /etc/resolv.conf
# Generated by NetworkManager
search rin.com
nameserver 114.114.114.114
[root@master ~]# kubectl create cm rin2-config --from-file /etc/resolv.conf
configmap/rin2-config created
[root@master ~]# kubectl describe cm rin2-config
Name: rin2-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
resolv.conf:
----
# Generated by NetworkManager
search rin.com
nameserver 114.114.114.114
BinaryData
====
Events: <none>
From a directory
Create a ConfigMap named rin3-config from the files in the rinconfig/ directory. Every file in the directory (fstab and rc.local) is added to the ConfigMap: each file becomes a key, and its contents become the corresponding value.
--from-file rinconfig/ reads all files in the given directory; the resulting ConfigMap has two keys, fstab and rc.local, holding the original file contents.
bash
[root@master ~]# mkdir rinconfig
[root@master ~]# cp /etc/fstab /etc/rc.local rinconfig/
[root@master ~]# kubectl create cm rin3-config --from-file rinconfig/
configmap/rin3-config created
[root@master ~]# kubectl describe cm rin3-config
Name: rin3-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
fstab:
----
#
# /etc/fstab
# Created by anaconda on Sun Aug 10 19:39:28 2025
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=84f25d20-18e1-4478-a2ab-fad7856e6a0b / xfs defaults 0 0
UUID=2f79294d-0fd8-4375-9f48-96cc02c63620 /boot xfs defaults 0 0
#UUID=7667254b-9823-4ca4-96e2-11cc8917c419 none swap defaults 0 0
rc.local:
----
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
touch /var/lock/subsys/local
mount /dev/sr0 /var/repo
BinaryData
====
Events: <none>
通过yaml文件创建
创建一个名为 rin4-config 的 ConfigMap,但不会实际提交到 Kubernetes 集群(--dry-run=client),而是将配置以 YAML 格式输出到 rin-config.yaml 文件中。
- --from-literal:直接指定键值对(db_host=172.25.254.100 和 db_port=3306)
- --dry-run=client:仅在客户端模拟执行,不实际创建资源
- -o yaml > rin-config.yaml:将结果以 YAML 格式输出到文件
bash
[root@master ~]# kubectl create cm rin4-config --from-literal db_host=172.25.254.100 --from-literal db_port=3306 --dry-run=client -o yaml > rin-config.yaml
[root@master ~]# cat rin-config.yaml
apiVersion: v1
data:
db_host: 172.25.254.100
db_port: "3306"
kind: ConfigMap
metadata:
creationTimestamp: null
name: rin4-config
[root@master ~]# kubectl apply -f rin-config.yaml
configmap/rin4-config created
[root@master ~]# kubectl describe cm rin4-config
Name: rin4-config
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
db_host:
----
172.25.254.100
db_port:
----
3306
BinaryData
====
Events: <none>
bash
[root@master packages]# ls
busybox-latest.tar.gz centos-7.tar.gz ingress-1.13.1 mysql-8.0.tar packages perl-5.34.tar.gz phpmyadmin-latest.tar.gz
[root@master packages]# docker load -i mysql-8.0.tar
1355aaece24a: Loading layer [==================================================>] 116.9MB/116.9MB
0df7536aeacd: Loading layer [==================================================>] 11.26kB/11.26kB
890407b3e3c8: Loading layer [==================================================>] 2.359MB/2.359MB
dbbfcab5c162: Loading layer [==================================================>] 16.94MB/16.94MB
e1fb8ece750b: Loading layer [==================================================>] 6.656kB/6.656kB
9b2614373417: Loading layer [==================================================>] 3.072kB/3.072kB
8650ab4a9279: Loading layer [==================================================>] 147.4MB/147.4MB
8da4a3ff29de: Loading layer [==================================================>] 3.072kB/3.072kB
88e9ddacc888: Loading layer [==================================================>] 515.1MB/515.1MB
23ecc100dc88: Loading layer [==================================================>] 17.41kB/17.41kB
0a931eb72e4f: Loading layer [==================================================>] 1.536kB/1.536kB
Loaded image: mysql:8.0
[root@master packages]# docker load -i phpmyadmin-latest.tar.gz
9853575bc4f9: Loading layer [==================================================>] 77.83MB/77.83MB
4cae4ea97049: Loading layer [==================================================>] 3.584kB/3.584kB
7f0d23b78477: Loading layer [==================================================>] 320.2MB/320.2MB
8f42af1dd50e: Loading layer [==================================================>] 5.12kB/5.12kB
7285b46fc0b1: Loading layer [==================================================>] 51.28MB/51.28MB
886076bbd0e5: Loading layer [==================================================>] 9.728kB/9.728kB
fe49c1c8ccdc: Loading layer [==================================================>] 7.68kB/7.68kB
c98461c57e2d: Loading layer [==================================================>] 13.41MB/13.41MB
4646cbc7a84d: Loading layer [==================================================>] 4.096kB/4.096kB
7183cf0cacbe: Loading layer [==================================================>] 49.48MB/49.48MB
923288b71444: Loading layer [==================================================>] 12.8kB/12.8kB
eb4f3a0b1a71: Loading layer [==================================================>] 4.608kB/4.608kB
43cd9aa62af4: Loading layer [==================================================>] 4.608kB/4.608kB
9f9985f7ecbd: Loading layer [==================================================>] 9.134MB/9.134MB
25d63a36933d: Loading layer [==================================================>] 6.656kB/6.656kB
13ccf69b5807: Loading layer [==================================================>] 53.35MB/53.35MB
a65e8a0ad246: Loading layer [==================================================>] 8.192kB/8.192kB
26f3cdf867bf: Loading layer [==================================================>] 3.584kB/3.584kB
Loaded image: phpmyadmin:latest
[root@master packages]# docker tag mysql:8.0 www.rin.com/library/mysql:8.0
[root@master packages]# docker tag phpmyadmin:latest www.rin.com/library/phpmyadmin:latest
[root@master packages]# docker push www.rin.com/library/mysql:8.0
[root@master packages]# docker push www.rin.com/library/phpmyadmin:latest
[root@master ~]# cat configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: phpmyadmin
data:
MYSQL_ROOT_PASSWORD: "rin123"
MYSQL_DATABASE: "mydb"
PMA_HOST: "127.0.0.1"
PMA_PORT: "3306"
[root@master ~]# cat phpmysqladmin.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: phpmysqladmin
name: phpmysqladmin
spec:
containers:
- image: mysql:8.0
name: mysql
ports:
- containerPort: 3306
envFrom:
- configMapRef:
name: phpmyadmin
- image: phpmyadmin:latest
name: phpmyadmin
ports:
- containerPort: 80
protocol: TCP
hostPort: 80
envFrom:
- configMapRef:
name: phpmyadmin
[root@master ~]# kubectl apply -f configmap.yaml
configmap/phpmyadmin created
[root@master ~]# kubectl get cm phpmyadmin
NAME DATA AGE
phpmyadmin 4 92s
[root@master ~]# kubectl describe cm phpmyadmin
Name: phpmyadmin
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
MYSQL_DATABASE:
----
mydb
MYSQL_ROOT_PASSWORD:
----
rin123
PMA_HOST:
----
127.0.0.1
PMA_PORT:
----
3306
BinaryData
====
Events: <none>
测试:
访问网页,账号root,密码rin123

configmap的使用方式
- 通过环境变量的方式直接传递给pod
- 通过pod的命令行运行方式
- 作为volume的方式挂载到pod内
使用configmap填充环境变量
bash
#将cm中的内容映射为指定变量
[root@master ~]# cat testpod.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: testpod
name: testpod
spec:
containers:
- image: busyboxplus:latest
name: testpod
command:
- /bin/sh
- -c
- env
env:
- name: key1
valueFrom:
configMapKeyRef:
name: rin4-config
key: db_host
- name: key2
valueFrom:
configMapKeyRef:
name: rin4-config
key: db_port
restartPolicy: Never
[root@master ~]# cat rin-config.yaml
apiVersion: v1
data:
db_host: 172.25.254.100
db_port: "3306"
kind: ConfigMap
metadata:
creationTimestamp: null
name: rin4-config
[root@master ~]# kubectl apply -f testpod.yml
pod/testpod created
[root@master ~]# kubectl logs pods/testpod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP_V1_SERVICE_HOST=10.107.211.82
HOSTNAME=testpod
SHLVL=1
MYAPP_V2_SERVICE_HOST=10.96.65.27
HOME=/
MYAPP_V1_PORT=tcp://10.107.211.82:80
MYAPP_V1_SERVICE_PORT=80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.96.65.27:80
MYAPP_V1_PORT_80_TCP_ADDR=10.107.211.82
MYAPP_V2_PORT_80_TCP_ADDR=10.96.65.27
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
MYAPP_V2_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
key1=172.25.254.100
key2=3306
MYAPP_V1_PORT_80_TCP=tcp://10.107.211.82:80
MYAPP_V2_PORT_80_TCP=tcp://10.96.65.27:80
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
#把cm中的值直接映射为变量
[root@master ~]# cat testpod2.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: testpod
name: testpod
spec:
containers:
- image: busyboxplus:latest
name: testpod
command:
- /bin/sh
- -c
- env
envFrom:
- configMapRef:
name: rin4-config
restartPolicy: Never
#查看日志
[root@master ~]# kubectl logs pods/testpod
KUBERNETES_PORT=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT=443
MYAPP_V1_SERVICE_HOST=10.107.211.82
HOSTNAME=testpod
SHLVL=1
MYAPP_V2_SERVICE_HOST=10.96.65.27
HOME=/
db_port=3306
MYAPP_V1_PORT=tcp://10.107.211.82:80
MYAPP_V1_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.96.65.27:80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V1_PORT_80_TCP_ADDR=10.107.211.82
MYAPP_V2_PORT_80_TCP_ADDR=10.96.65.27
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
MYAPP_V1_PORT_80_TCP_PROTO=tcp
MYAPP_V2_PORT_80_TCP_PORT=80
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.107.211.82:80
MYAPP_V2_PORT_80_TCP=tcp://10.96.65.27:80
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
PWD=/
KUBERNETES_SERVICE_HOST=10.96.0.1
db_host=172.25.254.100
#在pod命令行中使用变量
[root@master ~]# cat testpod3.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: testpod
name: testpod
spec:
containers:
- image: busyboxplus:latest
name: testpod
command:
- /bin/sh
- -c
- echo ${db_host} ${db_port}
envFrom:
- configMapRef:
name: rin4-config
restartPolicy: Never
[root@master ~]# kubectl apply -f testpod3.yml
pod/testpod created
[root@master ~]# kubectl logs pods/testpod
172.25.254.100 3306
通过数据卷使用configmap
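这种方式是把 ConfigMap 以卷挂载到容器内:cm 中的每个键会成为挂载目录下的一个文件,文件内容即键值。下面是一个最小示意(假设前面创建的 rin4-config 已存在,Pod 名 testpod4 为示例):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: testpod4
spec:
  containers:
  - image: busyboxplus:latest
    name: testpod4
    command:
    - /bin/sh
    - -c
    - cat /config/db_host /config/db_port
    volumeMounts:
    - name: config-volume
      mountPath: /config        # rin4-config 的每个键映射为此目录下的同名文件
  volumes:
  - name: config-volume
    configMap:
      name: rin4-config
  restartPolicy: Never
```

apply 后通过 kubectl logs pods/testpod4 应能看到 172.25.254.100 和 3306,即两个键对应文件的内容。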
secrets配置管理
- Secret 对象类型用来保存敏感信息,例如密码、OAuth 令牌和 ssh key。
- 敏感信息放在 secret 中比放在 Pod 的定义或者容器镜像中更加安全和灵活。
- Pod 可以用两种方式使用 secret:
  - 作为 volume 中的文件被挂载到 pod 中的一个或者多个容器里。
  - 当 kubelet 为 pod 拉取镜像时使用。
- Secret的类型:
  - Service Account:Kubernetes 自动创建包含访问 API 凭据的 secret,并自动修改 pod 以使用此类型的 secret。
  - Opaque:使用base64编码存储信息,可以通过base64 --decode解码获得原始数据,因此安全性较弱。
  - kubernetes.io/dockerconfigjson:用于存储docker registry的认证信息。
secrets的创建
在创建secrets时我们可以用命令的方法或者yaml文件的方法
从文件创建
bash
[root@master ~]# echo -n rinleren > username.txt
[root@master ~]# echo -n rin > password.txt
[root@master ~]# kubectl create secret generic userlist --from-file username.txt --from-file password.txt
secret/userlist created
[root@master ~]# kubectl get secrets userlist -o yaml
apiVersion: v1
data:
password.txt: cmlu
username.txt: cmlubGVyZW4=
kind: Secret
metadata:
creationTimestamp: "2025-08-16T08:59:37Z"
name: userlist
namespace: default
resourceVersion: "87238"
uid: ffb3d61c-d56f-456e-82c1-568e7bd42894
type: Opaque
编写yaml文件
bash
[root@master ~]# echo -n rinleren | base64
cmlubGVyZW4=
[root@master ~]# echo -n rin | base64
cmlu
[root@master ~]# cat userlist.yml
apiVersion: v1
kind: Secret
metadata:
creationTimestamp: null
name: userlist
type: Opaque
data:
username: cmlubGVyZW4=
password: cmlu
[root@master ~]# kubectl apply -f userlist.yml
secret/userlist configured
[root@master ~]# kubectl describe secret userlist
Name: userlist
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
password: 3 bytes
password.txt: 3 bytes
username: 8 bytes
username.txt: 8 bytes
字段 | 说明 |
---|---|
`apiVersion: v1` | 遵循 Kubernetes v1 API 版本。 |
`kind: Secret` | 资源类型为 Secret,用于存储敏感数据(如密码、令牌等)。 |
`metadata.name` | Secret 的名称为 `userlist`,用于在 Pod 中引用。 |
`type: Opaque` | 通用类型,适用于存储任意键值对(非结构化数据)。 |
`data` | 存储敏感数据的区域,值必须经过 Base64 编码(避免明文传输)。 |
`username: cmlubGVyZW4=` | Base64 编码后的用户名,解码后为 `rinleren`(通过 `echo -n "cmlubGVyZW4=" \| base64 -d` 验证)。 |
`password: cmlu` | Base64 编码后的密码,解码后为 `rin`(通过 `echo -n "cmlu" \| base64 -d` 验证)。 |
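表中的解码可以在命令行直接验证(echo -n 去掉换行,与创建 Secret 时保持一致):

```shell
# 解码 Secret data 中的 Base64 值,还原出原始的用户名和密码
echo -n cmlubGVyZW4= | base64 -d; echo    # rinleren
echo -n cmlu | base64 -d; echo            # rin
```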
Secret的使用方法
将Secret挂载到Volume中
bash
[root@master ~]# kubectl run nginx --image nginx --dry-run=client -o yaml > pod1.yaml
#向固定路径映射
[root@master ~]# cat pod1.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- name: secrets
mountPath: /secret
readOnly: true
volumes:
- name: secrets
secret:
secretName: userlist
[root@master ~]# kubectl apply -f pod1.yaml
pod/nginx created
[root@master ~]# kubectl exec pods/nginx -it -- /bin/bash
root@nginx:/# cat /secret/
cat: /secret/: Is a directory
root@nginx:/# cd /secret/
root@nginx:/secret# ls
password password.txt username username.txt
root@nginx:/secret# cat password
rin
root@nginx:/secret# cat username
rinleren
向指定路径映射 secret 密钥
bash
#向指定路径映射
[root@master testfile]# vim pod2.yml
[root@master testfile]# cat pod2.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx1
name: nginx1
spec:
containers:
- image: nginx
name: nginx1
volumeMounts:
- name: secrets
mountPath: /secret
readOnly: true
volumes:
- name: secrets
secret:
secretName: userlist
items:
- key: username
path: my-users/username
[root@master testfile]# kubectl apply -f pod2.yml
pod/nginx1 created
[root@master testfile]# kubectl exec pods/nginx1 -it -- /bin/bash
root@nginx1:/# cd secret/
root@nginx1:/secret# ls
my-users
root@nginx1:/secret# cd my-users
root@nginx1:/secret/my-users# ls
username
root@nginx1:/secret/my-users# cat username
rinleren
将Secret设置为环境变量
bash
[root@master testfile]# vim pod3.yaml
[root@master testfile]# cat pod3.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: busybox
name: busybox
spec:
containers:
- image: busybox
name: busybox
command:
- /bin/sh
- -c
- env
env:
- name: USERNAME
valueFrom:
secretKeyRef:
name: userlist
key: username
- name: PASS
valueFrom:
secretKeyRef:
name: userlist
key: password
restartPolicy: Never
[root@master testfile]# kubectl apply -f pod3.yaml
pod/busybox created
[root@master testfile]# kubectl logs pods/busybox
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=busybox
MYAPP_V1_SERVICE_HOST=10.107.211.82
MYAPP_V2_SERVICE_HOST=10.96.65.27
SHLVL=1
HOME=/root
MYAPP_V1_SERVICE_PORT=80
MYAPP_V1_PORT=tcp://10.107.211.82:80
MYAPP_V2_SERVICE_PORT=80
MYAPP_V2_PORT=tcp://10.96.65.27:80
MYAPP_V1_PORT_80_TCP_ADDR=10.107.211.82
USERNAME=rinleren
MYAPP_V2_PORT_80_TCP_ADDR=10.96.65.27
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
MYAPP_V1_PORT_80_TCP_PORT=80
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
MYAPP_V1_PORT_80_TCP_PROTO=tcp
MYAPP_V2_PORT_80_TCP_PORT=80
KUBERNETES_PORT_443_TCP_PORT=443
MYAPP_V2_PORT_80_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PROTO=tcp
MYAPP_V1_PORT_80_TCP=tcp://10.107.211.82:80
MYAPP_V2_PORT_80_TCP=tcp://10.96.65.27:80
PASS=rin
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/
存储docker registry的认证信息
建立私有仓库并上传镜像:
登录仓库,并上传镜像:
bash
[root@master testfile]# docker login www.rin.com
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credential-stores
Login Succeeded
上传镜像:
[root@master packages]# docker load -i game2048.tar.gz
[root@master testfile]# docker tag timinglee/game2048:latest www.rin.com/test/game2048:latest
[root@master testfile]# docker push www.rin.com/test/game2048:latest
#建立用于docker认证的secret:
bash
[root@master testfile]# kubectl create secret docker-registry docker-auth --docker-server www.rin.com --docker-username admin --docker-password rin --docker-email rin@rin.com
secret/docker-auth created
bash
[root@master testfile]# vim pod3.yml
[root@master testfile]# cat pod3.yml
apiVersion: v1
kind: Pod
metadata:
labels:
run: game2048
name: game2048
spec:
containers:
- image: www.rin.com/test/game2048:latest
name: game2048
imagePullSecrets:
- name: docker-auth
测试:
[root@master testfile]# kubectl get pods
NAME READY STATUS RESTARTS AGE
[root@master testfile]# kubectl apply -f pod3.yml
pod/game2048 created
[root@master testfile]# kubectl get pods
NAME READY STATUS RESTARTS AGE
game2048 1/1 Running 0 9s
volumes配置管理
- 容器中文件在磁盘上是临时存放的,这给容器中运行的特殊应用程序带来一些问题。
- 当容器崩溃时,kubelet将重新启动容器,容器中的文件将会丢失,因为容器会以干净的状态重建。
- 当在一个 Pod 中同时运行多个容器时,常常需要在这些容器之间共享文件。
- Kubernetes 卷具有明确的生命周期,与使用它的 Pod 相同。
- 卷比 Pod 中运行的任何容器的存活期都长,在容器重新启动时数据也会得到保留。
- 当一个 Pod 不再存在时,卷也将不再存在。
- Kubernetes 可以支持许多类型的卷,Pod 也能同时使用任意数量的卷。
- 卷不能挂载到其他卷,也不能与其他卷有硬链接。Pod 中的每个容器必须独立地指定每个卷的挂载位置。
kubernetes支持的卷的类型
k8s支持的卷的类型如下:
- awsElasticBlockStore、azureDisk、azureFile、cephfs、cinder、configMap、csi
- downwardAPI、emptyDir、fc (fibre channel)、flexVolume、flocker
- gcePersistentDisk、gitRepo (deprecated)、glusterfs、hostPath、iscsi、local
- nfs、persistentVolumeClaim、projected、portworxVolume、quobyte、rbd
- scaleIO、secret、storageos、vsphereVolume
emptyDir卷
功能:
当Pod指定到某个节点上时,首先创建的是一个emptyDir卷,并且只要 Pod 在该节点上运行,卷就一直存在。卷最初是空的。 尽管 Pod 中的容器挂载 emptyDir 卷的路径可能相同也可能不同,但是这些容器都可以读写 emptyDir 卷中相同的文件。 当 Pod 因为某些原因被从节点上删除时,emptyDir 卷中的数据也会永久删除
emptyDir 的使用场景:
- 缓存空间,例如基于磁盘的归并排序。
- 为耗时较长的计算任务提供检查点,以便任务能方便地从崩溃前状态恢复执行。
- 在 Web 服务器容器服务数据时,保存内容管理器容器获取的文件。
bash
[root@master testfile]# cat pod11.yml
apiVersion: v1
kind: Pod
metadata:
name: vol1
spec:
containers:
- image: busyboxplus:latest
name: vm1
command:
- /bin/sh
- -c
- sleep 30000000
volumeMounts:
- mountPath: /cache
name: cache-vol
- image: nginx:latest
name: vm2
volumeMounts:
- mountPath: /usr/share/nginx/html
name: cache-vol
volumes:
- name: cache-vol
emptyDir:
medium: Memory
sizeLimit: 100Mi
[root@master testfile]# kubectl apply -f pod11.yml
pod/vol1 created
#查看pod中卷的使用情况
[root@master testfile]# kubectl describe pods vol1
Name: vol1
Namespace: default
Priority: 0
Service Account: default
Node: node2.rin.com/172.25.254.20
Start Time: Sat, 16 Aug 2025 22:42:14 -0400
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.2.4
IPs:
IP: 10.244.2.4
Containers:
vm1:
Container ID: docker://bcf468127e9dd72a80dc01349205950a29e1500488f303c27769f0f373eadbe8
Image: busyboxplus:latest
Image ID: docker-pullable://busyboxplus@sha256:9d1c242c1fd588a1b8ec4461d33a9ba08071f0cc5bb2d50d4ca49e430014ab06
Port: <none>
Host Port: <none>
Command:
/bin/sh
-c
sleep 30000000
State: Running
Started: Sat, 16 Aug 2025 22:42:15 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/cache from cache-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n89t9 (ro)
vm2:
Container ID: docker://9eb449909f698c7607f5e05f0a1991b3cc83b83dfa0402d7625168813d1f51d6
Image: nginx:latest
Image ID: docker-pullable://nginx@sha256:127262f8c4c716652d0e7863bba3b8c45bc9214a57d13786c854272102f7c945
Port: <none>
Host Port: <none>
State: Running
Started: Sat, 16 Aug 2025 22:42:15 -0400
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/usr/share/nginx/html from cache-vol (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n89t9 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
cache-vol:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: 100Mi
kube-api-access-n89t9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12s default-scheduler Successfully assigned default/vol1 to node2.rin.com
Normal Pulling 12s kubelet Pulling image "busyboxplus:latest"
Normal Pulled 11s kubelet Successfully pulled image "busyboxplus:latest" in 185ms (185ms including waiting). Image size: 12855024 bytes.
Normal Created 11s kubelet Created container vm1
Normal Started 11s kubelet Started container vm1
Normal Pulling 11s kubelet Pulling image "nginx:latest"
Normal Pulled 11s kubelet Successfully pulled image "nginx:latest" in 168ms (168ms including waiting). Image size: 187694648 bytes.
Normal Created 11s kubelet Created container vm2
Normal Started 11s kubelet Started container vm2
测试效果:
[root@master testfile]# kubectl exec -it pods/vol1 -c vm1 -- /bin/sh
/ # cd /cache/
/cache # ls
/cache # curl localhost
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
/cache # echo rinleren > index.html
/cache # curl localhost
rinleren
/cache # dd if=/dev/zero of=bigfile bs=1M count=101
dd: writing 'bigfile': No space left on device
101+0 records in
99+1 records out
1.两个容器共享同一个内存卷,这意味着:
- 在 vm1 的 /cache 目录中创建的文件,会自动出现在 vm2 的 /usr/share/nginx/html 目录中
- 所有数据都存储在内存中,读写速度快,但 Pod 销毁后数据会丢失
- 存储使用受到 100Mi 的限制
2.测试内容
- 在 /cache 目录下创建 index.html 并写入内容 "rinleren"
- 再次访问 localhost,成功返回了写入的内容,验证了两个容器通过 cache-vol 卷共享数据的功能
hostpath卷
功能:
hostPath 卷能将主机节点文件系统上的文件或目录挂载到您的 Pod 中,不会因为pod关闭而被删除
hostPath 的一些用法
- 运行一个需要访问 Docker 引擎内部机制的容器,挂载 /var/lib/docker 路径。
- 在容器中运行 cAdvisor(监控)时,以 hostPath 方式挂载 /sys。
- 允许 Pod 指定给定的 hostPath 在运行 Pod 之前是否应该存在,是否应该创建以及应该以什么方式存在。
hostPath的安全隐患
- 具有相同配置(例如从 podTemplate 创建)的多个 Pod 会由于节点上文件的不同而在不同节点上有不同的行为。
- 当 Kubernetes 按照计划添加资源感知的调度时,这类调度机制将无法考虑由 hostPath 使用的资源。
- 基础主机上创建的文件或目录只能由 root 用户写入。您需要在特权容器中以 root 身份运行进程,或者修改主机上的文件权限以便容器能够写入 hostPath 卷。
bash
[root@master testfile]# vim pod22.yml
[root@master testfile]# cat pod22.yml
apiVersion: v1
kind: Pod
metadata:
name: vol1
spec:
containers:
- image: nginx:latest
name: vm1
volumeMounts:
- mountPath: /usr/share/nginx/html
name: cache-vol
volumes:
- name: cache-vol
hostPath:
path: /pod-data
type: DirectoryOrCreate #当/pod-data目录不存在时自动建立
应用配置:
[root@master testfile]# kubectl apply -f pod22.yml
pod/vol1 created
[root@master testfile]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
vol1 1/1 Running 0 14s 10.244.2.5 node2.rin.com <none> <none>
测试:
[root@master testfile]# curl 10.244.2.5
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
在node2的pod-data目录下编辑默认发布文件:
[root@node2 ~]# ls /
afs boot dev home lib64 mnt pod-data root sbin sys usr
bin config_rhel9.sh etc lib media opt proc run srv tmp var
[root@node2 ~]# echo rinleren > /pod-data/index.html
在master测试:
[root@master testfile]# curl 10.244.2.5
rinleren
#当pod被删除后hostPath不会被清理
[root@master testfile]# kubectl delete -f pod22.yml
pod "vol1" deleted
回到node2再次查看:
[root@node2 ~]# ls /pod-data/
index.html
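DirectoryOrCreate 的行为可以类比 mkdir -p:目录不存在时创建,已存在则直接复用且不报错。下面用本地 shell 做一个类比演示(路径为临时目录,仅作示意,不等同于 kubelet 的实际实现):

```shell
# 类比 hostPath type: DirectoryOrCreate 的语义
d=$(mktemp -d)/pod-data
mkdir -p "$d"        # 第一次:目录不存在,创建
mkdir -p "$d"        # 第二次:目录已存在,不报错
[ -d "$d" ] && echo "directory ready"
```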
nfs卷
NFS 卷允许将一个现有的 NFS 服务器上的目录挂载到 Kubernetes 中的 Pod 中。这对于在多个 Pod 之间共享数据或持久化存储数据非常有用
例如,如果有多个容器需要访问相同的数据集,或者需要将容器中的数据持久保存到外部存储,NFS 卷可以提供一种方便的解决方案。
部署一台nfs共享主机并在所有k8s节点中安装nfs-utils
bash
[root@master testfile]# for i in 100 10 20 ;do ssh root@172.25.254.$i "dnf install nfs-utils -y";done
[root@master testfile]# for i in 100 10 20 ;do ssh root@172.25.254.$i "systemctl enable --now nfs-server.service";done
root@172.25.254.100's password:
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.
root@172.25.254.10's password:
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.
root@172.25.254.20's password:
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.
#部署nfs主机
[root@harbor harbor]# dnf install nfs-utils -y
[root@harbor harbor]# systemctl enable --now nfs-server.service
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /usr/lib/systemd/system/nfs-server.service.
[root@harbor harbor]# cat /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@harbor harbor]# mkdir -p /nfsdata
[root@harbor harbor]# chmod 777 /nfsdata
[root@harbor harbor]# exportfs -rv
exporting *:/nfsdata
[root@harbor harbor]# showmount -e
Export list for harbor.rin.com:
/nfsdata *
部署nfs卷
bash
[root@master testfile]# vim pod333.yml
[root@master testfile]# cat pod333.yml
apiVersion: v1
kind: Pod
metadata:
name: vol1
spec:
containers:
- image: nginx:latest
name: vm1
volumeMounts:
- mountPath: /usr/share/nginx/html
name: cache-vol
volumes:
- name: cache-vol
nfs:
server: 172.25.254.200
path: /nfsdata
[root@master testfile]# kubectl apply -f pod333.yml
pod/vol1 created
测试:
[root@master testfile]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-v1-7479d6c54d-djpv5 1/1 Running 0 165m 10.244.1.2 node1.rin.com <none> <none>
myapp-v2-7cd6d597d-l4nz4 1/1 Running 0 165m 10.244.1.5 node1.rin.com <none> <none>
vol1 1/1 Running 0 26s 10.244.2.9 node2.rin.com <none> <none>
web-0 1/1 Running 0 142m 10.244.2.2 node2.rin.com <none> <none>
web-1 1/1 Running 0 141m 10.244.1.6 node1.rin.com <none> <none>
[root@master testfile]# curl 10.244.2.9
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>
在nfs主机添加默认发布文件:
[root@harbor harbor]# echo rinleren > /nfsdata/index.html
再回到master测试:
[root@master testfile]# curl 10.244.2.9
rinleren
PersistentVolume持久卷
静态持久卷pv与静态持久卷声明pvc
PersistentVolume(持久卷,简称PV)
- pv是集群内由管理员提供的网络存储的一部分。
- PV也是集群中的一种资源,是一种volume插件,但是它的生命周期却是和使用它的Pod相互独立的。
- PV这个API对象,捕获了诸如NFS、ISCSI或其他云存储系统的实现细节。
- pv有两种提供方式:静态和动态
  - 静态PV:集群管理员创建多个PV,它们携带着真实存储的详细信息,存在于Kubernetes API中,并可用于存储使用
  - 动态PV:当管理员创建的静态PV都不匹配用户的PVC时,集群可能会尝试专门地供给volume给PVC。这种供给基于StorageClass
PersistentVolumeClaim(持久卷声明,简称PVC)
- 是用户的一种存储请求
- 它和Pod类似,Pod消耗Node资源,而PVC消耗PV资源
- Pod能够请求特定的资源(如CPU和内存);PVC能够请求指定大小和访问模式的持久卷配置
- PVC与PV的绑定是一对一的映射。如果没找到匹配的PV,那么PVC会无限期地处于unbound未绑定状态
volumes访问模式
- ReadWriteOnce -- 该volume只能被单个节点以读写的方式映射
- ReadOnlyMany -- 该volume可以被多个节点以只读方式映射
- ReadWriteMany -- 该volume可以被多个节点以读写的方式映射
- 在命令行中,访问模式可以简写为:
  - RWO -- ReadWriteOnce
  - ROX -- ReadOnlyMany
  - RWX -- ReadWriteMany
volumes回收策略
- Retain:保留,需要手动回收
- Recycle:回收,自动删除卷中数据(在当前版本中已经废弃)
- Delete:删除,相关联的存储资产,如AWS EBS、GCE PD、Azure Disk或OpenStack Cinder卷都会被删除
注意:
!NOTE
只有NFS和HostPath支持回收(Recycle)。
AWS EBS、GCE PD、Azure Disk 或 OpenStack Cinder 卷支持删除(Delete)操作。
volumes状态说明
- Available:卷是一个空闲资源,尚未绑定到任何申领
- Bound:该卷已经绑定到某申领
- Released:所绑定的申领已被删除,但是关联存储资源尚未被集群回收
- Failed:卷的自动回收操作失败
静态pv实例:
配置分析
1. PV 配置(pv.yml
)
PV 名称 | 容量 | 访问模式(accessModes) | 存储类 | NFS 路径 | 回收策略 |
---|---|---|---|---|---|
pv1 | 5Gi | ReadWriteOnce(RWO) | storageClassName: nfs | /nfsdata/pv1 | Retain |
pv2 | 15Gi | ReadWriteMany(RWX) | storageClassName: nfs | /nfsdata/pv2 | Retain |
pv3 | 25Gi | ReadOnlyMany(ROX) | storageClassName: nfs | /nfsdata/pv3 | Retain |
- 访问模式说明:
  - ReadWriteOnce(RWO):仅允许单个节点挂载读写
  - ReadWriteMany(RWX):允许多个节点同时挂载读写(适合多 Pod 共享数据)
  - ReadOnlyMany(ROX):允许多个节点同时挂载只读
- 回收策略:Retain 表示删除 PVC 后,PV 会保留数据并变为 Released 状态,需手动清理后才能重新使用。
2. PVC 配置(pvc.yml
)
PVC 名称 | 请求容量 | 访问模式 | 存储类 | 匹配逻辑(与 PV) |
---|---|---|---|---|
pvc1 | 1Gi | ReadWriteOnce | storageClassName: nfs | 匹配同存储类、同访问模式且容量 ≥1Gi 的 PV(pv1 符合) |
pvc2 | 10Gi | ReadWriteMany | storageClassName: nfs | 匹配同存储类、同访问模式且容量 ≥10Gi 的 PV(pv2 符合) |
pvc3 | 15Gi | ReadOnlyMany | storageClassName: nfs | 匹配同存储类、同访问模式且容量 ≥15Gi 的 PV(pv3 符合) |
- PVC 与 PV 的匹配规则:
  - 必须拥有相同的 storageClassName
  - PVC 的 accessModes 必须是 PV 支持的访问模式的子集
  - PV 的容量必须 ≥ PVC 请求的容量
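其中容量维度的匹配可以用一个简化的 shell 函数来演示(只比较 Gi 数值;真实的绑定还要同时校验 storageClassName 和 accessModes,函数名 pv_fits 为示例):

```shell
# 简化演示:PV 容量(Gi)>= PVC 请求容量(Gi)才可能绑定
pv_fits() { [ "$1" -ge "$2" ] && echo yes || echo no; }
pv_fits 5 1     # pv1(5Gi)  vs pvc1(1Gi)  -> yes
pv_fits 15 10   # pv2(15Gi) vs pvc2(10Gi) -> yes
pv_fits 5 10    # 容量不足           -> no
```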
bash
#在nfs主机中建立实验目录
[root@harbor harbor]# mkdir /nfsdata/pv{1..3}
[root@harbor harbor]# ls /nfsdata/
index.html pv1 pv2 pv3
#编写创建pv的yml文件,pv是集群资源,不在任何namespace中
在master编写
[root@master testfile]# vim pv.yml
[root@master testfile]# cat pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: nfs
nfs:
path: /nfsdata/pv1
server: 172.25.254.200
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv2
spec:
capacity:
storage: 15Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: nfs
nfs:
path: /nfsdata/pv2
server: 172.25.254.200
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv3
spec:
capacity:
storage: 25Gi
volumeMode: Filesystem
accessModes:
- ReadOnlyMany
persistentVolumeReclaimPolicy: Retain
storageClassName: nfs
nfs:
path: /nfsdata/pv3
server: 172.25.254.200
[root@master testfile]# kubectl apply -f pv.yml
persistentvolume/pv1 created
persistentvolume/pv2 created
persistentvolume/pv3 created
[root@master testfile]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pv1 5Gi RWO Retain Available nfs <unset> 7s
pv2 15Gi RWX Retain Available nfs <unset> 7s
pv3 25Gi ROX Retain Available nfs <unset> 7s
#建立pvc,pvc是pv使用的申请,需要保证和pod在一个namespace中
[root@master testfile]# cat pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc1
spec:
storageClassName: nfs
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc2
spec:
storageClassName: nfs
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc3
spec:
storageClassName: nfs
accessModes:
- ReadOnlyMany
resources:
requests:
storage: 15Gi
[root@master testfile]# kubectl apply -f pvc.yml
persistentvolumeclaim/pvc1 created
persistentvolumeclaim/pvc2 created
persistentvolumeclaim/pvc3 created
[root@master testfile]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
pvc1 Bound pv1 5Gi RWO nfs <unset> 31s
pvc2 Bound pv2 15Gi RWX nfs <unset> 31s
pvc3 Bound pv3 25Gi ROX nfs <unset> 31s
[root@master testfile]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pv1 5Gi RWO Retain Bound default/pvc1 nfs <unset> 4m
pv2 15Gi RWX Retain Bound default/pvc2 nfs <unset> 4m
pv3 25Gi ROX Retain Bound default/pvc3 nfs <unset> 4m
#在其他namespace中无法应用
[root@master testfile]# kubectl -n kube-system get pvc
No resources found in kube-system namespace.
在pod中使用pvc
bash
[root@master testfile]# vim pod-1.yml
[root@master testfile]# cat pod-1.yml
apiVersion: v1
kind: Pod
metadata:
name: rin
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- mountPath: /usr/share/nginx/html
name: vol1
volumes:
- name: vol1
persistentVolumeClaim:
claimName: pvc1
[root@master testfile]# kubectl apply -f pod-1.yml
pod/rin created
[root@master testfile]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myapp-v1-7479d6c54d-djpv5 1/1 Running 0 5h54m 10.244.1.2 node1.rin.com <none> <none>
myapp-v2-7cd6d597d-l4nz4 1/1 Running 0 5h54m 10.244.1.5 node1.rin.com <none> <none>
rin 1/1 Running 0 17s 10.244.2.10 node2.rin.com <none> <none>
vol1 1/1 Running 0 3h8m 10.244.2.9 node2.rin.com <none> <none>
web-0 1/1 Running 0 5h30m 10.244.2.2 node2.rin.com <none> <none>
web-1 1/1 Running 0 5h29m 10.244.1.6 node1.rin.com <none> <none>
来到nfs进行编辑:
[root@harbor harbor]# echo rinleren > /nfsdata/pv1/index.html
再回到master:
[root@master testfile]# kubectl exec -it pods/rin -- /bin/bash
root@rin:/# cd /usr/share/nginx/html/
root@rin:/usr/share/nginx/html# ls
index.html
root@rin:/usr/share/nginx/html# curl localhost
rinleren
存储类storageclass
StorageClass的属性
属性说明参见官方文档《存储类 | Kubernetes》。
Provisioner(存储分配器):用来决定使用哪个卷插件分配 PV,该字段必须指定。可以指定内部分配器,也可以指定外部分配器。外部分配器的代码地址为: kubernetes-incubator/external-storage,其中包括NFS和Ceph等。
Reclaim Policy(回收策略):通过reclaimPolicy字段指定创建的Persistent Volume的回收策略,回收策略包括:Delete 或者 Retain,没有指定默认为Delete。
存储分配器NFS Client Provisioner
源码地址:https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
- NFS Client Provisioner是一个automatic provisioner,使用NFS作为存储,自动创建PV和对应的PVC,本身不提供NFS存储,需要外部先有一套NFS存储服务。
- PV以 ${namespace}-${pvcName}-${pvName} 的命名格式提供(在NFS服务器上)
- PV回收的时候以 archived-${namespace}-${pvcName}-${pvName} 的命名格式提供(在NFS服务器上)
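命名格式可以用变量代入直观地看一下(namespace、pvcName、pvName 的取值仅为示例):

```shell
# NFS Client Provisioner 在 NFS 服务器上生成的目录命名格式
namespace=default; pvcName=test-claim; pvName=pv0001
echo "${namespace}-${pvcName}-${pvName}"             # 正常提供时的目录名
echo "archived-${namespace}-${pvcName}-${pvName}"    # 回收归档后的目录名
```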
部署NFS Client Provisioner
创建sa并授权
bash
[root@master packages]# vim rbac.yml
[root@master packages]# cat rbac.yml
apiVersion: v1
kind: Namespace
metadata:
  name: nfs-client-provisioner
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-client-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
#查看rbac信息
[root@master packages]# kubectl apply -f rbac.yml
namespace/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
[root@master packages]# kubectl -n nfs-client-provisioner get sa
NAME SECRETS AGE
default 0 16s
nfs-client-provisioner 0 16s
部署应用
创建项目仓库:

bash
#推送镜像
docker tag registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 www.rin.com/sig-storage/nfs-subdir-external-provisioner:v4.0.2
[root@master packages]# docker push www.rin.com/sig-storage/nfs-subdir-external-provisioner:v4.0.2
[root@master packages]# vim deployment.yml
[root@master packages]# cat deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.25.254.200
            - name: NFS_PATH
              value: /nfsdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.25.254.200
            path: /nfsdata
[root@master packages]# kubectl apply -f deployment.yml
deployment.apps/nfs-client-provisioner created
[root@master packages]# kubectl -n nfs-client-provisioner get deployments.apps nfs-client-provisioner
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-client-provisioner 1/1 1 1 13s
创建存储类
bash
[root@master packages]# vim class.yaml
[root@master packages]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
[root@master packages]# kubectl apply -f class.yaml
storageclass.storage.k8s.io/nfs-client created
[root@master packages]# kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 5s
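如果希望 pvc 不写 storageClassName 也能使用该存储类,可以把它设为默认存储类。下面是一个假设性的改写示例,用到的是 k8s 标准注解:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # 设为默认存储类
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
```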
创建pvc
bash
[root@master packages]# vim pvc.yml
[root@master packages]# cat pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1G
[root@master packages]# kubectl apply -f pvc.yml
persistentvolumeclaim/test-claim created
[root@master packages]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
pvc1 Bound pv1 5Gi RWO nfs <unset> 42m
pvc2 Bound pv2 15Gi RWX nfs <unset> 42m
pvc3 Bound pv3 25Gi ROX nfs <unset> 42m
test-claim Bound pvc-03939cb4-8f61-4b38-9c55-9d3fe412b411 1G RWX nfs-client <unset> 8s
来到nfs存储目录查看存储情况:
[root@harbor /]# ls /nfsdata/
default-test-claim-pvc-03939cb4-8f61-4b38-9c55-9d3fe412b411 index.html pv1 pv2 pv3
如果将pvc删除,存储的内容也会相应删除:
[root@master packages]# kubectl delete -f pvc.yml
persistentvolumeclaim "test-claim" deleted
[root@harbor /]# ls /nfsdata/
index.html pv1 pv2 pv3
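删除 pvc 即清空数据,是因为 class.yaml 中 archiveOnDelete 为 "false";若希望删除 pvc 后数据归档保留,可把该参数改为:

```yaml
parameters:
  archiveOnDelete: "true"   # 删除 pvc 后,目录更名为 archived-<namespace>-<pvcName>-<pvName> 保留数据
```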
statefulset控制器
功能特性
-
StatefulSet是为了解决有状态服务的管理问题而设计的
-
StatefulSet将应用状态抽象成了两种情况:
-
拓扑状态:应用实例必须按照某种顺序启动。新创建的Pod必须和原来Pod的网络标识一样
-
存储状态:应用的多个实例分别绑定了不同存储数据。
-
StatefulSet给所有的Pod进行了编号,编号规则是:(statefulset名称)-(序号),从0开始。
-
Pod被删除后重建,重建Pod的网络标识也不会改变,Pod的拓扑状态按照Pod的"名字+编号"的方式固定下来,并且为每个Pod提供了一个固定且唯一的访问入口,Pod对应的DNS记录。
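上述"固定且唯一的访问入口"就是无头服务下的 pod DNS 记录,其完整域名格式为 k8s 标准的 <pod>.<svc>.<namespace>.svc.cluster.local(集群域名假设为默认的 cluster.local),可以用 shell 拼出来直观感受:

```bash
# 生成 StatefulSet 各 pod 的完整 DNS 记录(仅字符串演示)
statefulset="web"
svc="nginx-svc"
namespace="default"
for i in 0 1 2; do
  echo "${statefulset}-${i}.${svc}.${namespace}.svc.cluster.local"
done
# 同命名空间内可以简写为 web-0.nginx-svc 直接访问
fqdn0="${statefulset}-0.${svc}.${namespace}.svc.cluster.local"
```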
StatefulSet的组成部分
-
Headless Service:用来定义pod网络标识,生成可解析的DNS记录
-
volumeClaimTemplates:pvc模板,指定pvc的名称和大小,自动创建pvc,且pvc由存储类供应
-
StatefulSet:管理pod的控制器
构建方法
bash
#建立无头服务
[root@master packages]# vim headless.yml
[root@master packages]# cat headless.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
[root@master packages]# kubectl apply -f headless.yml
service/nginx-svc created
#建立statefulset
[root@master packages]# vim statefulset.yml
[root@master packages]# cat statefulset.yml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx-svc"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        storageClassName: nfs-client
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
[root@master packages]# kubectl apply -f statefulset.yml
statefulset.apps/web created
[root@master packages]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 19s
web-1 1/1 Running 0 16s
web-2 0/1 ContainerCreating 0 3s
[root@harbor /]# ls /nfsdata/
default-www-web-0-pvc-579542e2-ea9b-4962-a266-7613e4abd231
default-www-web-1-pvc-9d6261cf-d191-4b53-b0ff-742349f37360
default-www-web-2-pvc-e4c70a90-fe1e-44b7-8760-2e45df6bf28a
测试:
bash
#为每个pod建立index.html文件
[root@harbor nfsdata]# echo web-0 > default-www-web-0-pvc-579542e2-ea9b-4962-a266-7613e4abd231/index.html
[root@harbor nfsdata]# echo web-1 > default-www-web-1-pvc-9d6261cf-d191-4b53-b0ff-742349f37360/index.html
[root@harbor nfsdata]# echo web-2 > default-www-web-2-pvc-e4c70a90-fe1e-44b7-8760-2e45df6bf28a/index.html
#建立测试pod访问web-0~2
[root@master packages]# kubectl run -it testpod --image busyboxplus
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
/ # curl web-2.nginx-svc
web-2
#删掉重新建立statefulset
[root@master testfile]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@master testfile]# kubectl apply -f statefulset.yml
statefulset.apps/web created
#访问依然不变
[root@master testfile]# kubectl attach testpod -c testpod -i -t
If you don't see a command prompt, try pressing enter.
/ # curl web-0.nginx-svc
web-0
/ # curl web-1.nginx-svc
web-1
/ # curl web-2.nginx-svc
web-2
statefulset的弹缩
首先,对于想要弹缩的StatefulSet,需要先确认该应用是否能够安全弹缩
用命令改变副本数
bash
$ kubectl scale statefulsets <stateful-set-name> --replicas=<new-replicas>
通过编辑配置改变副本数
bash
$ kubectl edit statefulsets.apps <stateful-set-name>
#statefulset有序回收
bash
[root@master testfile]# kubectl scale statefulset web --replicas 0
statefulset.apps/web scaled
[root@master testfile]# kubectl delete -f statefulset.yml
statefulset.apps "web" deleted
[root@master testfile]# kubectl delete pvc --all
persistentvolumeclaim "pvc1" deleted
persistentvolumeclaim "pvc2" deleted
persistentvolumeclaim "pvc3" deleted
persistentvolumeclaim "www-web-0" deleted
persistentvolumeclaim "www-web-1" deleted
persistentvolumeclaim "www-web-2" deleted
k8s网络通信
k8s通信整体架构
-
k8s通过CNI接口接入其他插件来实现网络通讯。目前比较流行的插件有flannel,calico等
-
CNI插件的配置文件位置:# cat /etc/cni/net.d/10-flannel.conflist
-
插件使用的解决方案如下
-
虚拟网桥,虚拟网卡,多个容器共用一个虚拟网卡进行通信。
-
多路复用:MacVLAN,多个容器共用一个物理网卡进行通信。
-
硬件交换:SR-IOV,一个物理网卡可以虚拟出多个接口,这种方式性能最好。
-
-
容器间通信:
-
同一个pod内的多个容器间的通信,通过lo即可实现pod之间的通信
-
同一节点的pod之间通过cni网桥转发数据包。
-
不同节点的pod之间的通信需要网络插件支持
-
-
pod和service通信:通过iptables或ipvs实现。ipvs不能完全取代iptables,因为ipvs只负责负载均衡,SNAT等NAT转换仍需由iptables完成
-
pod和外网通信:iptables的MASQUERADE
-
Service与集群外部客户端的通信;(ingress、nodeport、loadbalancer)
flannel网络插件
插件组成:
插件 | 功能 |
---|---|
VXLAN | 即Virtual Extensible LAN(虚拟可扩展局域网),是Linux内核本身支持的一种网络虚拟化技术。VXLAN可以完全在内核态实现封装和解封装工作,从而通过"隧道"机制,构建出覆盖网络(Overlay Network) |
VTEP | VXLAN Tunnel End Point(虚拟隧道端点),在Flannel中 VNI的默认值是1,这也是为什么宿主机的VTEP设备都叫flannel.1的原因 |
Cni0 | 网桥设备,每创建一个pod都会创建一对 veth pair。其中一端是pod中的eth0,另一端是Cni0网桥中的端口(网卡) |
Flannel.1 | TUN设备(虚拟网卡),用来进行 vxlan 报文的处理(封包和解包)。不同node之间的pod数据流量都从overlay设备以隧道的形式发送到对端 |
Flanneld | flannel在每个主机中运行flanneld作为agent,它会为所在主机从集群的网络地址空间中,获取一个小的网段subnet,本主机内所有容器的IP地址都将从中分配。同时Flanneld监听K8s集群数据库,为flannel.1设备提供封装数据时必要的mac、ip等网络数据信息 |
flannel跨主机通信原理

-
当容器发送IP包,通过veth pair 发往cni网桥,再路由到本机的flannel.1设备进行处理。
-
VTEP设备之间通过二层数据帧进行通信,源VTEP设备收到原始IP包后,在上面加上一个目的MAC地址,封装成一个内部数据帧,发送给目的VTEP设备。
-
内部数据帧并不能直接在宿主机的二层网络中传输,Linux内核还需要把它进一步封装成宿主机的一个普通数据帧,承载着内部数据帧通过宿主机的eth0进行传输。
-
Linux会在内部数据帧前面,加上一个VXLAN头,VXLAN头里有一个重要的标志叫VNI,它是VTEP识别某个数据帧是不是应该归自己处理的重要标识。
-
flannel.1设备只知道另一端flannel.1设备的MAC地址,却不知道对应的宿主机地址是什么。在linux内核里面,网络设备进行转发的依据,来自FDB的转发数据库,这个flannel.1网桥对应的FDB信息,是由flanneld进程维护的。
-
linux内核在IP包前面再加上二层数据帧头,把目标节点的MAC地址填进去,MAC地址从宿主机的ARP表获取。
-
此时flannel.1设备就可以把这个数据帧从eth0发出去,再经过宿主机网络来到目标节点的eth0设备。目标主机内核网络栈会发现这个数据帧有VXLAN Header,并且VNI为1,Linux内核会对它进行拆包,拿到内部数据帧,根据VNI的值,交给本机flannel.1设备处理,flannel.1拆包,根据路由表发往cni网桥,最后到达目标容器。
bash
# 默认网络通信路由
[root@master ~]# ip r
default via 172.25.254.2 dev eth0 proto static metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.0/24 via 10.244.2.0 dev flannel.1 onlink
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
172.25.254.0/24 dev eth0 proto kernel scope link src 172.25.254.100 metric 100
# 桥接转发数据库
[root@master ~]# bridge fdb
01:00:5e:00:00:01 dev eth0 self permanent
33:33:00:00:00:01 dev eth0 self permanent
01:00:5e:00:00:fb dev eth0 self permanent
33:33:ff:26:8c:66 dev eth0 self permanent
33:33:00:00:00:fb dev eth0 self permanent
33:33:00:00:00:01 dev docker0 self permanent
01:00:5e:00:00:6a dev docker0 self permanent
33:33:00:00:00:6a dev docker0 self permanent
01:00:5e:00:00:01 dev docker0 self permanent
01:00:5e:00:00:fb dev docker0 self permanent
02:42:29:42:60:15 dev docker0 vlan 1 master docker0 permanent
02:42:29:42:60:15 dev docker0 master docker0 permanent
3a:fb:ed:99:25:d2 dev flannel.1 dst 172.25.254.20 self permanent
0e:78:84:a8:2f:40 dev flannel.1 dst 172.25.254.10 self permanent
33:33:00:00:00:01 dev cni0 self permanent
01:00:5e:00:00:6a dev cni0 self permanent
33:33:00:00:00:6a dev cni0 self permanent
01:00:5e:00:00:01 dev cni0 self permanent
33:33:ff:cd:7b:34 dev cni0 self permanent
01:00:5e:00:00:fb dev cni0 self permanent
33:33:00:00:00:fb dev cni0 self permanent
5e:e8:e3:cd:7b:34 dev cni0 vlan 1 master cni0 permanent
5e:e8:e3:cd:7b:34 dev cni0 master cni0 permanent
fa:f5:a5:ad:48:fd dev veth847a57bd master cni0
62:e9:33:b3:60:1f dev veth847a57bd vlan 1 master cni0 permanent
62:e9:33:b3:60:1f dev veth847a57bd master cni0 permanent
33:33:00:00:00:01 dev veth847a57bd self permanent
01:00:5e:00:00:01 dev veth847a57bd self permanent
33:33:ff:b3:60:1f dev veth847a57bd self permanent
33:33:00:00:00:fb dev veth847a57bd self permanent
de:4a:63:5c:97:30 dev veth237d7278 master cni0
62:94:ce:11:2f:f5 dev veth237d7278 vlan 1 master cni0 permanent
62:94:ce:11:2f:f5 dev veth237d7278 master cni0 permanent
33:33:00:00:00:01 dev veth237d7278 self permanent
01:00:5e:00:00:01 dev veth237d7278 self permanent
33:33:ff:11:2f:f5 dev veth237d7278 self permanent
33:33:00:00:00:fb dev veth237d7278 self permanent
# arp列表
[root@master ~]# arp -n
Address HWtype HWaddress Flags Mask Iface
10.244.0.2 ether fa:f5:a5:ad:48:fd C cni0
172.25.254.20 ether 00:0c:29:d3:f3:c3 C eth0
172.25.254.2 ether 00:50:56:ed:f3:75 C eth0
172.25.254.10 ether 00:0c:29:b2:d9:39 C eth0
172.25.254.1 ether 00:50:56:c0:00:08 C eth0
10.244.1.0 ether 0e:78:84:a8:2f:40 CM flannel.1
10.244.0.3 ether de:4a:63:5c:97:30 C cni0
10.244.2.0 ether 3a:fb:ed:99:25:d2 CM flannel.1
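结合上面的路由表可以看出流量的分流逻辑:本机 pod 网段走 cni0,其他节点的 pod 网段走 flannel.1 隧道。下面用一个简化的 shell 函数演示这种按目的网段选设备的判断(仅为直观演示,网段取自本文环境,并非真实的最长前缀匹配实现):

```bash
# 根据目的 IP 判断数据包走哪个设备(简化演示,仅处理本文出现的 /24 网段)
route_dev() {
  case "$1" in
    10.244.0.*) echo "cni0" ;;                   # 本机 pod 网段,走 cni0 网桥
    10.244.1.*|10.244.2.*) echo "flannel.1" ;;   # 其他节点 pod 网段,走 VXLAN 隧道
    172.25.254.*) echo "eth0" ;;                 # 宿主机网段,直接走物理网卡
    *) echo "eth0" ;;                            # 默认路由
  esac
}
route_dev 10.244.1.6    # 另一节点上的 pod
route_dev 10.244.0.2    # 本机 pod
```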
calico网络插件
官网:
Installing on on-premises deployments | Calico Documentation
calico简介:
-
纯三层的转发,中间没有任何的NAT和overlay,转发效率最好。
-
Calico 仅依赖三层路由可达。Calico 较少的依赖性使它能适配所有 VM、Container、白盒或者混合环境场景。
calico网络架构

-
Felix:监听etcd中心存储以获取事件。用户创建pod后,Felix负责将其网卡、IP、MAC都设置好,然后在内核的路由表里写一条记录,注明这个IP应该走这张网卡。同样,如果用户指定了隔离策略,Felix也会将该策略写入ACL中,以实现隔离。
-
BIRD:一个标准的路由程序,它会从内核里面获取哪一些IP的路由发生了变化,然后通过标准BGP的路由协议扩散到整个其他的宿主机上,让外界都知道这个IP在这里,路由的时候到这里
部署calico
删除flannel插件
bash
[root@master k8s-tools]# kubectl delete -f kube-flannel.yml
namespace "kube-flannel" deleted
serviceaccount "flannel" deleted
clusterrole.rbac.authorization.k8s.io "flannel" deleted
clusterrolebinding.rbac.authorization.k8s.io "flannel" deleted
configmap "kube-flannel-cfg" deleted
daemonset.apps "kube-flannel-ds" deleted
删除所有节点上flannel配置文件,避免冲突
bash
[root@master k8s-tools]# rm -rf /etc/cni/net.d/10-flannel.conflist
[root@node1 ~]# rm -rf /etc/cni/net.d/10-flannel.conflist
[root@node2 ~]# rm -rf /etc/cni/net.d/10-flannel.conflist
下载部署文件
bash
[root@k8s-master calico]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.1/manifests/calico-typha.yaml -o calico.yaml
加载镜像:
bash
[root@master calico]# docker load -i calico-3.28.1.tar
6b2e64a0b556: Loading layer [==================================================>] 3.69MB/3.69MB
38ba74eb8103: Loading layer [==================================================>] 205.4MB/205.4MB
5f70bf18a086: Loading layer [==================================================>] 1.024kB/1.024kB
Loaded image: calico/cni:v3.28.1
3831744e3436: Loading layer [==================================================>] 366.9MB/366.9MB
Loaded image: calico/node:v3.28.1
4f27db678727: Loading layer [==================================================>] 75.59MB/75.59MB
Loaded image: calico/kube-controllers:v3.28.1
993f578a98d3: Loading layer [==================================================>] 67.61MB/67.61MB
Loaded image: calico/typha:v3.28.1
创建项目仓库目录:

下载镜像上传至仓库:
bash
[root@master calico]# docker tag calico/cni:v3.28.1 www.rin.com/calico/cni:v3.28.1
[root@master calico]# docker tag calico/node:v3.28.1 www.rin.com/calico/node:v3.28.1
[root@master calico]# docker tag calico/kube-controllers:v3.28.1 www.rin.com/calico/kube-controllers:v3.28.1
[root@master calico]# docker tag calico/typha:v3.28.1 www.rin.com/calico/typha:v3.28.1
[root@master calico]# docker push www.rin.com/calico/cni:v3.28.1
The push refers to repository [www.rin.com/calico/cni]
5f70bf18a086: Mounted from ingress-nginx/controller
38ba74eb8103: Pushed
6b2e64a0b556: Pushed
v3.28.1: digest: sha256:4bf108485f738856b2a56dbcfb3848c8fb9161b97c967a7cd479a60855e13370 size: 946
[root@master calico]# docker push www.rin.com/calico/node:v3.28.1
The push refers to repository [www.rin.com/calico/node]
3831744e3436: Pushed
v3.28.1: digest: sha256:f72bd42a299e280eed13231cc499b2d9d228ca2f51f6fd599d2f4176049d7880 size: 530
[root@master calico]# docker push www.rin.com/calico/kube-controllers:v3.28.1
The push refers to repository [www.rin.com/calico/kube-controllers]
4f27db678727: Pushed
6b2e64a0b556: Mounted from calico/cni
v3.28.1: digest: sha256:8579fad4baca75ce79644db84d6a1e776a3c3f5674521163e960ccebd7206669 size: 740
[root@master calico]# docker push www.rin.com/calico/typha:v3.28.1
The push refers to repository [www.rin.com/calico/typha]
993f578a98d3: Pushed
6b2e64a0b556: Mounted from calico/kube-controllers
v3.28.1: digest: sha256:093ee2e785b54c2edb64dc68c6b2186ffa5c47aba32948a35ae88acb4f30108f size: 740
更改yml设置
bash
批量替换镜像路径:
%s/image: docker\.io\/calico\//image: calico\//g
[root@master calico]# grep -n "image:" calico.yaml
4835: image: www.rin.com/ calico/cni:v3.28.1
4863: image: www.rin.com/calico/cni:v3.28.1
4906: image: www.rin.com/calico/node:v3.28.1
4932: image: www.rin.com/calico/node:v3.28.1
5160: image: www.rin.com/calico/kube-controllers:v3.28.1
5249: - image: www.rin.com/calico/typha:v3.28.1
4970 - name: CALICO_IPV4POOL_IPIP
4971 value: "Never"
4999 - name: CALICO_IPV4POOL_CIDR
5000 value: "10.244.0.0/16"
5001 - name: CALICO_AUTODETECTION_METHOD
5002 value: "interface=eth0"
[root@master calico]# kubectl apply -f calico.yaml
[root@master calico]# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-858ff477c9-fz447 0/1 Running 2 (16s ago) 97s
calico-node-cfwnm 0/1 Init:InvalidImageName 0 97s
calico-node-l8wj2 0/1 Init:InvalidImageName 0 97s
calico-node-nkrwz 0/1 Init:InvalidImageName 0 97s
#注:Init:InvalidImageName 是因为上面 grep 结果中 4835 行的镜像路径多了一个空格(www.rin.com/ calico/cni:v3.28.1),改为 www.rin.com/calico/cni:v3.28.1 后重新 apply 即可恢复
calico-typha-999659975-5wkbb 1/1 Running 0 97s
coredns-554b7d9f44-vtfv7 1/1 Running 4 (43h ago) 5d1h
coredns-554b7d9f44-x7nhg 1/1 Running 4 (43h ago) 5d1h
etcd-master.rin.com 1/1 Running 4 (43h ago) 5d1h
kube-apiserver-master.rin.com 1/1 Running 4 (43h ago) 5d1h
kube-controller-manager-master.rin.com 1/1 Running 4 (43h ago) 5d1h
kube-proxy-45wxm 1/1 Running 4 (43h ago) 5d
kube-proxy-5b7fm 1/1 Running 3 (43h ago) 5d
kube-proxy-jrdvn 1/1 Running 3 (43h ago) 5d
kube-scheduler-master.rin.com 1/1 Running 4 (43h ago) 5d1h
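上面 vim 中的 %s 批量替换,也可以用 sed 一条命令完成。下面在一个临时文件上演示等价写法(示例文件内容为假设,替换目标直接写成私有仓库前缀):

```bash
# 用 sed 演示与 vim 中 %s 等价的镜像路径批量替换(在临时文件上操作)
tmpf=$(mktemp)
cat > "$tmpf" <<'EOF'
          image: docker.io/calico/cni:v3.28.1
          image: docker.io/calico/node:v3.28.1
EOF
# 把 docker.io/calico/ 前缀替换成私有仓库 www.rin.com/calico/
sed -i 's#image: docker\.io/calico/#image: www.rin.com/calico/#g' "$tmpf"
grep "image:" "$tmpf"
```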
测试:
bash
[root@master calico]# kubectl run web --image myapp:v1
pod/web created
[root@master calico]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web 1/1 Running 0 5s 10.244.161.0 node2.rin.com <none> <none>
[root@master calico]# curl 10.244.161.0
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
k8s调度(Scheduling)
调度在Kubernetes中的作用
-
调度是指将未调度的Pod自动分配到集群中的节点的过程
-
调度器通过 kubernetes 的 watch 机制来发现集群中新创建且尚未被调度到 Node 上的 Pod
-
调度器会将发现的每一个未调度的 Pod 调度到一个合适的 Node 上来运行
调度原理:
-
创建Pod
-
用户通过Kubernetes API创建Pod对象,并在其中指定Pod的资源需求、容器镜像等信息。
-
调度器监视Pod
-
Kubernetes调度器监视集群中的未调度Pod对象,并为其选择最佳的节点。
-
选择节点
-
调度器通过算法选择最佳的节点,并将Pod绑定到该节点上。调度器选择节点的依据包括节点的资源使用情况、Pod的资源需求、亲和性和反亲和性等。
-
绑定Pod到节点
-
调度器将Pod和节点之间的绑定信息保存在etcd数据库中,以便节点可以获取Pod的调度信息。
-
节点启动Pod
-
节点定期检查etcd数据库中的Pod调度信息,并启动相应的Pod。如果节点故障或资源不足,调度器会重新调度Pod,并将其绑定到其他节点上运行。
调度器种类
-
默认调度器(Default Scheduler):
-
是Kubernetes中的默认调度器,负责对新创建的Pod进行调度,并将Pod调度到合适的节点上。
-
自定义调度器(Custom Scheduler):
-
是一种自定义的调度器实现,可以根据实际需求来定义调度策略和规则,以实现更灵活和多样化的调度功能。
-
扩展调度器(Extended Scheduler):
-
是一种支持调度器扩展器的调度器实现,可以通过调度器扩展器来添加自定义的调度规则和策略,以实现更灵活和多样化的调度功能。
-
kube-scheduler是kubernetes中的默认调度器,在kubernetes运行后会自动在控制节点运行
常用调度方法
nodename
-
nodeName 是节点选择约束的最简单方法,但一般不推荐
-
如果 nodeName 在 PodSpec 中指定了,则它优先于其他的节点选择方法
-
使用 nodeName 来选择节点的一些限制
-
如果指定的节点不存在。
-
如果指定的节点没有资源来容纳 pod,则pod 调度失败。
-
云环境中的节点名称并非总是可预测或稳定的
bash
#建立pod文件
kubectl run testpod --image myapp:v1 --dry-run=client -o yaml > pod1.yml
#设置调度
[root@master nodename]# cat pod1.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  nodeName: node1.rin.com
  containers:
    - image: myapp:v1
      name: testpod
#建立pod
[root@master nodename]# kubectl apply -f pod1.yml
pod/testpod created
[root@master nodename]# kubectl get pod
NAME READY STATUS RESTARTS AGE
testpod 1/1 Running 0 23s
nodeName: k8s3 #若指定的节点不存在,pod会一直处于Pending状态;nodeName优先级最高,指定后其他调度方式无效
Nodeselector(通过标签控制节点)
-
nodeSelector 是节点选择约束的最简单推荐形式
-
给选择的节点添加标签:
bash
kubectl label nodes node1.rin.com lab=rinleren
可以给多个节点设定相同的标签。
bash
#查看节点标签
[root@master nodename]# kubectl get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
master.rin.com Ready control-plane 5d3h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master.rin.com,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node1.rin.com Ready <none> 5d3h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1.rin.com,kubernetes.io/os=linux
node2.rin.com Ready <none> 5d3h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2.rin.com,kubernetes.io/os=linux
#设定节点标签
[root@master nodename]# kubectl label nodes node1.rin.com lab=rinleren
node/node1.rin.com labeled
[root@master nodename]# kubectl get nodes node1.rin.com --show-labels
NAME STATUS ROLES AGE VERSION LABELS
node1.rin.com Ready <none> 5d3h v1.30.0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1.rin.com,kubernetes.io/os=linux,lab=rinleren
#调度设置
[root@master nodename]# cat pod2.yml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: testpod
  name: testpod
spec:
  nodeSelector:
    lab: rinleren
  containers:
    - image: myapp:v1
      name: testpod
[root@master nodename]# kubectl apply -f pod2.yml
pod/testpod created
[root@master nodename]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
testpod 1/1 Running 0 4s 10.244.184.66 node1.rin.com <none> <none>
web 1/1 Running 0 82m 10.244.161.0 node2.rin.com <none> <none>
节点标签可以给N个节点加
affinity(亲和性)
官方文档 :
亲和与反亲和
-
nodeSelector 提供了一种非常简单的方法来将 pod 约束到具有特定标签的节点上。亲和/反亲和功能极大地扩展了你可以表达约束的类型。
-
使用节点上的 pod 的标签来约束,而不是使用节点本身的标签,来允许哪些 pod 可以或者不可以被放置在一起。
nodeAffinity节点亲和
-
哪个节点满足指定条件,Pod就调度到哪个节点运行
-
requiredDuringSchedulingIgnoredDuringExecution 必须满足,但不会影响已经调度
-
preferredDuringSchedulingIgnoredDuringExecution 倾向满足,在无法满足情况下也会调度pod
- IgnoredDuringExecution 表示如果在Pod运行期间Node的标签发生变化,导致亲和性策略不再满足,也会继续运行当前的Pod。
-
nodeAffinity还支持多种规则匹配条件的配置,如:
匹配规则 | 功能 |
---|---|
In | label 的值在列表内 |
NotIn | label 的值不在列表内 |
Gt | label 的值大于设置的值,不支持Pod亲和性 |
Lt | label 的值小于设置的值,不支持Pod亲和性 |
Exists | 设置的label 存在 |
DoesNotExist | 设置的 label 不存在 |
示例:
bash
[root@master nodename]# vim pod3.yml
[root@master nodename]# cat pod3.yml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity
spec:
  containers:
    - name: nginx
      image: nginx
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disk
                operator: In      # 也可使用 NotIn 等
                values:
                  - ssd
Podaffinity(pod的亲和)
-
哪个节点上已经运行了符合条件的Pod,新Pod就调度到那个节点运行
-
podAffinity 主要解决POD可以和哪些POD部署在同一个节点中的问题
-
podAntiAffinity主要解决POD不能和哪些POD部署在同一个节点中的问题。它们处理的是Kubernetes集群内部POD和POD之间的关系。
-
Pod间亲和与反亲和在与更高级别的集合(例如 ReplicaSets、StatefulSets、Deployments 等)一起使用时非常有用,可以轻松配置一组应当位于同一拓扑域的工作负载。
-
Pod 间亲和与反亲和需要大量的处理,这可能会显著减慢大规模集群中的调度。
Podaffinity示例
bash
[root@master nodename]# vim example4.yml
[root@master nodename]# cat example4.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              topologyKey: "kubernetes.io/hostname"
[root@master nodename]# kubectl apply -f example4.yml
deployment.apps/nginx-deployment created
[root@master nodename]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-658496fff-btx4g 1/1 Running 0 6s 10.244.161.2 node2.rin.com <none> <none>
nginx-deployment-658496fff-xns28 1/1 Running 0 6s 10.244.161.4 node2.rin.com <none> <none>
nginx-deployment-658496fff-zcvsv 1/1 Running 0 6s 10.244.161.3 node2.rin.com <none> <none>
testpod 1/1 Running 0 18m 10.244.184.66 node1.rin.com <none> <none>
web 1/1 Running 0 100m 10.244.161.0 node2.rin.com <none> <none>
Podantiaffinity(pod反亲和)
Podantiaffinity示例
bash
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              topologyKey: "kubernetes.io/hostname"
[root@master nodename]# kubectl apply -f example5.yml
deployment.apps/nginx-deployment created
[root@master nodename]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-5f5fc7b8b9-nmcsk 0/1 Pending 0 6s <none> <none> <none> <none>
nginx-deployment-5f5fc7b8b9-pj6cw 1/1 Running 0 6s 10.244.161.6 node2.rin.com <none> <none>
nginx-deployment-5f5fc7b8b9-stl8p 1/1 Running 0 6s 10.244.184.69 node1.rin.com <none> <none>
testpod 1/1 Running 0 22m 10.244.184.66 node1.rin.com <none> <none>
web 1/1 Running 0 104m 10.244.161.0 node2.rin.com <none> <none>
Taints(污点模式,禁止调度)
-
Taints(污点)是Node的一个属性,设置了Taints后,默认Kubernetes是不会将Pod调度到这个Node上
-
Kubernetes如果为Pod设置Tolerations(容忍),只要Pod能够容忍Node上的污点,那么Kubernetes就会忽略Node上的污点,就能够(不是必须)把Pod调度过去
-
可以使用命令 kubectl taint 给节点增加一个 taint:
bash
$ kubectl taint nodes <nodename> key=value:effect #命令格式
$ kubectl taint nodes node1 key=value:NoSchedule #创建
$ kubectl describe nodes server1 | grep Taints #查询
$ kubectl taint nodes node1 key- #删除
effect 值 | 通俗解释 |
---|---|
NoSchedule | 坚决不让新 POD 跑到有这个污点的节点上 |
PreferNoSchedule | 尽量别让新 POD 跑到有这个污点的节点上,特殊情况可以通融 |
NoExecute | 不仅新 POD 不能来,已经在这个节点上的 POD 也要被赶走 |
简单说就是:
- NoSchedule:"禁止进入"
- PreferNoSchedule:"最好别来"
- NoExecute:"立即离开,并且不准再来"
Taints示例
bash
#建立控制器并运行
[root@master nodename]# vim example6.yml
[root@master nodename]# cat example6.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - image: nginx
          name: nginx
[root@master nodename]# kubectl apply -f example6.yml
deployment.apps/web created
[root@master nodename]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-7c56dcdb9b-q66hx 1/1 Running 0 6s 10.244.184.70 node1.rin.com <none> <none>
web-7c56dcdb9b-wr4sq 1/1 Running 0 6s 10.244.161.7 node2.rin.com <none> <none>
#设定污点为NoSchedule
[root@master nodename]# kubectl taint node node1.rin.com name=rin:NoSchedule
node/node1.rin.com tainted
[root@master nodename]# kubectl describe nodes node1.rin.com | grep Tain
Taints: name=rin:NoSchedule
#控制器增加pod
[root@master nodename]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-7c56dcdb9b-q66hx 1/1 Running 0 95s 10.244.184.70 node1.rin.com <none> <none>
web-7c56dcdb9b-wr4sq 1/1 Running 0 95s 10.244.161.7 node2.rin.com <none> <none>
#再次进行调度,pod不会部署到node1上
[root@master nodename]# kubectl delete -f example6.yml
deployment.apps "web" deleted
[root@master nodename]# kubectl apply -f example6.yml
deployment.apps/web created
[root@master nodename]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-7c56dcdb9b-6tqf4 0/1 ContainerCreating 0 2s <none> node2.rin.com <none> <none>
web-7c56dcdb9b-gg285 0/1 ContainerCreating 0 2s <none> node2.rin.com <none> <none>
#删除污点
[root@master nodename]# kubectl taint node node1.rin.com name-
node/node1.rin.com untainted
[root@master nodename]# kubectl describe node node1.rin.com | grep Taint
Taints: <none>
tolerations(污点容忍)
-
tolerations中定义的key、value、effect,要与node上设置的taint保持一致:
-
如果 operator 是 Equal ,则key与value之间的关系必须相等。
-
如果 operator 是 Exists ,value可以省略
-
如果不指定operator属性,则默认值为Equal。
-
-
还有两个特殊值:
-
当不指定key,再配合Exists 就能匹配所有的key与value ,可以容忍所有污点。
-
当不指定effect ,则匹配所有的effect
-
污点容忍示例:
bash
三种容忍方式每次测试写一个即可
tolerations:        #容忍所有污点
  - operator: Exists

tolerations:        #容忍effect为NoSchedule的污点
  - operator: Exists
    effect: NoSchedule

tolerations:        #容忍指定kv的NoSchedule污点
  - key: nodetype
    value: bad
    effect: NoSchedule
#设定节点污点
[root@master nodename]# kubectl taint node node1.rin.com name=rin:NoExecute
node/node1.rin.com tainted
[root@master nodename]# kubectl taint node node2.rin.com nodetype=bad:NoSchedule
node/node2.rin.com tainted
测试:
[root@master nodename]# kubectl apply -f example7.yml
deployment.apps/web created
[root@master nodename]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
web-7f64586499-7vmmf 1/1 Running 0 16s 10.244.128.1 master.rin.com <none> <none>
web-7f64586499-cfkqv 1/1 Running 0 16s 10.244.161.14 node2.rin.com <none> <none>
web-7f64586499-km7vw 1/1 Running 0 16s 10.244.184.71 node1.rin.com <none> <none>
web-7f64586499-msmgq 1/1 Running 0 16s 10.244.161.13 node2.rin.com <none> <none>
web-7f64586499-rp5zj 1/1 Running 0 16s 10.244.184.72 node1.rin.com <none> <none>
web-7f64586499-vx5gs 1/1 Running 0 16s 10.244.128.0 master.rin.com <none> <none>
[root@master nodename]#
容忍度配置 | 通俗理解 |
---|---|
tolerations: - operator: Exists | 相当于"无视所有限制":不管节点上有什么污点(不管类型、键值),这个 Pod 都能在上面运行,不会被拒绝调度,也不会被赶走 |
tolerations: - operator: Exists effect: NoSchedule | 专门"破解""禁止新 Pod 进入"的限制:只要节点的污点是 NoSchedule 类型(不管键和值是什么),这个 Pod 都能被调度上去 |
tolerations: - key: nodetype value: bad effect: NoSchedule | 只"破解"特定的限制:仅对"键是 nodetype、值是 bad、效果是 NoSchedule"的这个污点有效,其他污点仍然会限制这个 Pod |
bash
删除污点,还原干净环境:
[root@master nodename]# kubectl taint node node1.rin.com name-
node/node1.rin.com untainted
[root@master nodename]# kubectl taint node node2.rin.com nodetype-
node/node2.rin.com untainted
kubernetes API 访问控制

Authentication(认证)
-
认证方式现共有8种,可以启用一种或多种认证方式,只要有一种认证方式通过,就不再进行其它方式的认证。通常启用X509 Client Certs和Service Account Tokens两种认证方式。
-
Kubernetes集群有两类用户:由Kubernetes管理的Service Accounts(服务账户)和普通账户(User Accounts)。k8s中账号的概念不是通常理解的账号,它并不真实存在,只是形式上存在。
Authorization(授权)
- 必须经过认证阶段,才到授权请求,根据所有授权策略匹配请求资源属性,决定允许或拒绝请求。授权方式现共有6种,AlwaysDeny、AlwaysAllow、ABAC、RBAC、Webhook、Node。默认集群强制开启RBAC。
Admission Control(准入控制)
- 用于拦截请求的一种方式,运行在认证、授权之后,是权限认证链上的最后一环,对请求API资源对象进行修改和校验。

层级 / 阶段 | 核心组件 / 操作 | 通俗解释 | 关键功能 |
---|---|---|---|
一、客户端层 | user(用户) | 运维 / 开发者,通过 kubectl 等工具发起命令(如创建 Pod、查询节点) | 发起集群操作请求 |
pod(容器) | 业务容器可能通过 API 访问集群信息(如获取自身配置) | 内部服务与集群交互 | |
scheduler(调度器) | 决定 Pod 应部署到哪个节点,需读取节点和 Pod 数据 | 调度决策依赖集群状态数据 | |
controller-manager(控制器) | 维持集群预期状态(如 Deployment 确保副本数),需频繁读写资源状态 | 自动修复异常,维持资源合规 | |
kubelet(节点代理) | 运行在每个节点,向 API Server 汇报节点状态(资源、健康度等) | 节点状态上报与 Pod 生命周期管理 | |
kube-proxy(网络代理) | 维护网络规则(如 Service 转发),同步 API Server 中的服务配置 | 实现集群网络通信与服务发现 | |
二、API Server | Authentication(认证) | 验证请求者身份("你是谁?") | 支持证书、Token、密码等方式,拒绝非法身份 |
Authorization(授权) | 检查身份权限("你能做什么?") | 基于 RBAC 等策略,限制操作范围(如只读、可修改特定资源) | |
Admission Control(准入控制) | 额外规则校验("操作是否合规?") | 如资源配额检查、自动挂载服务账号、强制拉取镜像等,拦截不合规请求 | |
三、etcd | 集群状态存储 | 保存所有核心资源的最终状态(Pod、Node、Service 等) | 提供数据一致性和高可用,是集群的 "唯一真相源" |
处理流程 | 完整链路 | 客户端请求 → 认证 → 授权 → 准入控制 → 读写 etcd → 返回结果 | 层层校验确保安全可控,最终通过 API Server 与 etcd 交互完成操作 |
UserAccount与ServiceAccount
-
用户账户是针对人而言的。 服务账户是针对运行在 pod 中的进程而言的。
-
用户账户是全局性的。 其名称在集群各 namespace 中都是全局唯一的,未来的用户资源不会做 namespace 隔离, 服务账户是 namespace 隔离的。
-
集群的用户账户可能会从企业数据库进行同步,其创建需要特殊权限,并且涉及到复杂的业务流程。 服务账户创建的目的是为了更轻量,允许集群用户为了具体的任务创建服务账户 ( 即权限最小化原则 )。
ServiceAccount
-
服务账户控制器(Service account controller)
-
服务账户管理器管理各命名空间下的服务账户
-
每个活跃的命名空间下存在一个名为 "default" 的服务账户
-
-
服务账户准入控制器(Service account admission controller)
-
如果pod未指定ServiceAccount,则默认设为default。
-
保证 pod 所关联的 ServiceAccount 存在,否则拒绝该 pod。
-
如果pod不包含ImagePullSecrets设置,那么ServiceAccount中的ImagePullSecrets会被添加到pod中
-
将挂载于 /var/run/secrets/kubernetes.io/serviceaccount 的 volumeSource 添加到 pod 下的每个容器中
-
将一个包含用于 API 访问的 token 的 volume 添加到 pod 中
-
ServiceAccount示例:
建立名字为admin的ServiceAccount
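正文提到建立名为 admin 的 ServiceAccount,但没有给出清单;下面是一个最小示例(其中 pod 部分是假设的用法演示,展示如何让 pod 使用该服务账户):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
---
apiVersion: v1
kind: Pod
metadata:
  name: testpod
spec:
  serviceAccountName: admin    # pod 使用该服务账户访问 API
  containers:
    - name: nginx
      image: nginx
```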
认证(在k8s中建立认证用户)
创建UserAccount
bash
#建立证书
[root@master nodename]# cd /etc/kubernetes/pki/
[root@master pki]# ls
apiserver.crt apiserver.key ca.crt front-proxy-ca.crt front-proxy-client.key
apiserver-etcd-client.crt apiserver-kubelet-client.crt ca.key front-proxy-ca.key sa.key
apiserver-etcd-client.key apiserver-kubelet-client.key etcd front-proxy-client.crt sa.pub
[root@master pki]# openssl genrsa -out rin.key 2048
[root@master pki]# openssl req -new -key rin.key -out rin.csr -subj "/CN=rin"
[root@master pki]# openssl x509 -req -in rin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out rin.crt -days 365
[root@master pki]# openssl x509 -in rin.crt -text -noout
Certificate:
Data:
Version: 1 (0x0)
Serial Number:
54:5a:05:60:9f:82:e5:6e:83:84:0d:d5:a2:6e:06:3c:25:a8:18:44
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN = kubernetes
Validity
Not Before: Aug 19 07:41:02 2025 GMT
Not After : Aug 19 07:41:02 2026 GMT
Subject: CN = rin
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Modulus:
00:d9:31:0f:42:ef:44:f2:88:c8:9a:b3:dc:2c:72:
31:58:58:f5:03:6c:65:ee:1a:a8:15:d0:0a:93:54:
9d:4f:35:67:03:00:50:7a:11:12:99:94:e3:12:07:
10:43:66:e4:b4:aa:cd:da:5f:04:2a:ba:60:48:fd:
20:0a:0b:cc:5c:ff:39:03:ca:5d:9f:ec:8f:a4:5e:
6f:18:fc:1e:20:fd:d4:2f:82:a6:f1:e5:3d:9d:fc:
05:cd:98:b9:ea:98:94:6e:cb:72:0a:9c:df:d9:ef:
75:39:c1:2e:f6:14:a8:bb:4b:e0:5c:b0:93:38:ce:
7c:6a:d8:da:68:bf:40:21:e7:63:b9:93:47:ed:fc:
26:9d:87:1e:1a:bd:c8:bb:3e:bf:f3:bd:8b:b5:f5:
73:e3:de:e9:b1:7d:5c:91:5d:e8:2e:f6:69:ef:77:
0f:22:c8:5e:75:b7:88:d7:24:cc:ef:dc:41:8f:e4:
18:df:f4:06:6c:49:64:75:02:b9:50:99:d3:fb:20:
d9:1f:ed:f0:c9:47:8d:b9:3c:4d:0a:c5:87:8a:ae:
51:8e:2e:2b:6e:ed:45:28:8d:4c:3e:38:69:68:f9:
af:bc:53:0d:0a:60:f2:31:19:84:d2:9d:69:64:f2:
67:77:03:11:32:5c:05:78:78:73:10:c3:45:07:4f:
4b:c9
Exponent: 65537 (0x10001)
Signature Algorithm: sha256WithRSAEncryption
Signature Value:
b2:cc:52:5d:02:a3:f8:91:ee:6b:5f:ad:1a:0e:01:9b:36:de:
4e:12:ed:ab:55:a8:0f:13:12:bd:c3:e9:6a:b2:b3:d4:80:a7:
d2:32:7a:e5:ef:cb:68:33:c9:c2:54:59:9a:8a:1d:70:c1:a3:
5f:9b:5c:05:57:7e:73:49:59:7e:1d:04:97:10:de:32:52:75:
44:80:3d:07:12:af:0c:f7:fd:1b:10:4f:ee:2d:5c:de:f5:23:
1c:cd:46:e5:f7:33:34:3b:5c:9a:4f:e0:13:de:e4:8a:80:a1:
09:f3:fb:19:b9:d5:4e:eb:e7:d8:a8:a9:32:7e:3d:c0:69:d7:
9b:dc:4b:62:41:cc:76:a9:dc:5c:6a:e8:81:c7:5e:8d:09:89:
61:ca:40:e4:61:3b:a4:fa:b0:e9:41:e5:46:b4:46:ec:de:1b:
41:fa:da:13:4d:9c:cd:9d:18:84:18:51:ac:37:4b:5d:46:e3:
51:af:da:58:40:a3:87:59:b8:15:a0:d8:42:b3:fa:7f:75:44:
b5:38:e7:cf:22:21:d1:3a:cf:34:75:ff:5f:39:25:fb:b5:97:
b6:a2:e2:48:d3:57:25:c2:9d:97:dc:d4:3d:8f:46:bd:16:a5:
6e:15:2b:05:f1:2e:04:d4:43:2b:8e:37:de:a9:10:5b:72:75:
4f:a7:b0:b2
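The same issuance flow can be rehearsed outside the cluster with a throwaway CA; the sketch below runs entirely in a temp directory instead of /etc/kubernetes/pki, and all names are illustrative:

```shell
set -e
workdir=$(mktemp -d) && cd "$workdir"
# Stand-in CA (the cluster already provides ca.crt/ca.key under /etc/kubernetes/pki)
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -subj "/CN=kubernetes" -days 365 -out ca.crt
# User key and CSR; RBAC will later match on the certificate's CN
openssl genrsa -out rin.key 2048
openssl req -new -key rin.key -out rin.csr -subj "/CN=rin"
# Sign the CSR with the CA, yielding the client certificate
openssl x509 -req -in rin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out rin.crt -days 365
openssl x509 -in rin.crt -noout -subject
```

The subject printed at the end carries CN=rin, which is the username kubectl will present to the API server.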
bash
# Create the user in k8s
[root@master pki]# kubectl config set-credentials rin --client-certificate /etc/kubernetes/pki/rin.crt --client-key /etc/kubernetes/pki/rin.key --embed-certs=true
User "rin" set.
[root@master pki]# kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://172.25.254.100:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: DATA+OMITTED
client-key-data: DATA+OMITTED
- name: rin
user:
client-certificate-data: DATA+OMITTED
client-key-data: DATA+OMITTED
# Create a context in the cluster for the user
[root@master pki]# kubectl config set-context rin@kubernetes --cluster kubernetes --user rin
Context "rin@kubernetes" created.
# Switch to the user; it has an identity in the cluster but no authorization yet
[root@master pki]# kubectl config use-context rin@kubernetes
Switched to context "rin@kubernetes".
[root@master pki]# kubectl get pods
Error from server (Forbidden): pods is forbidden: User "rin" cannot list resource "pods" in API group "" in the namespace "default"
# Switch back to the cluster admin identity
[root@master pki]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
[root@master pki]# kubectl get pods
No resources found in default namespace.
# To delete the user, if needed
kubectl config delete-user rin
RBAC (Role Based Access Control)
Role-based access control:

- Allows administrators to configure authorization policy dynamically through the Kubernetes API. RBAC associates users with permissions through roles.
- RBAC only grants permissions and never denies them, so you only need to define what a user is allowed to do.
- RBAC has three basic concepts:
  - Subject: the actor; one of the three kinds of subjects in k8s: user, group, serviceAccount.
  - Role: a set of rules defining operation permissions on Kubernetes API objects.
  - RoleBinding: the binding between a subject and a role.
- RBAC includes four resource types: Role, ClusterRole, RoleBinding, ClusterRoleBinding.
- Role and ClusterRole
  - A Role is a collection of permissions and can only grant access to resources within a single namespace.
  - A ClusterRole is similar to a Role but applies cluster-wide.
  - Kubernetes also provides four predefined ClusterRoles for direct use: cluster-admin, admin, edit, view.
Role authorization in practice
bash
[root@master ~]# mkdir role
[root@master ~]# cd role/
# Generate the role YAML (dry run)
kubectl create role myrole --dry-run=client --verb=get --resource pods -o yaml > myrole.yml
# Edit the file
[root@master role]# vim myrole.yml
[root@master role]# cat myrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
creationTimestamp: null
name: myrole
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- watch
- list
- create
- update
- patch
- delete
# Create the role
[root@master role]# kubectl apply -f myrole.yml
role.rbac.authorization.k8s.io/myrole created
[root@master role]# kubectl describe role myrole
Name: myrole
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [get watch list create update patch delete]
bash
# Create the role binding
[root@master role]# kubectl create rolebinding rin --role myrole --namespace default --user rin --dry-run=client -o yaml > rolebinding-myrole.yml
[root@master role]# vim rolebinding-myrole.yml
[root@master role]# cat rolebinding-myrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: rin
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: myrole
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: rin
[root@master role]# kubectl apply -f rolebinding-myrole.yml
rolebinding.rbac.authorization.k8s.io/rin created
[root@master role]# kubectl get rolebindings.rbac.authorization.k8s.io rin
NAME ROLE AGE
rin Role/myrole 11s
bash
# Switch user to test the authorization
[root@master role]# kubectl config use-context rin@kubernetes
Switched to context "rin@kubernetes".
[root@master role]# kubectl get pods
No resources found in default namespace.
[root@master role]# kubectl get svc
Error from server (Forbidden): services is forbidden: User "rin" cannot list resource "services" in API group "" in the namespace "default"
# Switch back to admin
[root@master role]# kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
bash
# Create a cluster role
[root@master role]# kubectl create clusterrole myclusterrole --resource=deployment --verb get --dry-run=client -o yaml > myclusterrole.yml
[root@master role]# vim myclusterrole.yml
[root@master role]# cat myclusterrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: myclusterrole
rules:
- apiGroups:
- apps
resources:
- deployments
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
[root@master role]# kubectl apply -f myclusterrole.yml
clusterrole.rbac.authorization.k8s.io/myclusterrole created
[root@master role]# kubectl describe clusterrole myclusterrole
Name: myclusterrole
Labels: <none>
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
pods [] [] [get list watch create update patch delete]
deployments.apps [] [] [get list watch create update patch delete]
# Create the cluster role binding
[root@master role]# kubectl create clusterrolebinding clusterrolebind-myclusterrole --clusterrole myclusterrole --user rin --dry-run=client -o yaml > clusterrolebind-myclusterrole.yml
[root@master role]# vim clusterrolebind-myclusterrole.yml
[root@master role]# cat clusterrolebind-myclusterrole.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: clusterrolebind-myclusterrole
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: myclusterrole
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: rin
# Test:
[root@master role]# kubectl apply -f clusterrolebind-myclusterrole.yml
clusterrolebinding.rbac.authorization.k8s.io/clusterrolebind-myclusterrole created
[root@master role]# kubectl describe clusterrolebindings.rbac.authorization.k8s.io clusterrolebind-myclusterrole
Name: clusterrolebind-myclusterrole
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: myclusterrole
Subjects:
Kind Name Namespace
---- ---- ---------
User rin
[root@master role]# kubectl config use-context rin@kubernetes
Switched to context "rin@kubernetes".
[root@master role]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx ingress-nginx-controller-85bc7f665d-8dfjs 1/1 Running 0 90m
kube-system calico-kube-controllers-858ff477c9-s99fw 1/1 Running 0 90m
kube-system calico-node-8cdj8 1/1 Running 0 3h50m
kube-system calico-node-lcsjg 1/1 Running 0 3h50m
kube-system calico-node-nxvfz 1/1 Running 0 3h50m
kube-system calico-typha-999659975-rrc55 1/1 Running 0 3h50m
kube-system coredns-554b7d9f44-vtfv7 1/1 Running 4 (47h ago) 5d5h
kube-system coredns-554b7d9f44-x7nhg 1/1 Running 4 (47h ago) 5d5h
kube-system etcd-master.rin.com 1/1 Running 4 (47h ago) 5d5h
kube-system kube-apiserver-master.rin.com 1/1 Running 4 (47h ago) 5d5h
kube-system kube-controller-manager-master.rin.com 1/1 Running 4 (47h ago) 5d5h
kube-system kube-proxy-45wxm 1/1 Running 4 (47h ago) 5d4h
kube-system kube-proxy-5b7fm 1/1 Running 3 (47h ago) 5d4h
kube-system kube-proxy-jrdvn 1/1 Running 3 (47h ago) 5d4h
kube-system kube-scheduler-master.rin.com 1/1 Running 4 (47h ago) 5d5h
metallb-system controller-65957f77c8-thvpw 1/1 Running 0 90m
metallb-system speaker-5ljgq 1/1 Running 0 79m
metallb-system speaker-5n5kq 1/1 Running 9 (7h ago) 5d4h
metallb-system speaker-9fphp 1/1 Running 5 (47h ago) 5d4h
nfs-client-provisioner nfs-client-provisioner-5684b957f-tmz2h 1/1 Running 1 (47h ago) 2d
[root@master role]# kubectl get deployments.apps -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
ingress-nginx ingress-nginx-controller 1/1 1 1 5d5h
kube-system calico-kube-controllers 1/1 1 1 3h50m
kube-system calico-typha 1/1 1 1 3h50m
kube-system coredns 2/2 2 2 5d5h
metallb-system controller 1/1 1 1 5d4h
nfs-client-provisioner nfs-client-provisioner 1/1 1 1 2d
[root@master role]# kubectl get svc -A
Error from server (Forbidden): services is forbidden: User "rin" cannot list resource "services" in API group "" at the cluster scope
Service account automation
Service account admission controller:
- If a pod has no ServiceAccount set, set its ServiceAccount to default.
- Ensure the ServiceAccount referenced by the pod exists; otherwise reject the pod.
- If the pod has no ImagePullSecrets, add the ImagePullSecrets of its ServiceAccount to the pod.
- Add a volume containing a token for API access to the pod.
- Add a volumeSource mounted at /var/run/secrets/kubernetes.io/serviceaccount to every container in the pod.
Service account controller:
Manages the service accounts in each namespace and ensures every active namespace contains a service account named "default".
Aspect | ServiceAccount admission controller | ServiceAccount controller |
---|---|---|
Scope | Works at the pod level: validates and auto-completes service-account-related configuration when pods are created or updated | Works at the namespace level: creates and maintains the service accounts in each namespace |
Main functions | 1. If a pod has no ServiceAccount set, set it to default 2. Check that the ServiceAccount referenced by the pod exists; reject the create or update if it does not 3. If the pod has no ImagePullSecrets, copy the ImagePullSecrets of its ServiceAccount into the pod 4. Add a volume containing an API-access token to the pod 5. Add a volumeSource mounted at /var/run/secrets/kubernetes.io/serviceaccount to every container in the pod | Ensures every active namespace contains a service account named "default" |
Trigger | Fires on pod operations (create, update, etc.) that involve service-account-related configuration changes | Fires when a new namespace is created or an existing one changes state (inactive to active); checks whether the "default" service account exists |
Effect on the cluster | Guarantees pods can correctly use service-account-related configuration, such as API-access credentials and image-pull secrets, and thus run normally | Provides each namespace with a default service account so pods have a usable one when none is specified explicitly |
Relation to user actions | When creating or updating pods, users need not handle the complex service-account configuration above by hand; the admission controller does it automatically, though operations that violate the rules are rejected | After creating a namespace, users need not create the "default" service account by hand; the controller generates it automatically for later pods in that namespace |
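Put together, after admission a pod that declared none of this effectively carries configuration along the following lines (a sketch; exact volume names and projections vary by cluster version):

```yaml
spec:
  serviceAccountName: default          # injected when the pod declared none
  containers:
  - name: app
    image: myapp:v1
    volumeMounts:
    - name: kube-api-access            # mounted into every container
      mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      readOnly: true
  volumes:
  - name: kube-api-access              # token volume for API access
    projected:
      sources:
      - serviceAccountToken:
          path: token
```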
Deploying helm
Install helm
bash
[root@master helm]# ls
helm-v3.15.4-linux-amd64.tar.gz
[root@master helm]# tar xzf helm-v3.15.4-linux-amd64.tar.gz
[root@master helm]# ls
helm-v3.15.4-linux-amd64.tar.gz linux-amd64
[root@master helm]# cd linux-amd64/
[root@master linux-amd64]# ls
helm LICENSE README.md
[root@master linux-amd64]# cp -p helm /usr/local/bin/
Configure helm command completion
bash
[root@master linux-amd64]# echo "source <(helm completion bash)" >> ~/.bashrc
[root@master linux-amd64]# source ~/.bashrc
[root@master linux-amd64]# helm version
version.BuildInfo{
Version:"v3.15.4",
GitCommit:"fa9efb07d9d8debbb4306d72af76a383895aa8c4",
GitTreeState:"clean",
GoVersion:"go1.22.6"}
Common helm operations
Command | Description |
---|---|
create | Create a chart with the given name |
dependency | Manage chart dependencies |
get | Download a release. Subcommands: all, hooks, manifest, notes, values |
history | Fetch release history |
install | Install a chart |
list | List releases |
package | Package a chart directory into a chart archive |
pull | Download a chart from a remote repository and unpack it locally, e.g. # helm pull stable/mysql --untar |
repo | Add, list, remove, update, and index chart repositories. Subcommands: add, index, list, remove, update |
rollback | Roll back to a previous revision |
search | Search charts by keyword. Subcommands: hub, repo |
show | Show chart details. Subcommands: all, chart, readme, values |
status | Show the status of a named release |
template | Render templates locally |
uninstall | Uninstall a release |
upgrade | Upgrade a release |
version | Show the helm client version |
Search the official application hub
bash
helm search hub nginx #search the official hub
helm search repo nginx #search local repositories
Manage third-party repo sources
- Aliyun repository: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
- bitnami repository: https://charts.bitnami.com/bitnami
bash
# Add the Aliyun repository
[root@master calico]# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
"aliyun" has been added to your repositories
# Add the bitnami repository
[root@master calico]# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories
# List the charts stored in a repository
[root@master calico]# helm search repo aliyun
NAME CHART VERSION APP VERSION DESCRIPTION
aliyun/acs-engine-autoscaler 2.1.3 2.1.1 Scales worker nodes within agent pools
aliyun/aerospike 0.1.7 v3.14.1.2 A Helm chart for Aerospike in Kubernetes
aliyun/anchore-engine 0.1.3 0.1.6 Anchore container analysis and policy evaluatio...
aliyun/artifactory 7.0.3 5.8.4 Universal Repository Manager supporting all maj...
# Remove a third-party repository
[root@master calico]# helm repo list
NAME URL
aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
bitnami https://charts.bitnami.com/bitnami
[root@master calico]# helm repo remove aliyun
"aliyun" has been removed from your repositories
[root@master calico]# helm repo list
NAME URL
bitnami https://charts.bitnami.com/bitnami
Using helm
Find a chart
bash
[root@master calico]# helm search repo nginx
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/nginx 21.1.23 1.29.1 NGINX Open Source is a web server that can be a...
bitnami/nginx-ingress-controller 12.0.7 1.13.1 NGINX Ingress Controller is an Ingress controll...
bitnami/nginx-intel 2.1.15 0.4.9 DEPRECATED NGINX Open Source for Intel is a lig...
Show chart information
bash
[root@master calico]# helm show chart aliyun/redis
apiVersion: v1
appVersion: 4.0.8
description: Open source, advanced key-value store. It is often referred to as a data
structure server since keys can contain strings, hashes, lists, sets and sorted
sets.
home: http://redis.io/
icon: https://bitnami.com/assets/stacks/redis/img/redis-stack-220x234.png
keywords:
- redis
- keyvalue
- database
maintainers:
- email: containers@bitnami.com
name: bitnami-bot
name: redis
sources:
- https://github.com/bitnami/bitnami-docker-redis
version: 1.1.15
Install a chart package
bash
helm install rinleren bitnami/nginx
helm list
kubectl get pods
# Check the release status
helm status rinleren
# Uninstall the release
helm uninstall rinleren
kubectl get pods
helm list
Predefine chart options before installing
Create the project repository:

bash
# Pull the chart
helm pull bitnami/nginx
[root@master helm]# ls
helm-v3.15.4-linux-amd64.tar.gz linux-amd64 nginx-1.27.1-debian-12-r2.tar nginx-18.1.11.tgz
[root@master helm]# ls
helm-v3.15.4-linux-amd64.tar.gz linux-amd64 nginx nginx-1.27.1-debian-12-r2.tar nginx-18.1.11.tgz
[root@master helm]# cd nginx/
[root@master nginx]# ls
Chart.lock charts Chart.yaml README.md templates values.schema.json values.yaml
[root@master nginx]# ls templates/ #chart templates
deployment.yaml hpa.yaml NOTES.txt serviceaccount.yaml
extra-list.yaml ingress-tls-secret.yaml pdb.yaml servicemonitor.yaml
health-ingress.yaml ingress.yaml prometheusrules.yaml svc.yaml
_helpers.tpl networkpolicy.yaml server-block-configmap.yaml tls-secret.yaml
# The chart's values file
[root@master nginx]# vim values.yaml
13 imageRegistry: "www.rin.com"
# Push the image required by the chart to the registry
[root@master helm]# docker tag bitnami/nginx:1.27.1-debian-12-r2 www.rin.com/bitnami/nginx:1.27.1-debian-12-r2
[root@master helm]# docker push www.rin.com/bitnami/nginx:1.27.1-debian-12-r2
The push refers to repository [www.rin.com/bitnami/nginx]
30f5b1069b7f: Pushed
1.27.1-debian-12-r2: digest: sha256:6825a4d52b84873dd08c26d38dccce3d78d4d9f470b7555afdc4edfb4de7e595 size: 529
# Install the local chart
[root@master linux-amd64]# ls /mnt/helm/nginx
Chart.lock charts Chart.yaml README.md templates values.schema.json values.yaml
[root@master linux-amd64]# helm install rinleren /mnt/helm/nginx
NAME: rinleren
LAST DEPLOYED: Thu Aug 21 23:55:07 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 18.1.11
APP VERSION: 1.27.1
[root@master nginx]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 75m
rinleren-nginx LoadBalancer 10.108.210.236 172.25.254.50 80:30841/TCP,443:30874/TCP 34m
[root@master nginx]# kubectl get pods
NAME READY STATUS RESTARTS AGE
rinleren-nginx-54bd5757dd-6z5ts 1/1 Running 0 34m
# Upgrade the release
[root@master nginx]# vim values.yaml #update the values file
624 type: ClusterIP
751 enabled: true
763 hostname: myapp.rin.com
783 ingressClassName: "nginx"
[root@master nginx]# helm upgrade rinleren .
Release "rinleren" has been upgraded. Happy Helming!
NAME: rinleren
LAST DEPLOYED: Fri Aug 22 00:00:16 2025
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 18.1.11
APP VERSION: 1.27.1
[root@master nginx]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 75m
rinleren-nginx LoadBalancer 10.108.210.236 172.25.254.50 80:30841/TCP,443:30874/TCP 34m
[root@master nginx]# kubectl get pods
NAME READY STATUS RESTARTS AGE
rinleren-nginx-54bd5757dd-6z5ts 1/1 Running 0 34m
[root@master nginx]# curl myapp.rin.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# View the history
[root@master nginx]# helm history rinleren
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
1 Thu Aug 21 23:55:07 2025 superseded nginx-18.1.11 1.27.1 Install complete
2 Fri Aug 22 00:00:16 2025 superseded nginx-18.1.11 1.27.1 Upgrade complete
3 Fri Aug 22 00:15:57 2025 superseded nginx-18.1.11 1.27.1 Upgrade complete
4 Fri Aug 22 00:28:32 2025 deployed nginx-18.1.11 1.27.1 Upgrade complete
# Uninstall the release
[root@master nginx]# helm uninstall rinleren
release "rinleren" uninstalled
[root@master nginx]# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
Building a chart package for helm
Helm chart directory layout
bash
# Create a chart project
[root@master helm]# helm create rinleren
Creating rinleren
[root@master helm]# ls
rinleren
[root@master helm]# tree rinleren/
rinleren/
├── charts #subcharts this chart depends on
├── Chart.yaml #basic information about the chart: name, description, version, etc.
├── templates #all of the YAML template files
│ ├── deployment.yaml
│ ├── _helpers.tpl #template helpers that can be reused throughout the chart
│ ├── hpa.yaml
│ ├── ingress.yaml
│ ├── NOTES.txt
│ ├── serviceaccount.yaml
│ ├── service.yaml
│ └── tests
│ └── test-connection.yaml
└── values.yaml #values for the variables used by the templates under templates/
3 directories, 10 files
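To make the link between values.yaml and the templates concrete, here is a hedged sketch (the field names follow helm's default chart conventions but are illustrative):

```yaml
# templates/deployment.yaml (excerpt): {{ ... }} expressions are filled from values.yaml
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
#
# values.yaml:
image:
  repository: myapp
  tag: "v1"
#
# "helm template ." would render the excerpt above as:
#   image: "myapp:v1"
```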
Build procedure
bash
[root@master rin]# cat Chart.yaml
apiVersion: v2
name: rinleren
description: A Helm chart for Kubernetes
type: application
version: 0.1.0 #chart version
appVersion: "v1" #application version
[root@master rin]# cat values.yaml
image:
repository: myapp
pullPolicy: IfNotPresent
tag: "v1"
ingress:
enabled: true
className: "nginx"
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: myapp.rin.com
paths:
- path: /
pathType: ImplementationSpecific
# Lint the chart (syntax check)
[root@master rin]# helm lint .
==> Linting .
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, 0 chart(s) failed
# Package the chart
[root@master rin]# cd ..
[root@master helm]# ls
rin rinleren
[root@master helm]# helm package rin/
Successfully packaged chart and saved it to: /root/helm/rinleren-0.1.0.tgz
[root@master helm]# ls
rin rinleren rinleren-0.1.0.tgz
# The packaged chart can be shared through any channel; recipients can then deploy it directly
Introduction to Prometheus
Prometheus is an open-source service monitoring system and time-series database.
It provides a generic data model and convenient interfaces for data collection, storage, and querying.
Its core component, the Prometheus server, periodically pulls data from statically configured targets or from targets discovered automatically via service discovery.
When newly pulled data exceeds the configured in-memory buffer, it is persisted to the storage device.
Prometheus architecture

Component functions:
- Monitoring agents such as node_exporter: collect host metrics across many dimensions, e.g. load average, CPU, memory, disk, and network.
- kubelet (cAdvisor): collects container metrics; these are also the core K8s metrics. Per-container data includes CPU usage and limits, filesystem read/write quotas, memory usage and limits, network packet send/receive/drop rates, and more.
- API Server: collects API server performance metrics, including work-queue performance, request rates, and latencies.
- etcd: collects metrics from the etcd storage cluster.
- kube-state-metrics: derives many k8s-related metrics, mainly resource-type counters and metadata, including total object counts per type, resource quotas, container status, and pod label series.

- Every monitored host can expose its metrics through a dedicated exporter program and wait for the Prometheus server to scrape them periodically.
- If alerting rules exist, the scraped data is evaluated against them; when an alert condition is satisfied, an alert is generated and sent to Alertmanager, which aggregates and routes it.
- When monitored targets need to push data actively, the Pushgateway component can receive and buffer the data until the Prometheus server collects it.
- Every monitored target must first be registered with the monitoring system before its time series can be collected, stored, alerted on, and displayed.
- Targets can be specified statically in the configuration, or managed dynamically by Prometheus through service discovery.
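The pull model described above is configured in the Prometheus server's own file; a minimal prometheus.yml sketch with one static job and an Alertmanager (the job name and addresses are illustrative):

```yaml
global:
  scrape_interval: 15s            # how often the server pulls from targets
scrape_configs:
  - job_name: node                # scrape node_exporter on two hosts
    static_configs:               # statically specified targets
      - targets: ['172.25.254.10:9100', '172.25.254.20:9100']
alerting:
  alertmanagers:                  # where generated alerts are routed
    - static_configs:
        - targets: ['alertmanager:9093']
```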
Deploying Prometheus in k8s
Download the resources needed to deploy Prometheus
bash
658 image:
659 registry: quay.io
660 repository: prometheus/alertmanager
661 tag: v0.27.0
662 sha: ""
First bundle:
bash
[root@master pag]# docker load -i prometheus-62.6.0.tar
1e604deea57d: Loading layer 1.458MB/1.458MB
6b83872188a9: Loading layer 2.455MB/2.455MB
405a8b5041f1: Loading layer 140.1MB/140.1MB
6dfd7aef3fcd: Loading layer 131.8MB/131.8MB
8459ad25bfbd: Loading layer 3.584kB/3.584kB
6cd41730d304: Loading layer 13.82kB/13.82kB
dc12aa49f027: Loading layer 28.16kB/28.16kB
f61f35ce8414: Loading layer 13.31kB/13.31kB
89911f5a9a27: Loading layer 5.632kB/5.632kB
e3df5cefd22c: Loading layer 139.8kB/139.8kB
64d73d1a1b52: Loading layer 1.536kB/1.536kB
b4d8bcc5b63b: Loading layer 6.144kB/6.144kB
Loaded image: quay.io/prometheus/prometheus:v2.54.1
e35b3fa843fa: Loading layer 105.4MB/105.4MB
09543938a98c: Loading layer 105.4MB/105.4MB
Loaded image: quay.io/thanos/thanos:v0.36.1
350d647fd75d: Loading layer 30.13MB/30.13MB
dd9de64bc0c2: Loading layer 37.35MB/37.35MB
7762046c5170: Loading layer 3.072kB/3.072kB
5f24d63cc50f: Loading layer 4.608kB/4.608kB
5f70bf18a086: Loading layer 1.024kB/1.024kB
Loaded image: quay.io/prometheus/alertmanager:v0.27.0
959f83fe7eee: Loading layer 46.32MB/46.32MB
Loaded image: quay.io/prometheus-operator/admission-webhook:v0.76.1
cb60fb9b862c: Loading layer 3.676MB/3.676MB
b952ff3a0beb: Loading layer 43.86MB/43.86MB
Loaded image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-
v1.5.1-58-g787ea74b6
6b2693025d4f: Loading layer 58.28MB/58.28MB
Loaded image: quay.io/prometheus-operator/prometheus-operator:v0.76.1
fb3a815a8d56: Loading layer 39.48MB/39.48MB
Loaded image: quay.io/prometheus-operator/prometheus-config-reloader:v0.76.1

bash
[root@master pag]# docker tag quay.io/prometheus/prometheus:v2.54.1 www.rin.com/prometheus/prometheus:v2.54.1
[root@master pag]# docker push www.rin.com/prometheus/prometheus:v2.54.1
[root@master pag]# docker tag quay.io/prometheus/alertmanager:v0.27.0 www.rin.com/prometheus/alertmanager:v0.27.0
[root@master pag]# docker push www.rin.com/prometheus/alertmanager:v0.27.0

bash
[root@master pag]# docker tag quay.io/thanos/thanos:v0.36.1 www.rin.com/thanos/thanos:v0.36.1
[root@master pag]# docker push www.rin.com/thanos/thanos:v0.36.1

bash
[root@master pag]# docker tag quay.io/prometheus-operator/admission-webhook:v0.76.1 www.rin.com/prometheus-operator/admission-webhook:v0.76.1
[root@master pag]# docker push www.rin.com/prometheus-operator/admission-webhook:v0.76.1
[root@master pag]# docker tag quay.io/prometheus-operator/prometheus-operator:v0.76.1 www.rin.com/prometheus-operator/prometheus-operator:v0.76.1
[root@master pag]# docker push www.rin.com/prometheus-operator/prometheus-operator:v0.76.1
[root@master pag]# docker tag quay.io/prometheus-operator/prometheus-config-reloader:v0.76.1 www.rin.com/prometheus-operator/prometheus-config-reloader:v0.76.1
[root@master pag]# docker push www.rin.com/prometheus-operator/prometheus-config-reloader:v0.76.1

bash
[root@master pag]# docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6 www.rin.com/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6
[root@master pag]# docker push www.rin.com/ingress-nginx/kube-webhook-certgen:v20221220-controller-v1.5.1-58-g787ea74b6
Second bundle:
bash
[root@master pag]# docker load -i grafana-11.2.0.tar
d4fc045c9e3a: Loading layer 7.667MB/7.667MB
89b7e78c1264: Loading layer 2.56kB/2.56kB
63c0e4417684: Loading layer 8.806MB/8.806MB
de6ac4ba26dc: Loading layer 9.713MB/9.713MB
6523361408ca: Loading layer 190kB/190kB
11f6d7670b43: Loading layer 104.4kB/104.4kB
6271cd8eca5a: Loading layer 244.4MB/244.4MB
4beb5a56ae2e: Loading layer 215.2MB/215.2MB
9c67d537403a: Loading layer 37.89kB/37.89kB
e7392c8d1dd0: Loading layer 5.12kB/5.12kB
Loaded image: grafana/grafana:11.2.0
02f2bcb26af5: Loading layer 8.079MB/8.079MB
2b4aacde20ab: Loading layer 1.702MB/1.702MB
2d4dfc806acc: Loading layer 33.61MB/33.61MB
f17b1f60a500: Loading layer 5.12kB/5.12kB
b460a573bcd3: Loading layer 11MB/11MB
fa77f493bd78: Loading layer 1.536kB/1.536kB
3c5db9d441df: Loading layer 33.09MB/33.09MB
Loaded image: quay.io/kiwigrid/k8s-sidecar:1.27.4
78561cef0761: Loading layer 8.082MB/8.082MB
7cc8f366b357: Loading layer 116.4MB/116.4MB
daa732f7b271: Loading layer 5.596MB/5.596MB
28576ee1ff32: Loading layer 3.584kB/3.584kB
da5ace9a9a44: Loading layer 2.56kB/2.56kB
efde071ed7b6: Loading layer 675.8MB/675.8MB
5f70bf18a086: Loading layer 1.024kB/1.024kB
59a9cb4b149e: Loading layer 12.29kB/12.29kB
d541e36bb231: Loading layer 401.5MB/401.5MB
1d0b500916c4: Loading layer 194.6kB/194.6kB
d127d61d966a: Loading layer 11.78kB/11.78kB
491113abcd19: Loading layer 4.608kB/4.608kB
f341cc92dfca: Loading layer 4.096kB/4.096kB
Loaded image: grafana/grafana-image-renderer:latest
4fc242d58285: Loading layer 5.855MB/5.855MB
8c8a77c39b5d: Loading layer 7.602MB/7.602MB
d84d68cc0c42: Loading layer 3.584kB/3.584kB
ab8bab9f30cb: Loading layer 11.26kB/11.26kB
a6e091de7872: Loading layer 3.218MB/3.218MB
373940d283b5: Loading layer 37.13MB/37.13MB
394979703c64: Loading layer 3.072kB/3.072kB
34dbd88eb82a: Loading layer 502.3kB/502.3kB
e8f538a7dfac: Loading layer 1.536kB/1.536kB
5f70bf18a086: Loading layer 1.024kB/1.024kB
Loaded image: bats/bats:v1.4.1

bash
[root@master pag]# docker tag grafana/grafana:11.2.0 www.rin.com/grafana/grafana:11.2.0
[root@master pag]# docker push www.rin.com/grafana/grafana:11.2.0
[root@master pag]# docker tag grafana/grafana-image-renderer:latest www.rin.com/grafana/grafana-image-renderer:latest
[root@master pag]# docker push www.rin.com/grafana/grafana-image-renderer:latest

bash
[root@master pag]# docker tag quay.io/kiwigrid/k8s-sidecar:1.27.4 www.rin.com/kiwigrid/k8s-sidecar:1.27.4
[root@master pag]# docker push www.rin.com/kiwigrid/k8s-sidecar:1.27.4

bash
[root@master pag]# docker tag bats/bats:v1.4.1 www.rin.com/bats/bats:v1.4.1
[root@master pag]# docker push www.rin.com/bats/bats:v1.4.1
Third bundle:
bash
[root@master pag]# docker load -i node-exporter-v1.8.2.tar
4f3f7dd00054: Loading layer 20.5MB/20.5MB
Loaded image: reg.timinglee.org/prometheus/node-exporter:v1.8.2
bash
[root@master pag]# docker tag reg.timinglee.org/prometheus/node-exporter:v1.8.2 www.rin.com/prometheus/node-exporter:v1.8.2
[root@master pag]# docker push www.rin.com/prometheus/node-exporter:v1.8.2
Fourth bundle:
bash
[root@master pag]# docker load -i kube-state-metrics-2.13.0.tar
a5abb629b070: Loading layer 43.49MB/43.49MB
Loaded image: registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0
a9155867c889: Loading layer 68.39MB/68.39MB
Loaded image: quay.io/brancz/kube-rbac-proxy:v0.18.0

bash
[root@master pag]# docker tag registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.13.0 www.rin.com/kube-state-metrics/kube-state-metrics:v2.13.0
[root@master pag]# docker push www.rin.com/kube-state-metrics/kube-state-metrics:v2.13.0

bash
[root@master pag]# docker tag quay.io/brancz/kube-rbac-proxy:v0.18.0 www.rin.com/brancz/kube-rbac-proxy:v0.18.0
[root@master pag]# docker push www.rin.com/brancz/kube-rbac-proxy:v0.18.0
Fifth bundle:
bash
[root@master pag]# docker load -i nginx-exporter-1.3.0-debian-12-r2.tar
016ff07f0ae3: Loading layer 149.3MB/149.3MB
Loaded image: bitnami/nginx-exporter:1.3.0-debian-12-r2

bash
[root@master pag]# docker tag bitnami/nginx-exporter:1.3.0-debian-12-r2 www.rin.com/bitnami/nginx-exporter:1.3.0-debian-12-r2
[root@master pag]# docker push www.rin.com/bitnami/nginx-exporter:1.3.0-debian-12-r2
Edit the file:
bash
[root@master kube-prometheus-stack]# vim values.yaml
227 imageRegistry: "www.rin.com"
bash
# Install Prometheus with helm
[root@master kube-prometheus-stack]# kubectl create namespace kube-prometheus-stack
namespace/kube-prometheus-stack created
# Note: never press Ctrl+C during the installation
[root@master kube-prometheus-stack]# helm -n kube-prometheus-stack install kube-prometheus-stack .
NAME: kube-prometheus-stack
LAST DEPLOYED: Fri Aug 22 04:26:30 2025
NAMESPACE: kube-prometheus-stack
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace kube-prometheus-stack get pods -l "release=kube-prometheus-stack"
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
# Check that all pods are running
[root@master kube-prometheus-stack]# kubectl --namespace kube-prometheus-stack get pods
NAME READY STATUS RESTARTS AGE
alertmanager-kube-prometheus-stack-alertmanager-0 2/2 Running 0 74s
kube-prometheus-stack-grafana-78f59b54cc-rmrrg 3/3 Running 0 97s
kube-prometheus-stack-kube-state-metrics-5bcc97f7fd-4nzqx 1/1 Running 0 97s
kube-prometheus-stack-operator-569dfb64b9-5jmqt 1/1 Running 0 97s
kube-prometheus-stack-prometheus-node-exporter-7wvgn 1/1 Running 0 97s
kube-prometheus-stack-prometheus-node-exporter-dgc9p 1/1 Running 0 97s
kube-prometheus-stack-prometheus-node-exporter-wltvm 1/1 Running 0 97s
prometheus-kube-prometheus-stack-prometheus-0 2/2 Running 0 73s
# Check the services
[root@master kube-prometheus-stack]# kubectl -n kube-prometheus-stack get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 93s
kube-prometheus-stack-alertmanager ClusterIP 10.101.9.30 <none> 9093/TCP,8080/TCP 116s
kube-prometheus-stack-grafana ClusterIP 10.102.49.82 <none> 80/TCP 116s
kube-prometheus-stack-kube-state-metrics ClusterIP 10.110.235.17 <none> 8080/TCP 116s
kube-prometheus-stack-operator ClusterIP 10.111.10.239 <none> 443/TCP 116s
kube-prometheus-stack-prometheus ClusterIP 10.98.131.116 <none> 9090/TCP,8080/TCP 116s
kube-prometheus-stack-prometheus-node-exporter ClusterIP 10.109.138.3 <none> 9100/TCP 116s
prometheus-operated ClusterIP None <none> 9090/TCP 92s
# Change the service exposure type
[root@master kube-prometheus-stack]# kubectl -n kube-prometheus-stack edit svc kube-prometheus-stack-grafana
39 type: LoadBalancer
# Check again
[root@master kube-prometheus-stack]# kubectl -n kube-prometheus-stack get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 2m51s
kube-prometheus-stack-alertmanager ClusterIP 10.101.9.30 <none> 9093/TCP,8080/TCP 3m14s
kube-prometheus-stack-grafana LoadBalancer 10.102.49.82 172.25.254.50 80:32561/TCP 3m14s
kube-prometheus-stack-kube-state-metrics ClusterIP 10.110.235.17 <none> 8080/TCP 3m14s
kube-prometheus-stack-operator ClusterIP 10.111.10.239 <none> 443/TCP 3m14s
kube-prometheus-stack-prometheus ClusterIP 10.98.131.116 <none> 9090/TCP,8080/TCP 3m14s
kube-prometheus-stack-prometheus-node-exporter ClusterIP 10.109.138.3 <none> 9100/TCP 3m14s
prometheus-operated ClusterIP None <none> 9090/TCP
Open the web page:

Log in to grafana
bash
# Get the grafana credentials
[root@master kube-prometheus-stack]# kubectl -n kube-prometheus-stack get secrets kube-prometheus-stack-grafana -o yaml
data:
admin-password: cHJvbS1vcGVyYXRvcg==
admin-user: YWRtaW4=
# Decode the password
[root@master kube-prometheus-stack]# echo -n "cHJvbS1vcGVyYXRvcg==" | base64 -d
prom-operator #password
[root@master kube-prometheus-stack]# echo -n "YWRtaW4=" | base64 -d
admin #user
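Kubernetes Secret data is only base64-encoded, not encrypted, so the decode above can be verified anywhere with a quick round trip (no cluster needed):

```shell
# Encode and decode round trip of the grafana admin password
enc=$(printf 'prom-operator' | base64)
echo "$enc"                       # cHJvbS1vcGVyYXRvcg==
printf '%s' "$enc" | base64 -d    # prom-operator
```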
Log in:


Import dashboards
Official dashboard templates: Grafana dashboards | Grafana Labs
The import steps are as follows:





The result after import:

Access the Prometheus main UI
bash
[root@master kube-prometheus-stack]# kubectl -n kube-prometheus-stack edit svc kube-prometheus-stack-prometheus
48 type: LoadBalancer
[root@master kube-prometheus-stack]# kubectl -n kube-prometheus-stack get svc kube-prometheus-stack-prometheus
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-prometheus-stack-prometheus LoadBalancer 10.98.131.116 172.25.254.51 9090:32512/TCP,8080:32757/TCP 15m


Monitoring usage example
Set up a monitored project
bash
# Helm charts required for this example
[root@master pag]# ls
grafana-11.2.0.tar nginx-18.1.11.tgz prometheus-adapter
kube-prometheus-stack nginx-exporter-1.3.0-debian-12-r2.tar prometheus-adapter-4.11.0.tgz
kube-prometheus-stack-62.6.0.tgz node-exporter-v1.8.2.tar
kube-state-metrics-2.13.0.tar prometheus-62.6.0.tar
[root@master pag]# tar xzf nginx-18.1.11.tgz
[root@master pag]# ls
grafana-11.2.0.tar nginx prometheus-62.6.0.tar
kube-prometheus-stack nginx-18.1.11.tgz prometheus-adapter
kube-prometheus-stack-62.6.0.tgz nginx-exporter-1.3.0-debian-12-r2.tar prometheus-adapter-4.11.0.tgz
kube-state-metrics-2.13.0.tar node-exporter-v1.8.2.tar
[root@master pag]# cd nginx/
# Edit the chart values to enable monitoring
[root@master nginx]# vim values.yaml
925 metrics:
926 ## @param metrics.enabled Start a Prometheus exporter sidecar container
927 ##
928 enabled: true
1015 serviceMonitor:
1016 ## @param metrics.serviceMonitor.enabled Creates a Prometheus Operator ServiceMonitor (also requires `metrics.enabled` to be `true`)
1017 ##
1018 enabled: true
1019 ## @param metrics.serviceMonitor.namespace Namespace in which Prometheus is running
1020 ##
1021 namespace: "kube-prometheus-stack"
1046 labels:
1047 release: kube-prometheus-stack
[root@master nginx]# kubectl -n kube-prometheus-stack get servicemonitors.monitoring.coreos.com --show-labels
NAME AGE LABELS
kube-prometheus-stack-alertmanager 41m app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/part-of=kube-prometheus-stack,app.kubernetes.io/version=62.6.0,app=kube-prometheus-stack-alertmanager,chart=kube-prometheus-stack-62.6.0,heritage=Helm,release=kube-prometheus-stack
kube-prometheus-stack-apiserver 41m app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/part-of=kube-prometheus-stack,app.kubernetes.io/version=62.6.0,app=kube-prometheus-stack-apiserver,chart=kube-prometheus-stack-62.6.0,heritage=Helm,release=kube-prometheus-stack
kube-prometheus-stack-coredns 41m app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/part-of=kube-prometheus-stack,app.kubernetes.io/version=62.6.0,app=kube-prometheus-stack-coredns,chart=kube-prometheus-stack-62.6.0,heritage=Helm,release=kube-prometheus-stack
kube-prometheus-stack-grafana 41m app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=grafana,app.kubernetes.io/version=11.2.0,helm.sh/chart=grafana-8.5.1
kube-prometheus-stack-kube-controller-manager 41m app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/part-of=kube-prometheus-stack,app.kubernetes.io/version=62.6.0,app=kube-prometheus-stack-kube-controller-manager,chart=kube-prometheus-stack-62.6.0,heritage=Helm,release=kube-prometheus-stack
kube-prometheus-stack-kube-etcd 41m app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/part-of=kube-prometheus-stack,app.kubernetes.io/version=62.6.0,app=kube-prometheus-stack-kube-etcd,chart=kube-prometheus-stack-62.6.0,heritage=Helm,release=kube-prometheus-stack
kube-prometheus-stack-kube-proxy 41m app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/part-of=kube-prometheus-stack,app.kubernetes.io/version=62.6.0,app=kube-prometheus-stack-kube-proxy,chart=kube-prometheus-stack-62.6.0,heritage=Helm,release=kube-prometheus-stack
kube-prometheus-stack-kube-scheduler 41m app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/part-of=kube-prometheus-stack,app.kubernetes.io/version=62.6.0,app=kube-prometheus-stack-kube-scheduler,chart=kube-prometheus-stack-62.6.0,heritage=Helm,release=kube-prometheus-stack
kube-prometheus-stack-kube-state-metrics 41m app.kubernetes.io/component=metrics,app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=kube-state-metrics,app.kubernetes.io/part-of=kube-state-metrics,app.kubernetes.io/version=2.13.0,helm.sh/chart=kube-state-metrics-5.25.1,release=kube-prometheus-stack
kube-prometheus-stack-kubelet 41m app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/part-of=kube-prometheus-stack,app.kubernetes.io/version=62.6.0,app=kube-prometheus-stack-kubelet,chart=kube-prometheus-stack-62.6.0,heritage=Helm,release=kube-prometheus-stack
kube-prometheus-stack-operator 41m app.kubernetes.io/component=prometheus-operator,app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=kube-prometheus-stack-prometheus-operator,app.kubernetes.io/part-of=kube-prometheus-stack,app.kubernetes.io/version=62.6.0,app=kube-prometheus-stack-operator,chart=kube-prometheus-stack-62.6.0,heritage=Helm,release=kube-prometheus-stack
kube-prometheus-stack-prometheus 41m app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/part-of=kube-prometheus-stack,app.kubernetes.io/version=62.6.0,app=kube-prometheus-stack-prometheus,chart=kube-prometheus-stack-62.6.0,heritage=Helm,release=kube-prometheus-stack
kube-prometheus-stack-prometheus-node-exporter 41m app.kubernetes.io/component=metrics,app.kubernetes.io/instance=kube-prometheus-stack,app.kubernetes.io/managed-by=Helm,app.kubernetes.io/name=prometheus-node-exporter,app.kubernetes.io/part-of=prometheus-node-exporter,app.kubernetes.io/version=1.8.2,helm.sh/chart=prometheus-node-exporter-4.39.0,release=kube-prometheus-stack
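Instead of editing `values.yaml` in place, the same settings can be kept in a small override file and passed to Helm, which makes the change reproducible. A sketch (the file name is illustrative) mirroring the values edited above:

```yaml
# metrics-values.yaml — enables the exporter sidecar and a ServiceMonitor
# labeled so the kube-prometheus-stack Prometheus picks it up
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    namespace: "kube-prometheus-stack"
    labels:
      release: kube-prometheus-stack
```

It would then be installed with `helm install rinrin . -f metrics-values.yaml`.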
Install the chart:
bash
[root@master nginx]# ls
Chart.lock charts Chart.yaml README.md templates values.schema.json values.yaml
[root@master nginx]# helm install rinrin .
NAME: rinrin
LAST DEPLOYED: Fri Aug 22 05:12:13 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 18.1.11
APP VERSION: 1.27.1
[root@master nginx]# kubectl get pods
NAME READY STATUS RESTARTS AGE
rinrin-nginx-5dfdf9f9-hgh2f 2/2 Running 0 50s
[root@master nginx]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h59m
rinrin-nginx LoadBalancer 10.99.209.146 172.25.254.52 80:32626/TCP,443:30793/TCP,9113:31136/TCP 56s
[root@master nginx]# curl 172.25.254.52
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
# Load test (5 concurrent clients, 100 requests)
[root@master nginx]# ab -c 5 -n 100 http://172.25.254.52/index.html
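After the load test, the scraped nginx metrics should show the traffic in the Prometheus UI. For example (metric names come from the nginx exporter sidecar and may vary with the exporter version):

```promql
# Total requests handled, and the per-second request rate over the last minute
nginx_http_requests_total
rate(nginx_http_requests_total[1m])
```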
Monitoring adjustments

