1. RBAC Authorization
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context k8s

Context
Create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.

Task
- Create a new ClusterRole named deployment-clusterrole that only allows creating the following resource types: Deployment, StatefulSet, DaemonSet.
- Create a new ServiceAccount named cicd-token in the existing namespace app-team1.
- Limited to the namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token.
Steps
# Mandatory exam step
candidate@node01:~$ kubectl config use-context k8s   # always do this first; build the muscle memory
candidate@node01:~$ kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
clusterrole.rbac.authorization.k8s.io/deployment-clusterrole created
candidate@node01:~$ kubectl -n app-team1 create serviceaccount cicd-token
serviceaccount/cicd-token created
# The task says "limited to namespace app-team1", so create a RoleBinding (namespace-scoped). If it did not say that, you would create a ClusterRoleBinding (cluster-scoped) instead. The RoleBinding name cicd-token-rolebinding is arbitrary since the task does not specify one; if the task names it, you must use that exact name.
candidate@node01:~$ kubectl -n app-team1 create rolebinding cicd-token-rolebinding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
rolebinding.rbac.authorization.k8s.io/cicd-token-rolebinding created
# Verify
candidate@node01:~$ kubectl -n app-team1 describe rolebinding cicd-token-rolebinding
Name:         cicd-token-rolebinding
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  deployment-clusterrole
Subjects:
  Kind            Name        Namespace
  ----            ----        ---------
  ServiceAccount  cicd-token  app-team1
candidate@node01:~$ kubectl auth can-i create deployment --as system:serviceaccount:app-team1:cicd-token
no
candidate@node01:~$ kubectl auth can-i create deployment -n app-team1 --as system:serviceaccount:app-team1:cicd-token
yes
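For reference, a declarative manifest equivalent to the imperative rolebinding command above (a sketch; on the exam the one-liner is faster):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cicd-token-rolebinding
  namespace: app-team1            # namespace-scoped, as the task requires
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: deployment-clusterrole
subjects:
- kind: ServiceAccount
  name: cicd-token
  namespace: app-team1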
2. Find the Pod Using the Most CPU
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context k8s

Task
Via the pod label name=cpu-loader, find the pods consuming a large amount of CPU at runtime, and write the name of the pod with the highest CPU usage to the file /opt/KUTR000401/KUTR00401.txt (which already exists).
Key point: using kubectl top with -l
Steps
# Mandatory exam step
candidate@node01:~$ kubectl config use-context k8s   # always do this first; build the muscle memory
candidate@node01:~$ kubectl top pod -l name=cpu-loader --sort-by=cpu -A
NAMESPACE   NAME                         CPU(cores)   MEMORY(bytes)
cpu-top     redis-test-5db498bbd-h2mfj   2m           7Mi
cpu-top     nginx-host-c58757c-q6k74     0m           5Mi
cpu-top     test0-784f495b5c-2dqdv       0m           4Mi
candidate@node01:~$ echo "redis-test-5db498bbd-h2mfj" >> /opt/KUTR000401/KUTR00401.txt
candidate@node01:~$ cat /opt/KUTR000401/KUTR00401.txt
redis-test-5db498bbd-h2mfj
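If you prefer to script the extraction, a one-liner sketch (copying the name from the table output is safer under exam pressure; note > overwrites the existing file while >> appends):

candidate@node01:~$ kubectl top pod -l name=cpu-loader -A --sort-by=cpu --no-headers | head -n 1 | awk '{print $2}' > /opt/KUTR000401/KUTR00401.txt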
3. Configure a NetworkPolicy
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context k8s
Task
Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace my-app. Ensure that the new NetworkPolicy allows Pods in namespace echo to connect to port 9000 of Pods in namespace my-app.

Further ensure that the new NetworkPolicy:
- does not allow access to Pods that are not listening on port 9000
- does not allow access from Pods that are not in namespace echo

A double negative is a positive, so the last two requirements mean: only allow access to port 9000, and only allow access from Pods in the echo namespace.
Key point: creating a NetworkPolicy
Steps
# Check whether the required label exists. If the source namespace has no label, add one manually; if it already carries a unique label, you can use that one directly.
candidate@node01:~$ kubectl get ns --show-labels
NAME               STATUS   AGE    LABELS
app-team1          Active   271d   kubernetes.io/metadata.name=app-team1
calico-apiserver   Active   271d   kubernetes.io/metadata.name=calico-apiserver,name=calico-apiserver,pod-security.kubernetes.io/enforce-version=latest,pod-security.kubernetes.io/enforce=privileged
calico-system      Active   271d   kubernetes.io/metadata.name=calico-system,name=calico-system,pod-security.kubernetes.io/enforce-version=latest,pod-security.kubernetes.io/enforce=privileged
cpu-top            Active   271d   kubernetes.io/metadata.name=cpu-top
default            Active   271d   kubernetes.io/metadata.name=default
echo               Active   271d   kubernetes.io/metadata.name=echo,project=echo
ing-internal       Active   271d   kubernetes.io/metadata.name=ing-internal
ingress-nginx      Active   271d   kubernetes.io/metadata.name=ingress-nginx
internal           Active   271d   kubernetes.io/metadata.name=internal
kube-node-lease    Active   271d   kubernetes.io/metadata.name=kube-node-lease
kube-public        Active   271d   kubernetes.io/metadata.name=kube-public
kube-system        Active   271d   kubernetes.io/metadata.name=kube-system
my-app             Active   271d   kubernetes.io/metadata.name=my-app
tigera-operator    Active   271d   kubernetes.io/metadata.name=tigera-operator,name=tigera-operator
# Add a label manually
candidate@node01:~$ kubectl label ns echo project=echo
namespace/echo labeled
candidate@node01:~$ kubectl get ns echo --show-labels
NAME   STATUS   AGE    LABELS
echo   Active   271d   kubernetes.io/metadata.name=echo,project=echo
candidate@node01:~$ cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: echo
    ports:
    - protocol: TCP
      port: 9000
candidate@node01:~$ kubectl apply -f networkpolicy.yaml
networkpolicy.networking.k8s.io/allow-port-from-namespace created
# Verify
candidate@node01:~$ kubectl describe networkpolicies -n my-app
Name:         allow-port-from-namespace
Namespace:    my-app
Created on:   2024-02-16 16:43:37 +0800 CST
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     <none> (Allowing the specific traffic to all pods in this namespace)
  Allowing ingress traffic:
    To Port: 9000/TCP
    From:
      NamespaceSelector: project=echo
  Not affecting egress traffic
  Policy Types: Ingress
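To spot-check the policy from a throwaway pod (a sketch; <my-app-pod-ip> is a placeholder to replace with a real Pod IP from kubectl get pod -n my-app -o wide):

candidate@node01:~$ kubectl -n echo run np-test --rm -it --restart=Never --image=busybox -- nc -zv -w 2 <my-app-pod-ip> 9000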
4. Expose a Service
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context k8s
Task
Reconfigure the existing deployment front-end, adding a port specification named http to expose port 80/tcp of the existing container nginx. Create a new service named front-end-svc that exposes the container port http. Configure this service to expose the individual Pods via a NodePort on the nodes on which they are scheduled.
Key point: exposing an existing deployment as a NodePort service.
Steps
candidate@node01:~$ kubectl get deployment front-end -o wide
NAME        READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES              SELECTOR
front-end   1/1     1            1           271d   nginx        vicuu/nginx:hello   app=front-end
candidate@node01:~$ kubectl edit deployment front-end
deployment.apps/front-end edited
......
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: front-end
    spec:
      containers:
      - image: vicuu/nginx:hello
        imagePullPolicy: IfNotPresent
        name: nginx               # find this location, then add the four lines below
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2023-08-27T04:41:17Z"
    lastUpdateTime: "2023-08-27T04:41:17Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
# Expose the port. On the exam, note whether a NodePort or a ClusterIP is required; for ClusterIP use --type=ClusterIP instead. --port is the service's port; --target-port is the container port of the pods in the deployment.
candidate@node01:~$ kubectl expose deployment front-end --type=NodePort --port=80 --target-port=80 --name=front-end-svc
service/front-end-svc exposed
# After exposing the service, check that its selector labels match the deployment's selector labels.
candidate@node01:~$ kubectl get pod,svc -o wide
NAME                                READY   STATUS    RESTARTS       AGE    IP               NODE     NOMINATED NODE   READINESS GATES
pod/11-factor-app                   1/1     Running   6 (158m ago)   271d   10.244.196.182   node01   <none>           <none>
pod/foo                             1/1     Running   6 (158m ago)   271d   10.244.196.178   node01   <none>           <none>
pod/front-end-55f9bb474b-mx49g      1/1     Running   0              83m    10.244.196.184   node01   <none>           <none>
pod/presentation-856b8578cd-gd28p   1/1     Running   6 (158m ago)   271d   10.244.140.124   node02   <none>           <none>

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE    SELECTOR
service/front-end-svc   NodePort    10.111.50.109   <none>        80:31203/TCP   76m    app=front-end
service/kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        272d   <none>
# Verify
candidate@node01:~$ curl node01:31203
Hello World ^_^
candidate@node01:~$ curl 10.111.50.109:80
Hello World ^_^
Note: during the exam, if curl does not work and brief troubleshooting does not fix it, do not dwell on it; move on to the next question. Some candidates have reported curl failing, and it is unclear whether that is an exam-cluster environment issue. (Some reported they had to ssh to the master before curl would work, so it is fine to skip this check.) As long as the steps are done correctly, a failing curl costs at most a few points, since the other steps earn partial credit.
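For reference, a declarative equivalent of the expose command above (a sketch; note that targetPort may reference the named container port http added during the edit):

apiVersion: v1
kind: Service
metadata:
  name: front-end-svc
spec:
  type: NodePort
  selector:
    app: front-end       # must match the Deployment's pod labels
  ports:
  - name: http
    port: 80             # service port
    targetPort: http     # named container port from the edited Deployment
    protocol: TCP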
5. Create an Ingress
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context k8s
Task
Create a new nginx Ingress resource as follows:
- Name: ping
- Namespace: ing-internal
- Expose service hello on path /hello using service port 5678

You can check the availability of service hello with the following command, which should return hello:
curl -kL <INTERNAL_IP>/hello
Key point: creating an Ingress
Steps
# Official docs
https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/
# Switch cluster
candidate@master01:~$ kubectl config use-context k8s
# First write the IngressClass yaml
candidate@node01:~$ cat ingressclass.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: nginx         # this name must match ingressClassName in ingress.yaml
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx
candidate@node01:~$ kubectl apply -f ingressclass.yaml
# Then create ingress.yaml on the host
candidate@master01:~$ cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ping
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx       # must match the name in ingressclass.yaml
  rules:
  - http:
      paths:
      - path: /hello
        pathType: Prefix
        backend:
          service:
            name: hello
            port:
              number: 5678
candidate@master01:~$ kubectl apply -f ingress.yaml
# Verify
candidate@node01:~$ kubectl get ingress -n ing-internal
NAME   CLASS           HOSTS   ADDRESS          PORTS   AGE
ping   nginx-example   *       10.105.186.107   80      13m
# Or find the IP by running:
candidate@master01:~$ kubectl edit ingress -n ing-internal ping
# Test with curl -kL <INTERNAL_IP>/hello; seeing hello proves success
candidate@node01:~$ curl 10.105.186.107/hello
Hello World ^_^
6. Scale a Deployment
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context k8s

Task
Scale the deployment presentation to 4 pods.
Steps
candidate@master01:~$ kubectl config use-context k8s
# Check the current number of pods
candidate@master01:~$ kubectl get deployments.apps presentation
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
presentation   1/1     1            1           276d
# Scale up
candidate@master01:~$ kubectl scale deployment presentation --replicas=4
# Verify
candidate@master01:~$ kubectl get deployments.apps presentation -o wide
NAME           READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES              SELECTOR
presentation   4/4     4            4           276d   nginx        vicuu/nginx:hello   app=presentation
candidate@master01:~$ kubectl get pod -l app=presentation
NAME                            READY   STATUS    RESTARTS      AGE
presentation-856b8578cd-4t2vh   1/1     Running   0             5m49s
presentation-856b8578cd-dtd6k   1/1     Running   0             5m49s
presentation-856b8578cd-f84zd   1/1     Running   0             5m49s
presentation-856b8578cd-gd28p   1/1     Running   6 (50m ago)   276d
Note: if pods show ContainerCreating here, your cluster has a problem, 99% of the time caused by mishandling VMware snapshots. Make sure all 3 VMs were restored to a snapshot taken in the powered-off state (such as the initial snapshot), not one taken while powered on!
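Alternatives to kubectl scale, if you prefer (sketches; either achieves the same result):

candidate@master01:~$ kubectl patch deployment presentation -p '{"spec":{"replicas":4}}'
candidate@master01:~$ kubectl edit deployment presentation   # change spec.replicas to 4 interactively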
7. Schedule a Pod to a Specific Node
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context k8s
Task
Schedule a pod as follows:
- Name: nginx-kusc00401
- Image: nginx
- Node selector: disk=ssd
Key point: using the nodeSelector field
# Official docs
https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/
candidate@master01:~$ kubectl config use-context k8s
# Check which node carries disk=ssd. Here it is node01, so the new pod must be scheduled to node01.
candidate@master01:~$ kubectl get nodes --show-labels
NAME       STATUS   ROLES           AGE    VERSION   LABELS
master01   Ready    control-plane   276d   v1.28.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master01,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node.kubernetes.io/exclude-from-external-load-balancers=
node01     Ready    <none>          276d   v1.28.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux
node02     Ready    <none>          276d   v1.28.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux
# Generate the yaml file
candidate@master01:~$ kubectl run nginx-kusc00401 --image=nginx --dry-run=client -oyaml > pod-nginx.yaml
candidate@master01:~$ cat pod-nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx-kusc00401
  name: nginx-kusc00401
spec:
  containers:
  - image: nginx
    name: nginx-kusc00401
    resources: {}
  # add the following two lines (mind the capitalization of nodeSelector)
  nodeSelector:
    disk: ssd
# Create the pod
candidate@master01:~$ kubectl apply -f pod-nginx.yaml
pod/nginx-kusc00401 created
# Check that the pod was scheduled to node01
candidate@master01:~$ kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS      AGE     IP               NODE     NOMINATED NODE   READINESS GATES
11-factor-app                   1/1     Running   6 (66m ago)   276d    10.244.196.182   node01   <none>           <none>
foo                             1/1     Running   6 (66m ago)   276d    10.244.196.179   node01   <none>           <none>
front-end-cfdfbdb95-v7q4t       1/1     Running   7 (66m ago)   276d    10.244.140.66    node02   <none>           <none>
nginx-kusc00401                 1/1     Running   0             5m26s   10.244.196.186   node01   <none>           <none>
presentation-856b8578cd-4t2vh   1/1     Running   0             21m     10.244.196.185   node01   <none>           <none>
presentation-856b8578cd-dtd6k   1/1     Running   0             21m     10.244.196.184   node01   <none>           <none>
presentation-856b8578cd-f84zd   1/1     Running   0             21m     10.244.140.70    node02   <none>           <none>
presentation-856b8578cd-gd28p   1/1     Running   6 (66m ago)   276d    10.244.140.65    node02   <none>           <none>
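If you want to skip the yaml round-trip, kubectl run can inject the nodeSelector inline via --overrides (a sketch; the override JSON must include apiVersion, and the yaml route above is easier to verify):

candidate@master01:~$ kubectl run nginx-kusc00401 --image=nginx --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"disk":"ssd"}}}'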
8. Count the Available Nodes
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context k8s
Task
Check how many nodes are Ready (excluding nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt.
Steps
# Mandatory first step
candidate@master01:~$ kubectl config use-context k8s
# First count the Ready nodes; the output below shows 3 of them
candidate@master01:~$ kubectl get nodes
NAME       STATUS   ROLES           AGE    VERSION
master01   Ready    control-plane   276d   v1.28.0
node01     Ready    <none>          276d   v1.28.0
node02     Ready    <none>          276d   v1.28.0
# The task excludes Ready nodes tainted NoSchedule, so filter those out
candidate@master01:~$ kubectl describe nodes | grep -i taints
Taints:             node-role.kubernetes.io/control-plane:NoSchedule
Taints:             <none>
Taints:             <none>
# So the node count is 2
candidate@master01:~$ echo 2 > /opt/KUSC00402/kusc00402.txt
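A scriptable cross-check of the manual count (a sketch; it assumes each Taints: line in the describe output corresponds to exactly one node):

candidate@master01:~$ kubectl describe nodes $(kubectl get nodes --no-headers | awk '$2=="Ready"{print $1}') | grep -i taints | grep -vc NoSchedule
# expected here: 2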
9. Create a Multi-Container Pod
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context k8s
Task
Schedule a Pod as follows:
- Name: kucc8
- App containers: 2
- Container names/images:
  - nginx
  - consul
Key point: the Pod concept
Steps
# Switch cluster environment:
[candidate@node-1] $ kubectl config use-context k8s
candidate@master01:~$ kubectl run kucc8 --image=nginx --dry-run=client -oyaml > pod-kucc.yaml
candidate@master01:~$ cat pod-kucc.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kucc8
  name: kucc8
spec:
  containers:
  - image: nginx
    name: nginx                     # the task asks for container names nginx and consul; rename the kucc8 name generated by kubectl run
    imagePullPolicy: IfNotPresent   # optional on the exam; recommended in practice so the pod starts faster
  - image: consul
    name: consul
    imagePullPolicy: IfNotPresent
candidate@master01:~$ kubectl apply -f pod-kucc.yaml
candidate@master01:~$ kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS        AGE    IP               NODE     NOMINATED NODE   READINESS GATES
11-factor-app                   1/1     Running   6 (9m38s ago)   276d   10.244.196.178   node01   <none>           <none>
foo                             1/1     Running   6 (9m37s ago)   276d   10.244.196.179   node01   <none>           <none>
front-end-cfdfbdb95-v7q4t       1/1     Running   7 (9m33s ago)   276d   10.244.140.122   node02   <none>           <none>
kucc8                           2/2     Running   0               78s    10.244.140.70    node02   <none>           <none>
presentation-856b8578cd-gd28p   1/1     Running   6 (9m33s ago)   276d   10.244.140.126   node02   <none>           <none>
10. Create a PV
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context hk8s
Task
Create a persistent volume named app-config with capacity 1Gi and access mode ReadWriteMany. The volume type is hostPath, located at /srv/app-config.
Key point: hostPath-type PV
# Copy from the official docs
https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
[candidate@node-1] $ kubectl config use-context hk8s
[candidate@node-1] $ cat pv-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/srv/app-config"
# Apply the manifest
[candidate@node-1] $ kubectl apply -f pv-volume.yaml
[candidate@node-1] $ kubectl get pv --show-labels
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS      REASON   AGE    LABELS
app-config   1Gi        RWX            Retain           Available                                      47s    <none>
pv01         10Mi       RWO            Retain           Available           csi-hostpath-sc            284d   <none>
11. Create a PVC
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context ok8s
Task
Create a new PersistentVolumeClaim:
- Name: pv-volume
- Class: csi-hostpath-sc
- Capacity: 10Mi

Create a new Pod that mounts the PersistentVolumeClaim as a volume:
- Name: web-server
- Image: nginx:1.16
- Mount path: /usr/share/nginx/html

Configure the new Pod to have ReadWriteOnce access to the volume.

Finally, using kubectl edit or kubectl patch, expand the PersistentVolumeClaim's capacity to 70Mi and record the change.
Key point: creating a PVC, using the storage class, and recording changes with --record
Steps
# Official docs
https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/
# Switch cluster environment:
[candidate@node-1] $ kubectl config use-context ok8s
candidate@master01:~$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
# Create the PVC, then check it
candidate@master01:~$ kubectl apply -f pvc.yaml
candidate@master01:~$ kubectl get pvc
NAME        STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pv-volume   Bound    pv01     10Mi       RWO            csi-hostpath-sc   3m3s
# Write the pod yaml
candidate@master01:~$ cat pvc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: pv-volume
  containers:
  - name: nginx
    image: nginx:1.16
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
# Create the pod and check that it is Running
candidate@master01:~$ kubectl apply -f pvc-pod.yaml
candidate@node01:~$ kubectl get pod web-server -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP              NODE     NOMINATED NODE   READINESS GATES
web-server   1/1     Running   0          3m15s   10.244.140.77   node02   <none>           <none>
# Expand the PVC. Note that the simulated environment uses nfs as backing storage, which does not support online PVC expansion (saving with :wq will error), so just perform the 70Mi change as an exercise. Ceph as backing storage would support it, but the lab cluster is too small to run ceph.
candidate@master01:~$ kubectl edit pvc pv-volume --record
Flag --record has been deprecated, --record will be removed in the future
error: persistentvolumeclaims "pv-volume" could not be patched: persistentvolumeclaims "pv-volume" is forbidden: only dynamically provisioned pvc can be resized and the storageclass that provisions the pvc must support resize
You can run `kubectl replace -f /tmp/kubectl-edit-532133728.yaml` to try this update again.
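The same expansion in kubectl patch form (a sketch; like the edit above, it fails on the nfs-backed lab, but it is a valid way to satisfy the task wherever the storage class supports resize):

candidate@master01:~$ kubectl patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}' --record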
12. View Pod Logs
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context k8s
Task
Monitor the logs of pod foo and:
- extract the log lines corresponding to the error RLIMIT_NOFILE
- write those log lines to /opt/KUTR00101/foo
Key point: the kubectl logs command
Steps
# Mandatory first step
candidate@master01:~$ kubectl config use-context k8s
# Look at the pod named in the task
candidate@master01:~$ kubectl get pod
NAME                            READY   STATUS              RESTARTS       AGE
11-factor-app                   1/1     Terminating         5 (172d ago)   276d
foo                             1/1     Terminating         5 (172d ago)   276d
front-end-cfdfbdb95-v7q4t       1/1     Running             6 (172d ago)   276d
presentation-856b8578cd-gd28p   1/1     Running             5 (172d ago)   276d
web-server                      0/1     ContainerCreating   0              11m
candidate@master01:~$ kubectl logs foo | grep "RLIMIT_NOFILE" >> /opt/KUTR00101/foo
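A quick sanity check of the result (the file should now contain the matching lines):

candidate@master01:~$ cat /opt/KUTR00101/foo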
13. Stream Container Logs with a Sidecar
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context k8s

Context
Integrate an existing Pod into Kubernetes' built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good way to meet this requirement.
Task
Add a sidecar container named sidecar, using the busybox image, to the existing Pod 11-factor-app. The new sidecar container must run the following command:
/bin/sh -c tail -n+1 -f /var/log/11-factor-app.log
Use a Volume mounted at /var/log to make the log file 11-factor-app.log available to the sidecar container. Do not change the specification of the existing container other than adding the required volume mount.
Steps
# Official docs
https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/logging/
# Switch cluster environment:
candidate@node01:~$ kubectl config use-context k8s
candidate@node01:~$ kubectl get pod 11-factor-app -oyaml > counter-pod-streaming-sidecar.yaml
candidate@node01:~$ vim counter-pod-streaming-sidecar.yaml
candidate@node01:~$ cat counter-pod-streaming-sidecar.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 5cefd89ee597e0124c6bf425a897c2da037465e9963842d71c289547648ca8ef
    cni.projectcalico.org/podIP: 10.244.196.180/32
    cni.projectcalico.org/podIPs: 10.244.196.180/32
  creationTimestamp: "2023-05-20T14:22:47Z"
  name: 11-factor-app
  namespace: default
  resourceVersion: "31840"
  uid: f6a5a0ef-7485-4801-8d19-d9eb5c3405e8
spec:
  containers:
  - args:
    - /bin/sh
    - -c
    - i=0; while true; do echo "$(date) INFO $i" >> /var/log/11-factor-app.log; i=$((i+1)); sleep 1; done
    image: busybox
    imagePullPolicy: IfNotPresent
    name: count
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-rlkbh
      readOnly: true
    # added: mount the shared log volume into the existing container too,
    # otherwise the sidecar cannot see /var/log/11-factor-app.log
    - name: varlog
      mountPath: /var/log
  # ---- new content below ----
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -F /var/log/11-factor-app.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  # ---- new content above ----
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node01
  nodeSelector:
    kubernetes.io/hostname: node01
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-rlkbh
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
  # ---- new content below ----
  - name: varlog
    emptyDir: {}
  # ---- new content above ----
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-05-20T14:22:47Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-02-22T05:28:27Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-02-22T05:28:27Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-05-20T14:22:47Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://0932d0d12dc785d796ca76a21769cb1f48c0b4cb8ad6f77993075f50300f461f
    image: docker.io/library/busybox:latest
    imageID: sha256:8135583d97feb82398909c9c97607159e6db2c4ca2c885c0b8f590ee0f9fe90d
    lastState:
      terminated:
        containerID: containerd://2f6b8cdf4055f72c3c9e13f05dac357095d9a5274ae890906993d2b766d07796
        exitCode: 255
        finishedAt: "2024-02-22T05:27:39Z"
        reason: Unknown
        startedAt: "2023-09-01T14:06:51Z"
    name: count
    ready: true
    restartCount: 6
    started: true
    state:
      running:
        startedAt: "2024-02-22T05:28:27Z"
  hostIP: 11.0.1.112
  phase: Running
  podIP: 10.244.196.180
  podIPs:
  - ip: 10.244.196.180
  qosClass: BestEffort
  startTime: "2023-05-20T14:22:47Z"
# Delete the original pod
candidate@node01:~$ kubectl delete -f counter-pod-streaming-sidecar.yaml
# Check that it is gone
candidate@node01:~$ kubectl get pod 11-factor-app
# Recreate the pod
candidate@node01:~$ kubectl apply -f counter-pod-streaming-sidecar.yaml
# Verify. On the exam, only the first check is needed:
kubectl logs 11-factor-app sidecar
# kubectl exec 11-factor-app -c sidecar -- tail -f /var/log/11-factor-app.log
# kubectl exec 11-factor-app -c count -- tail -f /var/log/11-factor-app.log
candidate@node01:~$ kubectl logs 11-factor-app sidecar
Sun Feb 25 07:42:29 UTC 2024 INFO 0
Sun Feb 25 07:42:30 UTC 2024 INFO 1
Sun Feb 25 07:42:31 UTC 2024 INFO 2
Sun Feb 25 07:42:32 UTC 2024 INFO 3
Sun Feb 25 07:42:33 UTC 2024 INFO 4
Sun Feb 25 07:42:34 UTC 2024 INFO 5
Sun Feb 25 07:42:35 UTC 2024 INFO 6
Sun Feb 25 07:42:36 UTC 2024 INFO 7
Sun Feb 25 07:42:37 UTC 2024 INFO 8
Sun Feb 25 07:42:38 UTC 2024 INFO 9
Sun Feb 25 07:42:39 UTC 2024 INFO 10
Sun Feb 25 07:42:40 UTC 2024 INFO 11
Sun Feb 25 07:42:41 UTC 2024 INFO 12
......
14. Upgrade the Cluster
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context mk8s
Task
The existing Kubernetes cluster is running version 1.28.0. Upgrade all Kubernetes control plane and node components on the master node only to version 1.28.1.

Make sure to drain the master node before the upgrade and uncordon it afterwards.

You can ssh to the master node with: ssh master01
You can gain elevated privileges on that master node with: sudo -i

Additionally, upgrade kubelet and kubectl on the master node. Do not upgrade the worker nodes, etcd, the container manager, the CNI plugin, the DNS service, or any other addons.

(Note: when typing commands during the exam, mind the target version; enter the specific version the task asks for!)
Key point: draining a node, then upgrading the control plane and node components
Steps
# Switch cluster environment:
[candidate@node-1] $ kubectl config use-context mk8s
# SSH to the master node
candidate@node01:~$ kubectl get nodes
NAME       STATUS                     ROLES           AGE    VERSION
master01   Ready,SchedulingDisabled   control-plane   277d   v1.28.0
node01     Ready                      <none>          277d   v1.28.0
node02     Ready                      <none>          277d   v1.28.0
candidate@node01:~$ ssh master01
# Gain root privileges
candidate@master01:~$ sudo -i
# Enable tab completion
root@master01:~# source <(kubectl completion bash)
# Drain the master node
root@master01:~# kubectl drain master01 --ignore-daemonsets
node/master01 cordoned
Warning: ignoring DaemonSet-managed Pods: calico-system/calico-node-9d6xh, calico-system/csi-node-driver-jf5j6, kube-system/kube-proxy-m72qj
evicting pod calico-system/calico-typha-5f66cb788d-nf2d7
pod/calico-typha-5f66cb788d-nf2d7 evicted
node/master01 drained
# Check the available versions
root@master01:~# apt-cache show kubeadm | grep 1.28.1
# Install kubeadm v1.28.1
root@master01:~# apt install kubeadm=1.28.1-00
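If apt refuses because the package is pinned (possible on kubeadm-built clusters; the official upgrade docs wrap the install in unhold/hold), a sketch of the workaround:

root@master01:~# apt-mark unhold kubeadm && apt-get install -y kubeadm=1.28.1-00 && apt-mark hold kubeadm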
# Check versions
root@master01:~# kubectl version
Client Version: v1.28.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.0
# Upgrade the control plane on the master node; SUCCESS means it completed
root@master01:~# kubeadm upgrade apply 1.28.1 --etcd-upgrade=false
........
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.28.1". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
.......
# Upgrade the kubectl and kubelet components
root@master01:~# apt-get install kubectl=1.28.1-00 kubelet=1.28.1-00
root@master01:~# kubectl version
Client Version: v1.28.1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.1
root@master01:~# kubelet --version
Kubernetes v1.28.1
# Check
root@master01:~# kubectl get nodes
NAME       STATUS                     ROLES           AGE    VERSION
master01   Ready,SchedulingDisabled   control-plane   278d   v1.28.1
node01     Ready                      <none>          277d   v1.28.0
node02     Ready                      <none>          277d   v1.28.0
# Uncordon master01 to make it schedulable again
root@master01:~# kubectl uncordon master01
node/master01 uncordoned
# Check the node STATUS
root@master01:~# kubectl get nodes
NAME       STATUS   ROLES           AGE    VERSION
master01   Ready    control-plane   278d   v1.28.1
node01     Ready    <none>          277d   v1.28.0
node02     Ready    <none>          277d   v1.28.0
15. ETCD Backup and Restore
Question
Set the configuration context
No configuration change is needed for this task. However, before working on it, make sure you have returned to the initial node.
[candidate@master01] $ exit   # note: this applies if you were previously on master01 and must exit back to node01; if you are already on node01, do not exit again.
Task
First, create a snapshot of the existing etcd instance running at https://11.0.1.111:2379 and save the snapshot to /var/lib/backup/etcd-snapshot.db (note: on the real exam this is written as https://127.0.0.1:2379).

Creating a snapshot of the given instance is expected to complete within a few seconds. If the operation seems to hang, something may be wrong with the command; press CTRL+C to cancel and retry. Then restore the existing previous snapshot located at /data/backup/etcd-snapshot-previous.db.

The following TLS certificates and key are provided to connect to the server via etcdctl:
- CA certificate: /opt/KUIN00601/ca.crt
- Client certificate: /opt/KUIN00601/etcd-client.crt
- Client key: /opt/KUIN00601/etcd-client.key

Note: the simulated environment's task has no location requirement, so work on node01 by default. But if the exam explicitly asks you to do this on the master, first ssh master01, run the commands there, and remember to exit back to node01 when done.
Key point: etcd backup and restore commands
Steps
# Careful: once you finish this etcd question on the exam, do not come back to re-check or re-run it. It shares the cluster with the previous question, so if you return later with the wrong context selected and restore etcd again, you may break other questions. If you do come back to it, be sure to switch to the correct cluster first: kubectl config use-context xxxx
# Mandatory: select etcdctl API version 3 first
candidate@node01:~$ export ETCDCTL_API=3
# Backup
candidate@node01:~$ etcdctl --endpoints=https://11.0.1.111:2379 --cacert="/opt/KUIN00601/ca.crt" --cert="/opt/KUIN00601/etcd-client.crt" --key="/opt/KUIN00601/etcd-client.key" snapshot save /var/lib/backup/etcd-snapshot.db
{"level":"info","ts":"2024-02-23T14:54:17.332+0800","caller":"snapshot/v3_snapshot.go:65","msg":"created temporary db file","path":"/var/lib/backup/etcd-snapshot.db.part"}
{"level":"info","ts":"2024-02-23T14:54:17.343+0800","logger":"client","caller":"v3@v3.5.7/maintenance.go:212","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":"2024-02-23T14:54:17.343+0800","caller":"snapshot/v3_snapshot.go:73","msg":"fetching snapshot","endpoint":"https://11.0.1.111:2379"}
{"level":"info","ts":"2024-02-23T14:54:17.537+0800","logger":"client","caller":"v3@v3.5.7/maintenance.go:220","msg":"completed snapshot read; closing"}
{"level":"info","ts":"2024-02-23T14:54:17.556+0800","caller":"snapshot/v3_snapshot.go:88","msg":"fetched snapshot","endpoint":"https://11.0.1.111:2379","size":"14 MB","took":"now"}
{"level":"info","ts":"2024-02-23T14:54:17.556+0800","caller":"snapshot/v3_snapshot.go:97","msg":"saved","path":"/var/lib/backup/etcd-snapshot.db"}
Snapshot saved at /var/lib/backup/etcd-snapshot.db
# Check
candidate@node01:~$ etcdctl snapshot status /var/lib/backup/etcd-snapshot.db -wtable
Deprecated: Use `etcdutl snapshot status` instead.
+---------+----------+------------+------------+
|  HASH   | REVISION | TOTAL KEYS | TOTAL SIZE |
+---------+----------+------------+------------+
| 2a79b57 |  119008  |       1814 |      14 MB |
+---------+----------+------------+------------+
# Restore. On the exam, /data/backup/etcd-snapshot-previous.db should be readable only by root, so sudo is required.
candidate@node01:~$ etcdctl snapshot restore /data/backup/etcd-snapshot-previous.db
......
Error: open /data/backup/etcd-snapshot-previous.db: permission denied   # insufficient permissions, so re-run with sudo
candidate@node01:~$ sudo etcdctl snapshot restore /data/backup/etcd-snapshot-previous.db
Deprecated: Use `etcdutl snapshot restore` instead.
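If the task specifies a restore directory, add --data-dir (a sketch; /var/lib/etcd-restore here is a hypothetical path, and without --data-dir the data is restored into ./default.etcd in the current directory):

candidate@node01:~$ sudo etcdctl snapshot restore /data/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-restore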
16. Troubleshoot a Failed Node
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context wk8s
Task
The Kubernetes worker node named node02 is in the NotReady state. Investigate why this is the case and take the appropriate measures to bring the node back to the Ready state, ensuring that any changes are made permanent.
You can ssh to node02 with: ssh node02
You can gain elevated privileges on the node with: sudo -i
Steps
# Mandatory on the exam: switch the cluster.
candidate@node01:~$ kubectl config use-context wk8s
# Check the current node status: node02 is NotReady
candidate@node01:~$ kubectl get nodes
NAME       STATUS     ROLES           AGE    VERSION
master01   Ready      control-plane   278d   v1.28.1
node01     Ready      <none>          278d   v1.28.0
node02     NotReady   <none>          278d   v1.28.0
# SSH to the failing node02, then either run sudo -i for a root shell or prefix each command with sudo
candidate@node01:~$ ssh node02
# Check the kubelet service: it is loaded but inactive (dead) and disabled
candidate@node02:~$ systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
     Loaded: loaded (/lib/systemd/system/kubelet.service; disabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/kubelet.service.d
             └─10-kubeadm.conf
     Active: inactive (dead) since Fri 2024-02-23 15:20:00 CST; 2min 9s ago
       Docs: https://kubernetes.io/docs/home/
   Main PID: 1497 (code=exited, status=0/SUCCESS)
Feb 23 15:18:16 node02 kubelet[1497]: E0223 15:18:16.571500    1497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have >
Feb 23 15:18:29 node02 kubelet[1497]: E0223 15:18:29.571427    1497 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have >
.....
# Start the kubelet service and enable it at boot
candidate@node02:~$ sudo systemctl start kubelet.service
candidate@node02:~$ sudo systemctl enable kubelet.service
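The same thing as a single command, if you prefer (enables at boot and starts immediately):

candidate@node02:~$ sudo systemctl enable --now kubelet.service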
# Remember to return to the initial node node01. If you elevated with sudo -i, you must exit twice: once back to the unprivileged shell, once back to node01.
candidate@node02:~$ exit
logout
Connection to node02 closed.
candidate@node01:~$
17. Node Maintenance
Question
Set the configuration context:
[candidate@node-1] $ kubectl config use-context ek8s
Task
Mark the node named node02 as unavailable and reschedule all pods running on it.
Key point: using the cordon and drain commands
Steps
# Mandatory on the exam: switch the cluster.
candidate@node01:~$ kubectl config use-context ek8s
# Drain the node the task names. (drain first cordons the node and then evicts its pods; if the task only asks to make the node unschedulable, cordon alone is enough.)
candidate@node01:~$ kubectl drain node02 --ignore-daemonsets
node/node02 already cordoned
Warning: ignoring DaemonSet-managed Pods: calico-system/calico-node-9wv7w, calico-system/csi-node-driver-rsr2w, ingress-nginx/ingress-nginx-controller-48m8c, kube-system/kube-proxy-4hrk8
evicting pod cpu-top/test0-784f495b5c-2dqdv
evicting pod kube-system/coredns-7bdc4cb885-5hw46
evicting pod calico-apiserver/calico-apiserver-866cccf79f-vmbcb
evicting pod calico-system/calico-kube-controllers-789dc4c76b-vbm6v
evicting pod cpu-top/redis-test-5db498bbd-h2mfj
evicting pod default/front-end-cfdfbdb95-v7q4t
evicting pod default/presentation-856b8578cd-gd28p
evicting pod ing-internal/hello-77766974fd-khpjv
pod/redis-test-5db498bbd-h2mfj evicted
pod/presentation-856b8578cd-gd28p evicted
pod/test0-784f495b5c-2dqdv evicted
I0223 15:40:41.781812  274890 request.go:697] Waited for 1.088016422s due to client-side throttling, not priority and fairness, request: GET:https://11.0.1.111:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-866cccf79f-vmbcb
pod/calico-apiserver-866cccf79f-vmbcb evicted
pod/calico-kube-controllers-789dc4c76b-vbm6v evicted
pod/front-end-cfdfbdb95-v7q4t evicted
pod/hello-77766974fd-khpjv evicted
pod/coredns-7bdc4cb885-5hw46 evicted
node/node02 drained
# Check: node02 now shows as unschedulable
candidate@node01:~$ kubectl get nodes
NAME       STATUS                     ROLES           AGE    VERSION
master01   Ready                      control-plane   278d   v1.28.1
node01     Ready                      <none>          278d   v1.28.0
node02     Ready,SchedulingDisabled   <none>          278d   v1.28.0
# Normally no pods remain on node02. In the lab environment, ingress, calico, and kube-proxy run as DaemonSets, so they still show on node02; ignore them.
candidate@node01:~$ kubectl get pod -A -o wide | grep node02
calico-system   calico-node-9wv7w                1/1   Running   8 (27h ago)    278d   11.0.1.113       node02   <none>   <none>
calico-system   csi-node-driver-rsr2w            2/2   Running   16 (27h ago)   278d   10.244.140.127   node02   <none>   <none>
ingress-nginx   ingress-nginx-controller-48m8c   1/1   Running   6 (27h ago)    278d   11.0.1.113       node02   <none>   <none>
kube-system     kube-proxy-4hrk8                 1/1   Running   0              22h    11.0.1.113       node02   <none>   <none>
Reference links:
3. Configure a NetworkPolicy: Concepts → Services, Load Balancing, and Networking → Network Policies (if the English is hard to read, the page can be switched to Chinese via the upper-right corner)
https://kubernetes.io/zh-cn/docs/concepts/services-networking/network-policies/

4. Expose a Service: Concepts → Workloads → Workload Resources → Deployments
https://kubernetes.io/zh-cn/docs/concepts/workloads/controllers/deployment/

5. Create an Ingress: Concepts → Services, Load Balancing, and Networking → Ingress
https://kubernetes.io/zh-cn/docs/concepts/services-networking/ingress/

7. Schedule a Pod to a Specific Node (kubectl run): Tasks → Configure Pods and Containers → Assign Pods to Nodes
https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/assign-pods-nodes/

9. Create a Multi-Container Pod (kubectl run): Concepts → Workloads → Pods
https://kubernetes.io/zh-cn/docs/concepts/workloads/pods/

10. Create a PV: Tasks → Configure Pods and Containers → Configure a Pod to Use a PersistentVolume for Storage
https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

11. Create a PVC: Tasks → Configure Pods and Containers → Configure a Pod to Use a PersistentVolume for Storage
https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

13. Stream Container Logs with a Sidecar: Concepts → Cluster Administration → Logging Architecture
https://kubernetes.io/zh-cn/docs/concepts/cluster-administration/logging/
Updates:
CKA notes:
1. For the etcd question, the real exam now explicitly asks you to work on the master rather than node01, so ssh to the master-xxx node, switch to root with sudo -i, and then run the commands.
2. For the cluster-upgrade question, the exam currently runs version 1.29.0 and asks you to upgrade to 1.29.1. Run apt-cache show kubeadm | grep 1.29 and you will find the new package is kubeadm=1.29.1-1.1; it ends in -1.1 rather than -00, so watch out. Also, after the upgrade completes, add one step on the master node: systemctl restart kubelet.