03. Kubernetes Automated Deployment, Namespaces, and Pods

2026-04-20

Review and Preview

Yesterday's class topics

  1. Container networking
  2. Container storage
  3. Container namespaces
  4. Kubernetes introduction
  5. Kubernetes installation

Pre-class review

  • Kubernetes automated deployment

Today's class topics

  1. Node and Cluster
  2. Namespace and Contexts
  3. Pod Basics

Configuring a Dedicated Management Node

The example below uses an Ubuntu 24.04 template machine.

bash
# Configure the package repository
[root@ubuntu2404 ~ 10:18:44]#  curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
[root@ubuntu2404 ~ 10:20:16]# echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/deb/ /" > /etc/apt/sources.list.d/kubernetes.list

# Install kubectl
[root@ubuntu2404 ~ 10:20:20]# apt update && apt install -y kubectl=1.30.2.1-1.1

# Configure credentials (copy admin.conf from the master)
[root@ubuntu2404 ~ 10:21:14]# mkdir .kube
[root@ubuntu2404 ~ 10:21:19]# scp root@10.1.8.30:/etc/kubernetes/admin.conf .kube/config

# Verify
[root@ubuntu2404 ~ 10:25:13]# kubectl --kubeconfig .kube/config get nodes
NAME                   STATUS   ROLES           AGE     VERSION
master30.laoma.cloud   Ready    control-plane   2d18h   v1.30.2
worker31.laoma.cloud   Ready    <none>          2d17h   v1.30.2
worker32.laoma.cloud   Ready    <none>          2d17h   v1.30.2
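
Output like the above is easy to check mechanically in scripts. A small sketch that counts Ready nodes by running awk over a captured copy of the sample output (no cluster needed; with a live cluster you would pipe `kubectl get nodes` straight into awk):

```shell
# Count nodes whose STATUS column is exactly "Ready";
# NR>1 skips the header line of the captured output.
ready=$(awk 'NR>1 && $2=="Ready"{n++} END{print n}' <<'EOF'
NAME                   STATUS   ROLES           AGE     VERSION
master30.laoma.cloud   Ready    control-plane   2d18h   v1.30.2
worker31.laoma.cloud   Ready    <none>          2d17h   v1.30.2
worker32.laoma.cloud   Ready    <none>          2d17h   v1.30.2
EOF
)
echo "$ready node(s) Ready"   # → 3 node(s) Ready
```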

If you have multiple clusters, you can manage them with command aliases:

bash
scp 10.1.8.30:/etc/kubernetes/admin.conf .kube/cluster1.config
scp 10.1.8.40:/etc/kubernetes/admin.conf .kube/cluster2.config

kubectl get nodes --kubeconfig .kube/cluster1.config 
kubectl get nodes --kubeconfig .kube/cluster2.config 

alias kubectl-1='kubectl --kubeconfig .kube/cluster1.config'
alias kubectl-2='kubectl --kubeconfig .kube/cluster2.config'

kubectl-1 get nodes
kubectl-2 get nodes
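
Besides shell aliases, kubectl can merge several kubeconfig files on its own: the KUBECONFIG environment variable takes a colon-separated list of paths, and kubectl then presents the union of their clusters, users, and contexts. A minimal sketch (the file names follow the example above):

```shell
# kubectl merges every file listed in the colon-separated KUBECONFIG
# variable, so both clusters' contexts are visible to a single kubectl.
export KUBECONFIG="$HOME/.kube/cluster1.config:$HOME/.kube/cluster2.config"
echo "$KUBECONFIG"
# kubectl config get-contexts   # would now list contexts from both files
```

With this in place, `kubectl config use-context` switches between the clusters without any aliases.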

Node and Cluster

Reference: Node

Viewing nodes

bash
# List the nodes
[root@master30 ~]# kubectl get nodes
NAME                   STATUS   ROLES           AGE   VERSION
master30.laoma.cloud   Ready    control-plane   37h   v1.30.2
worker31.laoma.cloud   Ready    <none>          36h   v1.30.2
worker32.laoma.cloud   Ready    <none>          36h   v1.30.2

# View a single node
[root@master30 ~]# kubectl get node master30.laoma.cloud 
NAME                   STATUS   ROLES           AGE     VERSION
master30.laoma.cloud   Ready    control-plane   2d19h   v1.30.2

# View the node definition (resource spec) in YAML
[root@master30 ~]# kubectl get node master30.laoma.cloud -o yaml


# View detailed information for a specific node
[root@master30 ~]# kubectl describe node master30.laoma.cloud

Removing a node

The worker31 node is used as an example.

Before removing a node, make sure no applications are running on it.

bash
# Evict the pods from the node and mark it unschedulable (maintenance mode)
[root@master30 ~]# kubectl drain worker31.laoma.cloud --ignore-daemonsets
node/worker31.laoma.cloud cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/calico-node-v8jdn, kube-system/kube-proxy-27vl2
evicting pod kube-system/calico-kube-controllers-7cb4fd5784-jx2xl
pod/calico-kube-controllers-7cb4fd5784-jx2xl evicted
node/worker31.laoma.cloud drained

[root@master30 ~]# kubectl get nodes
NAME                   STATUS                     ROLES           AGE   VERSION
master30.laoma.cloud   Ready                      control-plane   41h   v1.30.2
worker31.laoma.cloud   Ready,SchedulingDisabled   <none>          41h   v1.30.2
worker32.laoma.cloud   Ready                      <none>          41h   v1.30.2

# Delete the worker31 node
[root@master30 ~]# kubectl delete node worker31.laoma.cloud
node "worker31.laoma.cloud" deleted
[root@master30 ~]# kubectl get nodes
NAME                   STATUS   ROLES           AGE   VERSION
master30.laoma.cloud   Ready    control-plane   41h   v1.30.2
worker32.laoma.cloud   Ready    <none>          41h   v1.30.2

# Reset the removed worker31 node (run on worker31 itself)
[root@worker31 ~]# kubeadm reset -f
[preflight] Running pre-flight checks
W1019 07:37:44.242023    7660 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Tearing down the cluster

Teardown workflow:

  1. Remove all worker nodes
  2. Remove all master nodes

Steps:

  1. Remove all worker nodes
bash
[root@master30 ~]# kubectl drain worker31.laoma.cloud --ignore-daemonsets --force
[root@master30 ~]# kubectl drain worker32.laoma.cloud --ignore-daemonsets --force
[root@master30 ~]# kubectl delete node worker31.laoma.cloud worker32.laoma.cloud

# Reset the nodes; note which host each command runs on
[root@worker31 ~]# kubeadm reset -f
[root@worker32 ~]# kubeadm reset -f
  2. Remove the master node
bash
# Save the cluster configuration before tearing the cluster down
[root@master30 ~]# kubectl get cm kubeadm-config -n kube-system -o yaml > kubeadm.yml

# Edit kubeadm.yml as follows:
[root@master30 ~]# vim kubeadm.yml
# Delete lines 1-3 and 22-28, leaving:
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: v1.30.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.224.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

[root@master30 ~]# kubectl delete node master30.laoma.cloud
[root@master30 ~]# kubeadm reset -f
[root@master30 ~]# rm -fr .kube/

Rebuilding the cluster

bash
# Initialize the cluster
[root@master30 ~]# kubeadm init --config kubeadm.yml

# Alternatively, use the original init command
[root@master30 ~]# kubeadm init --kubernetes-version=v1.30.2 --pod-network-cidr=10.224.0.0/16

# Configure credentials
[root@master30 ~]# mkdir -p $HOME/.kube
[root@master30 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master30 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Configure networking
[root@master30 ~]# kubectl apply -f calico.yaml

# Join the cluster (if the token has expired, run `kubeadm token create --print-join-command` on the master to get a fresh join command)
[root@worker31 ~]# kubeadm join 10.1.8.30:6443 --token ky95b9.sjg0fn21pdi1m0xz    --discovery-token-ca-cert-hash sha256xxxxxxx
[root@worker32 ~]# kubeadm join 10.1.8.30:6443 --token ky95b9.sjg0fn21pdi1m0xz    --discovery-token-ca-cert-hash sha256xxxxxxx

Namespace and Contexts

Introducing Namespaces

**Question:** when multiple users share one Kubernetes cluster, how can the resources they create be isolated from each other?

**Answer:** with a Namespace (abbreviated ns, also called a project). A namespace represents a collection of resources and is used to group cluster resources: Kubernetes can logically divide one physical cluster into multiple resource collections, each of which is a namespace. Resources in different namespaces are kept separate from one another (note that this is logical grouping; network traffic between namespaces, for example, is not blocked by default).

Kubernetes creates the following namespaces by default:

  • default: resources created without an explicit namespace end up here.
  • kube-system: resources created by Kubernetes itself live here.
  • kube-public: objects in this namespace can be read by all users, including unauthenticated ones.
  • kube-node-lease: holds the Lease object associated with each node. Node leases let the kubelet send heartbeats so the control plane can detect node failures.

Managing Namespaces

bash
# List the namespaces
[root@master30 ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   7d4h
kube-node-lease   Active   7d4h
kube-public       Active   7d4h
kube-system       Active   7d4h

# Get the namespace definition in YAML
[root@master30 ~]# kubectl get ns default -o yaml

# Create a namespace
[root@master30 ~]# kubectl create ns laoma
# Note: a namespace name must match the regex [a-z0-9]([-a-z0-9]*[a-z0-9])? and be at most 63 characters long

# Edit a namespace in place
[root@master30 ~]# kubectl edit ns laoma

# View detailed namespace information
[root@master30 ~]# kubectl describe ns laoma
Name:         laoma
Labels:       <none>
Annotations:  <none>
Status:       Active

No resource quota.

No LimitRange resource.

# Delete a namespace
[root@master30 ~]# kubectl delete ns laoma
namespace "laoma" deleted
# Notes:
# Deleting a namespace automatically deletes every resource inside it.
# The default and kube-system namespaces cannot be deleted.
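
The naming rule above can be checked in the shell before calling kubectl. A small sketch (the `is_valid_ns` helper is illustrative, not a kubectl feature):

```shell
# Validate a candidate namespace name against the rule quoted above:
# must match [a-z0-9]([-a-z0-9]*[a-z0-9])? and be at most 63 characters.
is_valid_ns() {
  name=$1
  [ "${#name}" -le 63 ] || return 1
  printf '%s\n' "$name" | grep -Eq '^[a-z0-9]([-a-z0-9]*[a-z0-9])?$'
}

is_valid_ns laoma  && echo "laoma: ok"
is_valid_ns Laoma  || echo "Laoma: invalid (uppercase not allowed)"
is_valid_ns ab-    || echo "ab-: invalid (must end with a letter or digit)"
```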

A namespace can also be created from a YAML file.

yaml
# ns-laoma.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: laoma
bash
[root@master30 ~]# kubectl apply -f ns-laoma.yaml 
namespace/laoma created

To operate on objects in a specific namespace, pass the namespace with the -n option, for example:

bash
[root@master30 ~]# kubectl run web --image=nginx -n laoma
pod/web created

[root@master30 ~]# kubectl get pod -n laoma -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE                   NOMINATED NODE   READINESS GATES
web    1/1     Running   0          39s   10.224.113.129   worker32.laoma.cloud   <none>           <none>

[root@master30 ~]# curl -s 10.224.113.129  | grep nginx
<title>Welcome to nginx!</title>
<h1>Welcome to nginx!</h1>
<p>If you see this page, nginx is successfully installed and working.
<a href="https://nginx.org/">nginx.org</a>.<br/>
<a href="https://community.nginx.org/">community.nginx.org</a>.<br/>
<a href="https://f5.com/nginx">f5.com/nginx</a>.</p>
<p><em>Thank you for using nginx.</em></p>

Switching Namespaces

Built-in kubectl tooling

Check the current namespace

bash
[root@master30 ~]# kubectl config get-contexts 
CURRENT   NAME                          CLUSTER      AUTHINFO          NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin

Here the NAMESPACE column is empty: the context has no namespace set, so kubectl falls back to the default namespace.

Set the default namespace

bash
[root@master30 ~]# kubectl config set-context --current --namespace=laoma
Context "kubernetes-admin@kubernetes" modified.

[root@master30 ~]# kubectl config get-contexts 
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin   laoma

Now the NAMESPACE column shows laoma.
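
The current namespace can also be scraped out of the get-contexts output in a script. A sketch that runs awk over a captured copy of the output above (with a live cluster, `kubectl config view --minify -o jsonpath='{..namespace}'` does the same job directly):

```shell
# Extract the current namespace: column 5 of the line marked with *
# in captured `kubectl config get-contexts` output.
ns=$(awk '/^\*/{print $5}' <<'EOF'
CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin   laoma
EOF
)
echo "current namespace: $ns"
```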

kubens, a third-party tool

Installing kubens

bash
[root@master30 ~]# wget https://codeload.github.com/ahmetb/kubectx/zip/refs/heads/master -O kubectx.zip
[root@master30 ~]# unzip kubectx.zip
[root@master30 ~]# ls kubectx-master/
CONTRIBUTING.md  README.md  completion  go.sum  internal  kubens
LICENSE          cmd        go.mod      img     kubectx   test
[root@master30 ~]# cp kubectx-master/kubens /usr/local/bin/
[root@master30 ~]# chmod +x /usr/local/bin/kubens

# Configure shell completion
[root@master30 ~]# cp kubectx-master/completion/kubens.bash /etc/bash_completion.d/
[root@master30 ~]# source /etc/bash_completion.d/kubens.bash

Using kubens

bash
[root@master30 ~]# kubens -h
USAGE:
  kubens                    : list the namespaces in the current context
  kubens <NAME>             : change the active namespace of current context
  kubens -                  : switch to the previous namespace in this context
  kubens -c, --current      : show the current namespace
  kubens -h,--help          : show this message

[root@master30 ~]# kubens 
default
kube-node-lease
kube-public
kube-system
laoma

[root@master30 ~]# kubens kube-system
[root@master30 ~]# kubectl config get-contexts

CURRENT   NAME                          CLUSTER      AUTHINFO           NAMESPACE
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin   kube-system

[root@master30 ~]# kubectl get pods
NAME                                          READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7cb4fd5784-dqv6n      1/1     Running   0          19m
calico-node-5k2bp                             1/1     Running   0          18m
calico-node-6vpt6                             1/1     Running   0          19m
calico-node-7x82d                             1/1     Running   0          19m
coredns-66f779496c-h89xx                      1/1     Running   0          19m
coredns-66f779496c-zqqxd                      1/1     Running   0          19m
etcd-master30.laoma.cloud                     1/1     Running   2          19m
kube-apiserver-master30.laoma.cloud           1/1     Running   2          19m
kube-controller-manager-master30.laoma.cloud  1/1     Running   2          19m
kube-proxy-7l9pr                              1/1     Running   0          19m
kube-proxy-9djwp                              1/1     Running   0          19m
kube-proxy-n2bk9                              1/1     Running   0          18m
kube-scheduler-master30.laoma.cloud           1/1     Running   2          19m

A custom script

bash
#!/bin/bash
function usage(){
  echo "$0 [namespace]
1. Without a namespace argument, list the namespaces
2. The namespace must be one that exists in the cluster"
}

# Get the current namespace
namespace=$(kubectl config get-contexts | awk '/\*/{print $5}')
if [ -z "$namespace" ];then
  namespace=default
fi

# List the namespaces, marking the current one with *
if [ $# -eq 0 ];then
  kubectl get ns | awk '{print "  "$1}' | sed "s/^  $namespace/\* $namespace/"
  exit 0
fi

# Switch namespace
if [ $# -eq 1 ];then
  if kubectl get ns | awk '$1 !~ "NAME" {print $1}' | grep -qx "$1";then
    kubectl config set-context --current --namespace $1 > /dev/null
    echo "Current namespace set to: $1"
  else
    usage
  fi
fi

Switching Clusters

The ~/.kube/config file in detail

Single-cluster configuration

View the credentials file (sensitive data omitted):

bash
[root@master30 ~]# kubectl config view
yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.1.8.30:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    namespace: kube-system
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

An actual configuration file looks like this:

yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeU1EUXdNakExTURZd05Wb1hEVE15TURNek1EQTFNRFl3TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHg5ClZnNGUyR3dUTkVUaUowTjBqV09hZ2NvQVRnclFzVXVjWU1Jb281YWFZZGtTZGY2amdJanhTeEtNTmZyL3MrQWQKems5S25CL2UwS0ZxSEJwOEhsQy9VZ0JTK1V2THl6b2VuWVNFM29MWmJzdDhwY1JXL0ZWQVdQckRkS1VhcTdJcApyVXpvQlRTSHRHSEluNitLR1htZ3BwWkdLK3pRT2pjTWYyT2IxOGZ0NDZDbXNFT24rMmhKZk13eHBUd3k0dEgvCktteWhnVStYUys1c0Z5L0N2R1gwbUEzbWpzeHZ4VnFmamQ0T2F0RVVhamN5aERVc3FxRVVBUmNWZGdpbkhUc2IKdWVhc3NieUFZSGRJWGFiMjQvVzg0ZXpDYk8vMGhvcHhZZFhGVDYxQVc3R2RNL0IvNlYyU0hDWUFCd2FKU1dmNApZWHFpajJoMndUNDdYZHZscHNzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZLWlJpc2FLS1c4YjFFcUlHeHhTcTkrdUdEWFNNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBRjhHSjI1c1RqVEd3RDRoSDVmegp5bXU2WGhSd212ejBzemNKUTVKVVlsSndpUlJ0NUVBZ2lNUHZ0MndIeGt1UExGWDAwRFZ2MWorQkEzaEt5Wk5GClVaQUdrRmVwK0JyOWZYUWxVSE15OHZTZjFZSVhBRk43MFdBT0dFdVZjWEJDeDFNbWlBTndObUQyck1LdmFJK1cKci80Y1hMdXdVZlNUaUZaeFpaVGdaRkFtSFJ0QnVvSmpoaStmQjRPT2xTRFNEMU9PaFZjVVc3bWo3YkhJUjN1cgpzRnFYakp5eEc0T0JadlM5MTBXMUNtYTJWUTFCYnprY1h0YzJZOUdDSS9Cd3h2a0labWFSYy9ETFY1M0YxSmxXCnluNmovZEpvdmd6SGZqaVhheUxoekpaUDhMaG9ULzFacFQ2K214Uml0SHVMM2RVRW9oL2ttSzZqclBjZlFUWlMKTnRrPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://10.1.8.30:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJYm9pOWR1alB4dEF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TWpBME1ESXdOVEEyTURWYUZ3MHlNekEwTURJd05UQTJNRGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXhSbWh6MkpjWG83RmNIb1QKQ0d6dHkyRGplM1FvYVl1djVMMmV2V3NpMk14SmZRLzVnRWg4cDdXZEdTQ0NDL0t5a2Fvbk9ZWFh3MkxEVi9yVQpkbEtNbUtTN2s1eG5WWWV1M0EyZGlCMTU1MVVFK0lTelVPTE54c3J0MjdVajRuMDliUU5kSzhjeEZvclhxVUV0ClppVTdHY0dOZ3hRUVA4MkhGSDdTb09RTXRtTmo0bXdMWHVZbEFWODZ5TXl5RThQYjF5Y0FoQjhVY1BFUmRUeWQKeEtUY3FVbG0xYnQ4cU1CRzlYWUZFNk1IUUpldXk0Rnk3NjlYL2ZJNk9SZCtuUGdVWHVVd0FUYVdYR2JIeWhoUApDS0dKWXdzY09PRUFmWnRWV3RYUkxiM1pEWWpiVmFqTzllT0ZHYnorcjRxVDR2NEpkL1dZZ1RLcmUvTDBzVlhRCkROZWZDd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JTbVVZckdpaWx2RzlSS2lCc2NVcXZmcmhnMQowakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBckt4cjhFbEtYN2FYWUMvdzZhUFg4M3JvRHl0Q0NudTBTMHpWCm5WTVFnMUsvUmhRSGV5TnY2SGtycTJXSFhoUjh1L2M3Ukw3WGwyeDlzcWhPWmR3cGRrUlBXK0RENEs4TFMwYjAKUnhwdGxraW44UFE5QkZEOExuNDlmdmNhRG9LT0lhV2djZFFVMFRKQVZoZ1RNQ3FZZ0ZNTUhQa3I2d2Q2WVFtMgp3KzhsVlNCNytmeERKVlhkeUZWMFdOdk5Jb1JrM0FCcGEyOVVVZlZoY1FnZGI2M1RFamgwUlI0d014bjhhY2E1CnNiaE1wVmZiT2ZUU0pRTUJGRkRkMG5DWnJSN3NxQWluOUE2M3RydWZLTzJMTEF0YUVJeDRMUzJWZ09veG5odWEKQlh6UGp4SHF5TTR1WFRqQVlOMk44K2wxWXRBeSs0VTg2ZzVEMEZZeUpSTE5NMU5VL2c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeFJtaHoySmNYbzdGY0hvVENHenR5MkRqZTNRb2FZdXY1TDJldldzaTJNeEpmUS81CmdFaDhwN1dkR1NDQ0MvS3lrYW9uT1lYWHcyTERWL3JVZGxLTW1LUzdrNXhuVllldTNBMmRpQjE1NTFVRStJU3oKVU9MTnhzcnQyN1VqNG4wOWJRTmRLOGN4Rm9yWHFVRXRaaVU3R2NHTmd4UVFQODJIRkg3U29PUU10bU5qNG13TApYdVlsQVY4NnlNeXlFOFBiMXljQWhCOFVjUEVSZFR5ZHhLVGNxVWxtMWJ0OHFNQkc5WFlGRTZNSFFKZXV5NEZ5Cjc2OVgvZkk2T1JkK25QZ1VYdVV3QVRhV1hHYkh5aGhQQ0tHSll3c2NPT0VBZlp0Vld0WFJMYjNaRFlqYlZhak8KOWVPRkdieityNHFUNHY0SmQvV1lnVEtyZS9MMHNWWFFETmVmQ3dJREFRQUJBb0lCQUNkS0hMOUNWRGRsTG1abApielhXd1BBeHVDYjcyTEp4YmZhaTllbThXWTN0NnhoSy91bGJpYjNFcmpROERyQmpDTVdRclpFQjVTakZuenNDCmZTZTQvTjNRdUxPTUVlMHl4dUNHdGtoVDErRU5TWmhnbTM0Y04valFxdW1KQ2tZendQTGlJTWlCUkgvQjNZdVgKdW4wS0h1WGJkMklSdGN1Q0pOTXBGTU9Oc2hzSkMyRTI1cmhXazlZZFdKTVVQV2JjMGZUd0RDdHhVSXpRNWhycApOa0tGenpBekdiVFA2dXA2dWJrYzQ4Zm9BNDVTRUJLNkhhcFZBNEQ1NzNQNFNmZWRodmhMVG9lbmQzM3NmeTJYCjlkS1dFOUg4VUQ3VGFRc3N6MG8xSU1xK0xVVU5UNUVqRkVCOXVtemIxTHB4YW1wUFJqVVpXU2ZEcXJDbGlvMkUKWHFaWkd3RUNnWUVBMm5Tbmd2R0p2NUZhdFVQeGppbWMyNHhOWjZSTWhRRFc3OG9TRUYrUUVoV2RscGlwTk94QQpVNTJNY1NrSmNMR1Jkd3VIaWdjQXVWVW90N09sak9MU1Zzak4yN0xRQTFraTNXbjYxeWVKUSs5MVpZMW9GaUJQCmpBZWJoS1BJanFPT2p3YUsrMEJ5cGFJb2dTb215dzErNklrSE5kNmJkc2FQbG1aMU9wc0hjRHNDZ1lFQTV2bG0KOGNvUUlEWWx5Q0x3YzMzSWRHd0xsWU9KUmFNdCs5QXRNY1VaZGZZN2gvaUlWTE4wSEZkUUFkOWxLUUxyNlpZVApuS0lWcUdzU1BiM2kyMCsyZHNVN3VPUjNOVkZCeHcvZUVVM1FnTjFtVTRnZzBLNnpORUpkKzZaYVo2cUQyVVhRCnY3cTdiOHhzekJoUkNMakdjb1huL2ZnWFBIWmRDUE9VVUVIWTczRUNnWUVBek1RSHVDK2JoSnRFd1IvY3JmckgKY3V1Q0txSFFyK0xubFlCOWlpZHBMZXBnK3FaQ0JMOW1WSG9iQ0g4RXdFTlJMSnI4QXg4cFNJOVFTVkQwM3FoRgpyTjh3UnJ6SFNqd2srQkc4OUN1MCtKN2VGY0NFVGlrZkp3eUNjOFBwMi9ublNKMURiTnN1RzU5eUJCQjBxR1FRCkR2dFNiT1lxSngxYnZnaHYzZTB1L2IwQ2dZRUF4NHc1TURQT2NzWFZKbTlwSlo1S0RLczc1dFJaU0Z5T1lidWQKRUI2a3ZKRWJKWUhHNXNhVFRkanhPbXp5VE5oRlVPMWp6RE1NV3hFR0ZXbDBFTjF4V25OVUFZMEFvSU92UEhlcwo5MjR1OE9aV2ZWeGlYV2hSVXBqejhYSHJNUnpVQkdhWXpzeFpHMkdWclU1azFCQXZBc3BGZjlsUzJkMjR5djhGCjU4QzcxMEVDZ1lBV0p4NmpuV05vbGc1QzhvSGpDT0Jkb0RGdHhGeEpiT2M5QUJ2aHpZbGlTSHp5aTY3RjVQZ28KREdaWXJjWWZxYVY5MVl5R24xaWQybG5GcVZoZ1VGYnpGS2t4U0FaS0FJVDRwbGNucVRWQm9oc1UwMkxqSUdEZwpESU5SQW8xb0dqRTdOM3FWUVFzS0huSDBnUXNwWVZpN3NFRzRDdWtZSDQ2MUNCM0dET0N0b3c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

Notes:

  • certificate-authority-data holds the CA certificate, base64-encoded.
bash
[root@master30 ~]# echo LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJYm9pOWR1alB4dEF3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TWpBME1ESXdOVEEyTURWYUZ3MHlNekEwTURJd05UQTJNRGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXhSbWh6MkpjWG83RmNIb1QKQ0d6dHkyRGplM1FvYVl1djVMMmV2V3NpMk14SmZRLzVnRWg4cDdXZEdTQ0NDL0t5a2Fvbk9ZWFh3MkxEVi9yVQpkbEtNbUtTN2s1eG5WWWV1M0EyZGlCMTU1MVVFK0lTelVPTE54c3J0MjdVajRuMDliUU5kSzhjeEZvclhxVUV0ClppVTdHY0dOZ3hRUVA4MkhGSDdTb09RTXRtTmo0bXdMWHVZbEFWODZ5TXl5RThQYjF5Y0FoQjhVY1BFUmRUeWQKeEtUY3FVbG0xYnQ4cU1CRzlYWUZFNk1IUUpldXk0Rnk3NjlYL2ZJNk9SZCtuUGdVWHVVd0FUYVdYR2JIeWhoUApDS0dKWXdzY09PRUFmWnRWV3RYUkxiM1pEWWpiVmFqTzllT0ZHYnorcjRxVDR2NEpkL1dZZ1RLcmUvTDBzVlhRCkROZWZDd0lEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JTbVVZckdpaWx2RzlSS2lCc2NVcXZmcmhnMQowakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBckt4cjhFbEtYN2FYWUMvdzZhUFg4M3JvRHl0Q0NudTBTMHpWCm5WTVFnMUsvUmhRSGV5TnY2SGtycTJXSFhoUjh1L2M3Ukw3WGwyeDlzcWhPWmR3cGRrUlBXK0RENEs4TFMwYjAKUnhwdGxraW44UFE5QkZEOExuNDlmdmNhRG9LT0lhV2djZFFVMFRKQVZoZ1RNQ3FZZ0ZNTUhQa3I2d2Q2WVFtMgp3KzhsVlNCNytmeERKVlhkeUZWMFdOdk5Jb1JrM0FCcGEyOVVVZlZoY1FnZGI2M1RFamgwUlI0d014bjhhY2E1CnNiaE1wVmZiT2ZUU0pRTUJGRkRkMG5DWnJSN3NxQWluOUE2M3RydWZLTzJMTEF0YUVJeDRMUzJWZ09veG5odWEKQlh6UGp4SHF5TTR1WFRqQVlOMk44K2wxWXRBeSs0VTg2ZzVEMEZZeUpSTE5NMU5VL2c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg== | base64 -d
-----BEGIN CERTIFICATE-----
MIIDITCCAgmgAwIBAgIIboi9dujPxtAwDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
AxMKa3ViZXJuZXRlczAeFw0yMjA0MDIwNTA2MDVaFw0yMzA0MDIwNTA2MDdaMDQx
FzAVBgNVBAoTDnN5c3RlbTptYXN0ZXJzMRkwFwYDVQQDExBrdWJlcm5ldGVzLWFk
bWluMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAxRmhz2JcXo7FcHoT
CGzty2Dje3QoaYuv5L2evWsi2MxJfQ/5gEh8p7WdGSCCC/KykaonOYXXw2LDV/rU
dlKMmKS7k5xnVYeu3A2diB1551UE+ISzUOLNxsrt27Uj4n09bQNdK8cxForXqUEt
ZiU7GcGNgxQQP82HFH7SoOQMtmNj4mwLXuYlAV86yMyyE8Pb1ycAhB8UcPERdTyd
xKTcqUlm1bt8qMBG9XYFE6MHQJeuy4Fy769X/fI6ORd+nPgUXuUwATaWXGbHyhhP
CKGJYwscOOEAfZtVWtXRLb3ZDYjbVajO9eOFGbz+r4qT4v4Jd/WYgTKre/L0sVXQ
DNefCwIDAQABo1YwVDAOBgNVHQ8BAf8EBAMCBaAwEwYDVR0lBAwwCgYIKwYBBQUH
AwIwDAYDVR0TAQH/BAIwADAfBgNVHSMEGDAWgBSmUYrGiilvG9RKiBscUqvfrhg1
0jANBgkqhkiG9w0BAQsFAAOCAQEArKxr8ElKX7aXYC/w6aPX83roDytCCnu0S0zV
nVMQg1K/RhQHeyNv6Hkrq2WHXhR8u/c7RL7Xl2x9sqhOZdwpdkRPW+DD4K8LS0b0
Rxptlkin8PQ9BFD8Ln49fvcaDoKOIaWgcdQU0TJAVhgTMCqYgFMMHPkr6wd6YQm2
w+8lVSB7+fxDJVXdyFV0WNvNIoRk3ABpa29UUfVhcQgdb63TEjh0RR4wMxn8aca5
sbhMpVfbOfTSJQMBFFDd0nCZrR7sqAin9A63trufKO2LLAtaEIx4LS2VgOoxnhua
BXzPjxHqyM4uXTjAYN2N8+l1YtAy+4U86g5D0FYyJRLNM1NU/g==
-----END CERTIFICATE-----
  • client-certificate-data holds the user certificate, also base64-encoded.
  • client-key-data holds the user's private key, also base64-encoded.
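
Any of these fields can be decoded the same way as the CA certificate above. A small sketch on fake data (GNU coreutils `base64` assumed; the payload here is just the PEM header, not a real certificate):

```shell
# Pull the base64 payload out of a kubeconfig-shaped line and decode it.
line='    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t'
decoded=$(printf '%s\n' "$line" | awk '{print $2}' | base64 -d)
echo "$decoded"   # → -----BEGIN CERTIFICATE-----
```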
bash
# Or use the following command, which hides the actual certificate and key contents
[root@master30 ~]# kubectl config view
yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.1.8.30:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

Multi-cluster configuration

Example: /root/multi-cluster-config

yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.1.8.30:6443
  name: cluster1
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.1.8.150:6443
  name: cluster2
contexts:
- context:
    cluster: cluster1
    namespace: default
    user: cluster1-admin
  name: cluster1-context
- context:
    cluster: cluster2
    namespace: default
    user: cluster2-admin
  name: cluster2-context
current-context: cluster1-context
kind: Config
preferences: {}
users:
- name: cluster1-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
- name: cluster2-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

The configuration file defines the following resources:

  • clusters: cluster information, including the cluster name, certificate authority, address, and port.
  • users: user information, including the user name, certificate, and key.
  • contexts: context information, including the context name, cluster name, user name, and namespace.
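
To see at a glance which API server each cluster entry points to, the file can be grepped directly. A sketch over a trimmed copy of the clusters section above (with a live config, `kubectl config get-clusters` or `kubectl config view -o jsonpath=...` is the proper tool):

```shell
# Print "name<TAB>server" pairs from a kubeconfig-shaped clusters section.
# In each entry the server line precedes the name line, so remember the
# last server seen and emit it when the name arrives.
pairs=$(awk '/server:/ {srv=$2} /name:/ {print $2 "\t" srv}' <<'EOF'
- cluster:
    server: https://10.1.8.30:6443
  name: cluster1
- cluster:
    server: https://10.1.8.150:6443
  name: cluster2
EOF
)
echo "$pairs"
```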

Switching between multiple clusters

Built-in kubectl tooling

Environment preparation:

bash
[root@master30 ~]# cp .kube/config .kube/config.ori
[root@master30 ~]# cp multi-cluster-config .kube/config

List the clusters

bash
[root@master30 ~]# kubectl config get-clusters
NAME
cluster1
cluster2

List the contexts

bash
[root@master30 ~]# kubectl config get-contexts
CURRENT   NAME               CLUSTER    AUTHINFO         NAMESPACE
*         cluster1-context   cluster1   cluster1-admin   default
          cluster2-context   cluster2   cluster2-admin   default

The * marks the context currently in use.

Switch contexts

bash
[root@master30 ~]# kubectl config use-context cluster2-context
[root@master30 ~]# kubectl config get-contexts
CURRENT   NAME               CLUSTER    AUTHINFO         NAMESPACE
          cluster1-context   cluster1   cluster1-admin   default
*         cluster2-context   cluster2   cluster2-admin   default

kubectx, a third-party tool

Installing kubectx

bash
[root@master30 ~]# wget https://codeload.github.com/ahmetb/kubectx/zip/refs/heads/master -O kubectx.zip
[root@master30 ~]# unzip kubectx.zip
[root@master30 ~]# ls kubectx-master/
CONTRIBUTING.md  README.md  completion  go.sum  internal  kubens
LICENSE          cmd        go.mod      img     kubectx   test
[root@master30 ~]# cp kubectx-master/kubectx /usr/local/bin/
[root@master30 ~]# chmod +x /usr/local/bin/kubectx

# Configure shell completion
[root@master30 ~]# cp kubectx-master/completion/kubectx.bash /etc/bash_completion.d/
[root@master30 ~]# source /etc/bash_completion.d/kubectx.bash

kubectx help

bash
[root@master30 ~]# kubectx -h
USAGE:
  kubectx                       : list the contexts
  kubectx <NAME>                : switch to context <NAME>
  kubectx -                     : switch to the previous context
  kubectx -c, --current         : show the current context name
  kubectx <NEW_NAME>=<NAME>     : rename context <NAME> to <NEW_NAME>
  kubectx <NEW_NAME>=.          : rename current-context to <NEW_NAME>
  kubectx -d <NAME> [<NAME...>] : delete context <NAME> ('.' for current-context)
                                  (this command won't delete the user/cluster entry
                                  that is used by the context)
  kubectx -u, --unset           : unset the current context

  kubectx -h,--help             : show this message

Using kubectx

bash
[root@master30 ~]# kubectx
cluster1-context
cluster2-context
[root@master30 ~]# kubectx -c
cluster2-context

[root@master30 ~]# kubectx cluster1-context
Switched to context "cluster1-context".
[root@master30 ~]# kubectx -c
cluster1-context

[root@master30 ~]# kubectx -u
Unsetting current context.
Property "current-context" unset.
[root@master30 ~]# kubectx -c
error: current-context is not set

Food for thought: does every object belong to a Namespace?

**Answer:** most Kubernetes resources (pods, services, PVCs, and so on) belong to some namespace, but the Namespace resource itself is not in a namespace, and lower-level resources such as Node and PersistentVolume are not in any namespace either. You can list the two groups with kubectl api-resources --namespaced=true and kubectl api-resources --namespaced=false.

Pod Basics

Environment preparation

bash
[root@master30 ~]# kubectl create ns pods
[root@master30 ~]# kubectl config set-context --current --namespace pods

Introducing Pods

A pod represents one unit of deployment: a single instance of an application in Kubernetes.

In Kubernetes you define a Pod resource and run containers inside it; each container specifies an image and runs a concrete service. A Pod represents a running process on the cluster. A Pod wraps one container (or several), and the containers in a Pod share storage, network, and so on. In other words, think of the Pod as a virtual machine and of each container as a process running inside that virtual machine.

  • A Pod running a single container: think of the Pod as a wrapper around the container; Kubernetes manages Pods rather than managing containers directly.

  • A Pod running multiple containers: a Pod can encapsulate an application composed of several co-located containers. The containers in a Pod automatically run on the same physical or virtual machine in the cluster; they share resources and dependencies, communicate with each other, and coordinate when and how they are terminated. The Pod wraps these containers together with network and storage resources into a single manageable unit. Each Pod is assigned a unique IP address, and the containers in the Pod share the network namespace, including the IP address and ports, so they communicate with each other over localhost. When a container in the Pod talks to another Pod, it uses the shared network resources. A Pod can use multiple volumes, and every container in the Pod can access them.

Plain-language version: think of a Pod as a pea pod and the containers as the peas inside it. The peas in one pod absorb the same nutrients, fertilizer, and water; in the same way, the containers in a Pod share the Pod's network, storage, and so on.

For example: one web container serves shared files while another container keeps the file contents up to date.
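
That example can be sketched as a two-container Pod sharing an emptyDir volume (a minimal sketch; the image names and the updater's command are illustrative, not part of the lesson):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-updater
spec:
  volumes:
  - name: html                 # shared between the two containers
    emptyDir: {}
  containers:
  - name: web                  # serves the shared files
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: updater              # rewrites the content every 10 seconds
    image: busybox
    command: ["sh", "-c", "while true; do date > /html/index.html; sleep 10; done"]
    volumeMounts:
    - name: html
      mountPath: /html
```

Both containers mount the same volume at different paths, and because they share the Pod's network namespace, a `curl localhost` run in the updater container would reach nginx as well.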

Basic pod management

Creating a pod

bash
# Dry run: generate the YAML without creating anything
[root@master30 ~]# kubectl run web --image=nginx --dry-run=client -o yaml
yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: web
  name: web
spec:
  containers:
  - image: nginx
    name: web
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
bash
# Create the pod directly
[root@master30 ~]# kubectl run web --image=nginx
pod/web created

# The container is being created
[root@master30 ~]# kubectl get pod
NAME   READY   STATUS              RESTARTS   AGE
web    0/1     ContainerCreating   0          36s

# Now running
[root@master30 ~]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          51s

[root@master30 ~]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP              NODE                   NOMINATED NODE   READINESS GATES
web    1/1     Running   0          67s   10.224.51.131   worker31.laoma.cloud   <none>           <none>

[root@master30 ~]# ssh root@worker31 nerdctl ps --namespace k8s.io |grep web
[root@master30 ~]# ssh root@worker31 nerdctl images --namespace k8s.io |grep nginx

Viewing a pod

bash
# View the pod in YAML format
[root@master30 ~]# kubectl get pod web -o yaml

# View pod status information
[root@master30 ~]# kubectl describe pod web
......
# Pay particular attention to the events at the bottom
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  27m   default-scheduler  Successfully assigned pods/web to worker31.laoma.cloud
  Normal  Pulling    27m   kubelet            Pulling image "nginx"
  Normal  Pulled     27m   kubelet            Successfully pulled image "nginx" in 296ms (296ms including waiting). Image size: 62960006 bytes.
  Normal  Created    27m   kubelet            Created container web
  Normal  Started    27m   kubelet            Started container web
# describe is one of the most useful commands for troubleshooting pods

# View the pod's standard output; add the -f option to follow it live
[root@master30 ~]# kubectl logs -f web
# logs is the other go-to command for troubleshooting pods

Editing a pod

bash
[root@master30 ~]# kubectl edit pod web

For example, change the image to httpd.

Tip: not every field can be edited; the pod name, for instance, cannot. To change such a field, delete the pod, edit its YAML file, and recreate it.

Running commands in a pod

bash
[root@master30 ~]# kubectl exec web -- hostname
web
[root@master30 ~]# kubectl exec web -- pwd
/
[root@master30 ~]# kubectl exec -it web -- bash -c 'echo Hello World > htdocs/index.html'
[root@master30 ~]# kubectl exec web -- id
uid=0(root) gid=0(root) groups=0(root)

Copying files into a pod

bash
# Note: kubectl cp relies on the tar binary being present inside the container
[root@master30 ~]# kubectl cp /etc/hosts web:/new-hosts
[root@master30 ~]# kubectl exec web -- ls /new-hosts
/new-hosts

Deleting a pod

bash
[root@master30 ~]# kubectl delete pod web
pod "web" deleted

Termination grace period

bash
[root@master30 ~]# kubectl get pod web -o yaml|grep ' terminationGracePeriodSeconds'
  terminationGracePeriodSeconds: 30
# The pod gets up to 30 seconds to shut down gracefully before it is force-killed.

Creating a pod from a YAML file

yaml
# web.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: web
  name: web
spec:
  containers:
  - image: nginx
    name: web
bash
[root@master30 ~]# kubectl create -f web.yaml
# or (apply also updates the object if it already exists)
[root@master30 ~]# kubectl apply -f web.yaml

Capstone exercise: building WordPress

  1. Create the pod-mysql database
  2. Create the pod-wordpress blog

Reference steps:

bash
# Create the MySQL database pod
[root@master30 ~]# kubectl run wordpress-db --image=mysql --image-pull-policy IfNotPresent --env MYSQL_ROOT_PASSWORD=123

# Create the WordPress blog pod
[root@master30 ~]# kubectl run wordpress-app --image=wordpress --image-pull-policy IfNotPresent

# Forward a host port so the pod is reachable from outside the cluster
[root@master30 ~]# kubectl port-forward pod/wordpress-app --address 10.1.8.30 8080:80
# Leave this terminal open

# Check the pod IPs
[root@master30 ~]# kubectl get pods -o wide | awk '{print $1" "$6}'
NAME IP
wordpress-app 10.224.113.135
wordpress-db 10.224.19.5

# Create the database
[root@master30 ~]# mysql -uroot -p123 -h 10.224.19.5
mysql> create database wordpress;
mysql> create user wordpress identified by '123';
mysql> grant all privileges on wordpress.* to wordpress;
mysql> flush privileges;
mysql> exit

Open http://10.1.8.30:8080 in a browser and configure the site.
