Common K8s Commands

Common commands

Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. When working with Kubernetes, you will frequently use the kubectl command-line tool to interact with the cluster. Below are some commonly used kubectl commands:

  1. Get resource information

    • kubectl get nodes - List all nodes.
    • kubectl get pods - List all Pods in the current namespace.
    • kubectl get deployments - List all Deployments in the current namespace.
    • kubectl get services - List all Services in the current namespace.
    • kubectl get namespaces - List all namespaces in the cluster.
  2. Create and delete resources

    • kubectl create -f <file.yaml> - Create resources from a file.
    • kubectl delete -f <file.yaml> - Delete the resources defined in a file.
    • kubectl delete pods,services -l <label> - Delete all Pods and Services matching a given label.
  3. Describe and inspect resources

    • kubectl describe nodes <node-name> - Show detailed information about a node.
    • kubectl describe pods <pod-name> - Show detailed information about a Pod.
    • kubectl logs <pod-name> - Show a Pod's logs.
    • kubectl exec -it <pod-name> -- <command> - Run a command inside a Pod.
  4. Edit and update resources

    • kubectl edit <resource-type>/<name> - Edit a resource and apply the changes.
    • kubectl apply -f <file.yaml> - Apply the configuration changes in a file.
  5. Scale resources

    • kubectl scale deployment <deployment-name> --replicas=<num-replicas> - Adjust the number of replicas of a Deployment.
  6. Labels and annotations

    • kubectl label pods <pod-name> <label-key>=<label-value> - Add a new label to a Pod.
    • kubectl annotate pods <pod-name> <annotation-key>=<annotation-value> - Add a new annotation to a Pod.
  7. Port forwarding and proxying

    • kubectl port-forward <pod-name> <local-port>:<pod-port> - Forward a local port to a Pod port.
    • kubectl proxy - Run a proxy to the Kubernetes API server.
  8. Contexts and cluster configuration

    • kubectl config view - Show the kubectl configuration.
    • kubectl config current-context - Show the current context.
    • kubectl config use-context <context-name> - Switch to the given context.
  9. Rollbacks

    • kubectl rollout undo deployment/<deployment-name> - Roll a Deployment back to its previous revision.
  10. Resource status and events

    • kubectl get events - View cluster events.
    • kubectl rollout status deployment/<deployment-name> - View the rollout status of a Deployment.

Note that the commands above may need to be adjusted for your cluster and namespace. For example, to query resources in a specific namespace, add -n <namespace> to the command.
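As a small illustration of the namespace flag, the hypothetical helper below (a sketch, not part of kubectl) prepends `-n <namespace>` to any kubectl command line. It echoes the assembled command instead of executing it, so the sketch runs without a cluster; drop the `echo` to run it for real:

```shell
#!/bin/sh
# Hypothetical helper: build a namespaced kubectl command line.
# Echoes the command rather than executing it, so no cluster is needed.
kns() {
    ns="$1"; shift
    echo kubectl -n "$ns" "$@"
}

kns kube-system get pods            # → kubectl -n kube-system get pods
kns kube-flannel get pods -o wide   # → kubectl -n kube-flannel get pods -o wide
```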

Actual execution examples

  1. Show node information
    kubectl get node
    kubectl get nodes -o wide
root@ab-P10S-WS:/home/ab# kubectl get nodes -o wide
NAME              STATUS     ROLES           AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION                    CONTAINER-RUNTIME
bitmain-p10s-ws   Ready      control-plane   19d   v1.25.4   172.28.9.241   <none>        Ubuntu 20.04.6 LTS   5.15.0-92-generic                 containerd://1.6.26
se9-6t            Ready      <none>          19d   v1.25.4   172.28.9.40    <none>        Ubuntu 20.04 LTS     5.10.4-tag--00042-g04fcbe819955   containerd://1.6.26
sophon            NotReady   <none>          19d   v1.25.4   172.28.9.130   <none>        Ubuntu 20.04 LTS     5.10.4-tag--00042-g04fcbe819955   containerd://1.6.26
root@bitmain-P10S-WS:/home/bitmain# kubectl get node
NAME              STATUS     ROLES           AGE   VERSION
bitmain-p10s-ws   Ready      control-plane   19d   v1.25.4
se9-6t            Ready      <none>          19d   v1.25.4
sophon            NotReady   <none>  
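The listing above can be post-processed with standard tools. The sketch below uses a heredoc copy of that output so it runs without a cluster (in practice, pipe the live `kubectl get nodes` output instead) and prints only the nodes that are not Ready:

```shell
#!/bin/sh
# Sample output copied from the session above; in practice replace the
# heredoc with a pipe from `kubectl get nodes`.
nodes=$(cat <<'EOF'
NAME              STATUS     ROLES           AGE   VERSION
bitmain-p10s-ws   Ready      control-plane   19d   v1.25.4
se9-6t            Ready      <none>          19d   v1.25.4
sophon            NotReady   <none>          19d   v1.25.4
EOF
)
# Skip the header row, print the name of every node whose STATUS != Ready.
printf '%s\n' "$nodes" | awk 'NR > 1 && $2 != "Ready" { print $1 }'
# → sophon
```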
  2. Show pod information
    kubectl get pods -A
    kubectl get pods -A -o wide
root@ab-P10S-WS:/home/ab# kubectl get pods -A
NAMESPACE      NAME                                      READY   STATUS             RESTARTS   AGE
kube-flannel   kube-flannel-ds-m795j                     1/1     Running            0          19d
kube-flannel   kube-flannel-ds-prbnz                     1/1     Running            0          19d
kube-flannel   kube-flannel-ds-zvtfk                     1/1     Running            0          19d
kube-system    bitmain-tpu-plugin-mgxvm                  0/1     ErrImagePull       0          19d
kube-system    bitmain-tpu-plugin-pjnvz                  0/1     ImagePullBackOff   0          19d
kube-system    coredns-565d847f94-8h7zw                  1/1     Running            0          19d
kube-system    coredns-565d847f94-s4rsm                  1/1     Running            0          19d
kube-system    etcd-bitmain-p10s-ws                      1/1     Running            0          19d
kube-system    kube-apiserver-bitmain-p10s-ws            1/1     Running            0          19d
kube-system    kube-controller-manager-bitmain-p10s-ws   1/1     Running            0          19d
kube-system    kube-proxy-5d496                          1/1     Running            0          19d
kube-system    kube-proxy-fzc5p                          1/1     Running            0          19d
kube-system    kube-proxy-jnjkp                          1/1     Running            0          19d
kube-system    kube-scheduler-bitmain-p10s-ws            1/1     Running            0          19d
kube-system    nginx-t-5dc4b58b8-fjfsx                   0/1     Pending            0          16d
kube-system    nginx-t-5dc4b58b8-qtxcp                   1/1     Running            0          19d
kube-system    nginx-t-5dc4b58b8-zfw4c                   1/1     Terminating        0          19d
root@bitmain-P10S-WS:/home/bitmain# kubectl get pods -A -o wide
NAMESPACE      NAME                                      READY   STATUS             RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-m795j                     1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-flannel   kube-flannel-ds-prbnz                     1/1     Running            0          19d   172.28.9.130   sophon            <none>           <none>
kube-flannel   kube-flannel-ds-zvtfk                     1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
kube-system    bitmain-tpu-plugin-mgxvm                  0/1     ErrImagePull       0          19d   10.244.2.5     se9-6t            <none>           <none>
kube-system    bitmain-tpu-plugin-pjnvz                  0/1     ImagePullBackOff   0          19d   10.244.1.5     sophon            <none>           <none>
kube-system    coredns-565d847f94-8h7zw                  1/1     Running            0          19d   10.244.0.2     bitmain-p10s-ws   <none>           <none>
kube-system    coredns-565d847f94-s4rsm                  1/1     Running            0          19d   10.244.0.3     bitmain-p10s-ws   <none>           <none>
kube-system    etcd-bitmain-p10s-ws                      1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-apiserver-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-controller-manager-bitmain-p10s-ws   1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-proxy-5d496                          1/1     Running            0          19d   172.28.9.130   sophon            <none>           <none>
kube-system    kube-proxy-fzc5p                          1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-proxy-jnjkp                          1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
kube-system    kube-scheduler-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    nginx-t-5dc4b58b8-fjfsx                   0/1     Pending            0          16d   <none>         <none>            <none>           <none>
kube-system    nginx-t-5dc4b58b8-qtxcp                   1/1     Running            0          19d   10.244.2.3     se9-6t            <none>           <none>
kube-system    nginx-t-5dc4b58b8-zfw4c                   1/1     Terminating        0          19d   10.244.1.3     sophon            <none>           <none>
root@ab-P10S-WS:/home/ab#
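A common follow-up is spotting the pods that are not healthy. The sketch below filters an abridged heredoc copy of the `kubectl get pods -A` output above (pipe the live command instead when on a cluster) and prints every pod whose STATUS is not Running:

```shell
#!/bin/sh
# Abridged sample of `kubectl get pods -A` from the session above.
# Columns: NAMESPACE(1) NAME(2) READY(3) STATUS(4) RESTARTS(5) AGE(6)
pods=$(cat <<'EOF'
NAMESPACE      NAME                       READY   STATUS             RESTARTS   AGE
kube-flannel   kube-flannel-ds-m795j      1/1     Running            0          19d
kube-system    bitmain-tpu-plugin-mgxvm   0/1     ErrImagePull       0          19d
kube-system    bitmain-tpu-plugin-pjnvz   0/1     ImagePullBackOff   0          19d
kube-system    nginx-t-5dc4b58b8-fjfsx    0/1     Pending            0          16d
EOF
)
# Print "NAMESPACE NAME STATUS" for every pod not in the Running state.
printf '%s\n' "$pods" | awk 'NR > 1 && $4 != "Running" { print $1, $2, $4 }'
```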
  3. Show pod information for a specific namespace

kubectl get pods -n kube-flannel

kubectl get pods -n kube-flannel -o wide
root@ab-P10S-WS:/home/ab# kubectl get pods -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-m795j   1/1     Running   0          19d
kube-flannel-ds-prbnz   1/1     Running   0          19d
kube-flannel-ds-zvtfk   1/1     Running   0          19d
root@bitmain-P10S-WS:/home/bitmain# kubectl get pods -n kube-flannel -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
kube-flannel-ds-m795j   1/1     Running   0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-flannel-ds-prbnz   1/1     Running   0          19d   172.28.9.130   sophon            <none>           <none>
kube-flannel-ds-zvtfk   1/1     Running   0          19d   172.28.9.40    se9-6t            <none>           <none>
  4. Show detailed node information
    kubectl describe nodes se9-6t
root@bitmain-P10S-WS:/home/bitmain# kubectl describe nodes se9-6t
Name:               se9-6t
Roles:              <none>
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=se9-6t
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"16:7b:8f:9e:63:06"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 172.28.9.40
                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 31 Jan 2024 09:42:42 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  se9-6t
  AcquireTime:     <unset>
  RenewTime:       Mon, 19 Feb 2024 17:24:41 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Wed, 31 Jan 2024 09:44:27 +0800   Wed, 31 Jan 2024 09:44:27 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Mon, 19 Feb 2024 17:21:17 +0800   Wed, 31 Jan 2024 09:42:42 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 19 Feb 2024 17:21:17 +0800   Wed, 31 Jan 2024 09:42:42 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 19 Feb 2024 17:21:17 +0800   Wed, 31 Jan 2024 09:42:42 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 19 Feb 2024 17:21:17 +0800   Wed, 31 Jan 2024 09:44:35 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.28.9.40
  Hostname:    se9-6t
Capacity:
  cpu:                     6
  ephemeral-storage:       9079836Ki
  hugepages-1Gi:           0
  hugepages-2Mi:           0
  hugepages-32Mi:          0
  hugepages-64Ki:          0
  memory:                  1212680Ki
  pods:                    110
  tpu.bitmain.com/bm1688:  0
Allocatable:
  cpu:                     6
  ephemeral-storage:       8367976844
  hugepages-1Gi:           0
  hugepages-2Mi:           0
  hugepages-32Mi:          0
  hugepages-64Ki:          0
  memory:                  1110280Ki
  pods:                    110
  tpu.bitmain.com/bm1688:  0
System Info:
  Machine ID:                 1eef0b5f42d64d108c592bddfbc61a20
  System UUID:                1eef0b5f42d64d108c592bddfbc61a20
  Boot ID:                    26141532-3000-45c2-a5cb-d0a4d96b41e5
  Kernel Version:             5.10.4-tag--00042-g04fcbe819955
  OS Image:                   Ubuntu 20.04 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://1.6.26
  Kubelet Version:            v1.25.4
  Kube-Proxy Version:         v1.25.4
PodCIDR:                      10.244.2.0/24
PodCIDRs:                     10.244.2.0/24
Non-terminated Pods:          (4 in total)
  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
  kube-flannel                kube-flannel-ds-zvtfk       100m (1%)     100m (1%)   50Mi (4%)        50Mi (4%)      19d
  kube-system                 bitmain-tpu-plugin-mgxvm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
  kube-system                 kube-proxy-jnjkp            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
  kube-system                 nginx-t-5dc4b58b8-qtxcp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                Requests   Limits
  --------                --------   ------
  cpu                     100m (1%)  100m (1%)
  memory                  50Mi (4%)  50Mi (4%)
  ephemeral-storage       0 (0%)     0 (0%)
  hugepages-1Gi           0 (0%)     0 (0%)
  hugepages-2Mi           0 (0%)     0 (0%)
  hugepages-32Mi          0 (0%)     0 (0%)
  hugepages-64Ki          0 (0%)     0 (0%)
  tpu.bitmain.com/bm1688  1          1
Events:                   <none>
root@ab-P10S-WS:/home/ab#
  5. Show detailed pod information
    kubectl describe pods kube-flannel-ds-zvtfk -n kube-flannel
root@ab-P10S-WS:/home/ab# kubectl describe pods kube-flannel-ds-zvtfk -n kube-flannel
Name:                 kube-flannel-ds-zvtfk
Namespace:            kube-flannel
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      flannel
Node:                 se9-6t/172.28.9.40
Start Time:           Wed, 31 Jan 2024 09:42:51 +0800
Labels:               app=flannel
                      controller-revision-hash=7f4d65bc74
                      pod-template-generation=1
                      tier=node
Annotations:          <none>
Status:               Running
IP:                   172.28.9.40
IPs:
  IP:           172.28.9.40
Controlled By:  DaemonSet/kube-flannel-ds
Init Containers:
  install-cni-plugin:
    Container ID:  containerd://f588bfdd8eb6255562c398e3da95337238f126425b1bea0a42f236e639237424
    Image:         docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
    Image ID:      docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /flannel
      /opt/cni/bin/flannel
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 31 Jan 2024 09:43:39 +0800
      Finished:     Wed, 31 Jan 2024 09:43:39 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /opt/cni/bin from cni-plugin (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qkttm (ro)
  install-cni:
    Container ID:  containerd://1cb07980a3f30aac58fca774323edb7e2aacf84554b213332834af4dbc4ef7f9
    Image:         docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
    Image ID:      docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832
    Port:          <none>
    Host Port:     <none>
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Wed, 31 Jan 2024 09:44:22 +0800
      Finished:     Wed, 31 Jan 2024 09:44:22 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qkttm (ro)
Containers:
  kube-flannel:
    Container ID:  containerd://7005b97fe8473c2469a7b4b97cf219673e0ef8b26855879b4e2f2e24722aef97
    Image:         docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
    Image ID:      docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832
    Port:          <none>
    Host Port:     <none>
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Running
      Started:      Wed, 31 Jan 2024 09:44:24 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:           kube-flannel-ds-zvtfk (v1:metadata.name)
      POD_NAMESPACE:      kube-flannel (v1:metadata.namespace)
      EVENT_QUEUE_DEPTH:  5000
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /run/xtables.lock from xtables-lock (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qkttm (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:
  cni-plugin:
    Type:          HostPath (bare host directory volume)
    Path:          /opt/cni/bin
    HostPathType:
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  xtables-lock:
    Type:          HostPath (bare host directory volume)
    Path:          /run/xtables.lock
    HostPathType:  FileOrCreate
  kube-api-access-qkttm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 :NoSchedule op=Exists
                             node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:                      <none>
root@ab-P10S-WS:/home/ab#
  6. Delete a node
    kubectl delete node sophon
    After this the node no longer appears in kubectl get node, but its pods still show up in kubectl get pods -A -o wide. To clean them up, run:
    kubectl delete pod nginx-t-5dc4b58b8-zfw4c --grace-period=0 --force -n kube-system
    This prints the warning:
    Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
    Error from server (NotFound): pods "nginx-t-5dc4b58b8-zfw4c" not found
    After that, kubectl get pods -A -o wide no longer shows any pods from the sophon node.
root@ab-P10S-WS:/home/ab# kubectl get node
NAME              STATUS     ROLES           AGE   VERSION
bitmain-p10s-ws   Ready      control-plane   19d   v1.25.4
se9-6t            Ready      <none>          19d   v1.25.4
sophon            NotReady   <none>          19d   v1.25.4
root@ab-P10S-WS:/home/ab# kubectl delete node sophon
node "sophon" deleted
root@ab-P10S-WS:/home/ab# kubectl get node
NAME              STATUS   ROLES           AGE   VERSION
bitmain-p10s-ws   Ready    control-plane   19d   v1.25.4
se9-6t            Ready    <none>          19d   v1.25.4
root@ab-P10S-WS:/home/ab# kubectl get pods -A -o wide
NAMESPACE      NAME                                      READY   STATUS             RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-m795j                     1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-flannel   kube-flannel-ds-prbnz                     1/1     Running            0          19d   172.28.9.130   sophon            <none>           <none>
kube-flannel   kube-flannel-ds-zvtfk                     1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
kube-system    bitmain-tpu-plugin-mgxvm                  0/1     ImagePullBackOff   0          19d   10.244.2.5     se9-6t            <none>           <none>
kube-system    bitmain-tpu-plugin-pjnvz                  0/1     ImagePullBackOff   0          19d   10.244.1.5     sophon            <none>           <none>
kube-system    coredns-565d847f94-8h7zw                  1/1     Running            0          19d   10.244.0.2     bitmain-p10s-ws   <none>           <none>
kube-system    coredns-565d847f94-s4rsm                  1/1     Running            0          19d   10.244.0.3     bitmain-p10s-ws   <none>           <none>
kube-system    etcd-bitmain-p10s-ws                      1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-apiserver-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-controller-manager-bitmain-p10s-ws   1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-proxy-5d496                          1/1     Running            0          19d   172.28.9.130   sophon            <none>           <none>
kube-system    kube-proxy-fzc5p                          1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-proxy-jnjkp                          1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
kube-system    kube-scheduler-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    nginx-t-5dc4b58b8-fjfsx                   0/1     Pending            0          16d   <none>         <none>            <none>           <none>
kube-system    nginx-t-5dc4b58b8-qtxcp                   1/1     Running            0          19d   10.244.2.3     se9-6t            <none>           <none>
kube-system    nginx-t-5dc4b58b8-zfw4c                   1/1     Terminating        0          19d   10.244.1.3     sophon            <none>           <none>
root@ab-P10S-WS:/home/ab# kubectl delete pod <pod-name> --grace-period=0 --force -n <namespace>
bash: syntax error near unexpected token `newline'
root@ab-P10S-WS:/home/ab# kubectl delete pod nginx-t-5dc4b58b8-zfw4c --grace-period=0 --force -n kube-system
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
Error from server (NotFound): pods "nginx-t-5dc4b58b8-zfw4c" not found
root@ab-P10S-WS:/home/ab# kubectl get pods -A -o wide
NAMESPACE      NAME                                      READY   STATUS             RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-m795j                     1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-flannel   kube-flannel-ds-zvtfk                     1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
kube-system    bitmain-tpu-plugin-mgxvm                  0/1     ImagePullBackOff   0          19d   10.244.2.5     se9-6t            <none>           <none>
kube-system    coredns-565d847f94-8h7zw                  1/1     Running            0          19d   10.244.0.2     bitmain-p10s-ws   <none>           <none>
kube-system    coredns-565d847f94-s4rsm                  1/1     Running            0          19d   10.244.0.3     bitmain-p10s-ws   <none>           <none>
kube-system    etcd-bitmain-p10s-ws                      1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-apiserver-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-controller-manager-bitmain-p10s-ws   1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-proxy-fzc5p                          1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-proxy-jnjkp                          1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
kube-system    kube-scheduler-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    nginx-t-5dc4b58b8-fjfsx                   0/1     Pending            0          16d   <none>         <none>            <none>           <none>
kube-system    nginx-t-5dc4b58b8-qtxcp                   1/1     Running            0          19d   10.244.2.3     se9-6t            <none>           <none>
root@ab-P10S-WS:/home/ab#
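Before force-deleting the leftovers one by one, it helps to list which pods are still bound to the deleted node. The sketch below runs against an abridged heredoc copy of the wide listing above (pipe the live `kubectl get pods -A -o wide` instead) and prints "namespace pod" pairs ready to feed into `kubectl delete pod <pod> -n <namespace> --grace-period=0 --force`:

```shell
#!/bin/sh
# Abridged sample of `kubectl get pods -A -o wide` from the session above.
# Columns: NAMESPACE(1) NAME(2) READY(3) STATUS(4) RESTARTS(5) AGE(6) IP(7) NODE(8)
pods=$(cat <<'EOF'
NAMESPACE      NAME                       READY   STATUS        RESTARTS   AGE   IP             NODE
kube-flannel   kube-flannel-ds-prbnz      1/1     Running       0          19d   172.28.9.130   sophon
kube-system    kube-proxy-5d496           1/1     Running       0          19d   172.28.9.130   sophon
kube-system    nginx-t-5dc4b58b8-qtxcp    1/1     Running       0          19d   10.244.2.3     se9-6t
kube-system    nginx-t-5dc4b58b8-zfw4c    1/1     Terminating   0          19d   10.244.1.3     sophon
EOF
)
node=sophon
# Print "namespace pod" for every pod still scheduled on the deleted node.
printf '%s\n' "$pods" | awk -v n="$node" 'NR > 1 && $8 == n { print $1, $2 }'
```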
  7. Enter a container in a Pod
    kubectl exec nginx-t-5dc4b58b8-qtxcp -n kube-system -it -- "bash"
root@ab-P10S-WS:/home/ab# kubectl get pods -A -o wide
NAMESPACE      NAME                                      READY   STATUS             RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-m795j                     1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-flannel   kube-flannel-ds-zvtfk                     1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
kube-system    bitmain-tpu-plugin-mgxvm                  0/1     ImagePullBackOff   0          19d   10.244.2.5     se9-6t            <none>           <none>
kube-system    coredns-565d847f94-8h7zw                  1/1     Running            0          19d   10.244.0.2     bitmain-p10s-ws   <none>           <none>
kube-system    coredns-565d847f94-s4rsm                  1/1     Running            0          19d   10.244.0.3     bitmain-p10s-ws   <none>           <none>
kube-system    etcd-bitmain-p10s-ws                      1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-apiserver-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-controller-manager-bitmain-p10s-ws   1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-proxy-fzc5p                          1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    kube-proxy-jnjkp                          1/1     Running            0          19d   172.28.9.40    se9-6t            <none>           <none>
kube-system    kube-scheduler-bitmain-p10s-ws            1/1     Running            0          19d   172.28.9.241   bitmain-p10s-ws   <none>           <none>
kube-system    nginx-t-5dc4b58b8-fjfsx                   0/1     Pending            0          16d   <none>         <none>            <none>           <none>
kube-system    nginx-t-5dc4b58b8-qtxcp                   1/1     Running            0          19d   10.244.2.3     se9-6t            <none>           <none>
root@ab-P10S-WS:/home/ab# kubectl exec  nginx-t-5dc4b58b8-qtxcp -n kube-system -it -- "bash"
root@nginx-t-5dc4b58b8-qtxcp:/# ls /dev
bm-tpu0    fd    ion     null  pts     shm        soph-dpu  soph-ive  soph-stitch  soph-tde0  soph-vpss    soph_vc_enc  stdin   termination-log  urandom
bmdev-ctl  full  mqueue  ptmx  random  soph-base  soph-dwa  soph-ldc  soph-sys     soph-tde1  soph_vc_dec  stderr       stdout  tty              zero
root@nginx-t-5dc4b58b8-qtxcp:/#

Checking K8s services

When you use systemctl to inspect Kubernetes services, you are usually checking the status of the cluster's control-plane and node components. These components may run as system services, especially in clusters deployed with kubeadm or similar tools.

Below are some common systemctl commands for viewing and managing Kubernetes services:

Check the kubelet service status:

systemctl status kubelet

kubelet is the primary "node agent" that runs on every node; it maintains and manages the containers on that node.

List all Kubernetes-related services:

systemctl list-units --type=service | grep kube

This lists every service whose name contains "kube".

Start/stop/restart the kubelet service:

systemctl start kubelet

systemctl stop kubelet

systemctl restart kubelet

These commands start, stop, and restart the kubelet service, respectively.

View a service's logs:

journalctl -u kubelet

The journalctl command shows the kubelet service's logs. Replace kubelet with another service name, such as docker, containerd, or etcd, to view that service's logs.

Enable/disable starting a service at boot:

systemctl enable kubelet

systemctl disable kubelet

These commands enable or disable automatic startup of the kubelet service at boot.

Note that the Kubernetes control-plane components (the API server, controller manager, scheduler, and so on) may not run as system services. In some installations, particularly those set up with kubeadm, they run as containers instead, so they will not appear as separate services in systemctl output. If your control-plane components run as Pods, manage and inspect them with kubectl rather than systemctl.
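The `systemctl list-units --type=service | grep kube` filter can be sketched offline with a heredoc sample. The unit names below are typical examples only; the exact units on your host depend on how the cluster was installed:

```shell
#!/bin/sh
# Sample `systemctl list-units --type=service` output (illustrative names;
# pipe the real command instead of the heredoc on an actual node).
units=$(cat <<'EOF'
containerd.service  loaded active running  containerd container runtime
kubelet.service     loaded active running  kubelet: The Kubernetes Node Agent
ssh.service         loaded active running  OpenBSD Secure Shell server
EOF
)
# Same filter as above: keep only the Kubernetes-related units.
printf '%s\n' "$units" | grep kube
# prints only the kubelet.service line
```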
