Cluster-Level Monitoring for K8s

Introduction to kube-state-metrics

kube-state-metrics is a Kubernetes component that queries the Kubernetes API server, collects state information about the various resources in the cluster (nodes, Pods, Services, and so on), and converts that information into metrics Prometheus can consume.

Main capabilities of kube-state-metrics:

  • Node state information, such as node CPU and memory usage, node conditions, and node labels
  • Pod state information, such as Pod phase, container state, container images, and Pod labels and annotations
  • State of controllers such as Deployment, DaemonSet, StatefulSet, and ReplicaSet, including replica counts, replica status, and creation time
  • Service state information, such as service type, service IP, and ports
  • Storage volume state information, such as volume type and capacity
  • Kubernetes API server state information, such as API server status, request counts, and response times

With kube-state-metrics it is straightforward to monitor a Kubernetes cluster, spot problems, and raise warnings before they turn into incidents.
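
As a quick illustration of how these metrics are consumed, here is a minimal Prometheus alerting-rule sketch built on two commonly used kube-state-metrics series, kube_pod_status_phase and the Deployment replica counters; the rule-group name, thresholds, and for: durations are placeholder choices for this example, not part of the setup described below.

yaml
groups:
  - name: kube-state-metrics-examples            # placeholder rule-group name
    rules:
      # A Pod has been stuck in a non-Running phase for 10 minutes.
      - alert: PodNotRunning
        expr: sum by (namespace, pod) (kube_pod_status_phase{phase=~"Failed|Pending|Unknown"}) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is not Running"
      # A Deployment has fewer available replicas than it asks for.
      - alert: DeploymentReplicasMismatch
        expr: kube_deployment_status_replicas_available < kube_deployment_spec_replicas
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Deployment {{ $labels.namespace }}/{{ $labels.deployment }} is missing replicas"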

KubeStateMetrics

Deploying kube-state-metrics

The manifest below bundles eight kinds of objects in a single YAML file: ServiceAccount, ClusterRole, Role, ClusterRoleBinding, RoleBinding, Deployment, ConfigMap, and Service.

vim kubeStateMetrice.yaml

yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: monitor
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups: [""]
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  - daemonsets
  - deployments
  - replicasets
  verbs: ["list", "watch"]
- apiGroups: ["batch"]
  resources:
  - cronjobs
  - jobs
  verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
  resources:
  - horizontalpodautoscalers
  verbs: ["list", "watch"]
- apiGroups: ["networking.k8s.io", "extensions"]
  resources:
  - ingresses
  verbs: ["list", "watch"]
- apiGroups: ["storage.k8s.io"]
  resources:
  - storageclasses
  verbs: ["list", "watch"]
- apiGroups: ["certificates.k8s.io"]
  resources:
  - certificatesigningrequests
  verbs: ["list", "watch"]
- apiGroups: ["policy"]
  resources:
  - poddisruptionbudgets
  verbs: ["list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kube-state-metrics-resizer
  namespace: monitor
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs: ["get"]
- apiGroups: ["extensions","apps"]
  resources:
  - deployments
  resourceNames: ["kube-state-metrics"]
  verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: monitor
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kube-state-metrics
  namespace: monitor
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-state-metrics-resizer
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: monitor

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: monitor
  labels:
    k8s-app: kube-state-metrics
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v1.3.0
spec:
  selector:
    matchLabels:
      k8s-app: kube-state-metrics
      version: v1.3.0
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kube-state-metrics
        version: v1.3.0
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: k8s-gcr.m.daocloud.io/k8s.gcr.io/kube-state-metrics/kube-state-metrics:v2.4.2
        ports:
        - name: http-metrics            ## port that exposes the Kubernetes object metrics
          containerPort: 8080
        - name: telemetry               ## port that exposes kube-state-metrics' own telemetry metrics
          containerPort: 8081
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 5
      - name: addon-resizer         ## addon-resizer scales in-cluster monitoring components such as metrics-server and kube-state-metrics
        image: k8s.m.daocloud.io/addon-resizer:1.8.6
        resources:
          limits:
            cpu: 200m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 30Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
          - name: config-volume
            mountPath: /etc/config
        command:
          - /pod_nanny
          - --config-dir=/etc/config
          - --container=kube-state-metrics
          - --cpu=100m
          - --extra-cpu=1m
          - --memory=100Mi
          - --extra-memory=2Mi
          - --threshold=5
          - --deployment=kube-state-metrics
      volumes:
        - name: config-volume
          configMap:
            name: kube-state-metrics-config
---
# Config map for resource configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-state-metrics-config
  namespace: monitor
  labels:
    k8s-app: kube-state-metrics
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration

---
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: monitor
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "kube-state-metrics"
  annotations:
    prometheus.io/scrape: 'true'
spec:
  ports:
  - name: http-metrics
    port: 8080
    targetPort: http-metrics
    protocol: TCP
  - name: telemetry
    port: 8081
    targetPort: telemetry
    protocol: TCP
  selector:
    k8s-app: kube-state-metrics

Verification

bash
$ kubectl apply -f kubeStateMetrice.yaml
$ kubectl get all -nmonitor |grep kube-state-metrics
pod/kube-state-metrics-76cb99578d-4md9c   2/2     Running   0          3m37s
service/kube-state-metrics   ClusterIP   10.1.71.13   <none>        8080/TCP,8081/TCP   25m
deployment.apps/kube-state-metrics   1/1     1            1           25m
replicaset.apps/kube-state-metrics-56b4669995   0         0         0       25m
replicaset.apps/kube-state-metrics-745fddb869   0         0         0       4m45s
replicaset.apps/kube-state-metrics-76cb99578d   1         1         1       3m37s
$ curl -kL  $(kubectl get service -n monitor | grep kube-state-metrics |awk '{ print $3 }'):8080/metrics
# HELP kube_certificatesigningrequest_annotations Kubernetes annotations converted to Prometheus labels.
# TYPE kube_certificatesigningrequest_annotations gauge
# HELP kube_certificatesigningrequest_labels Kubernetes labels converted to Prometheus labels.
# TYPE kube_certificatesigningrequest_labels gauge
# HELP kube_certificatesigningrequest_created Unix creation timestamp
# TYPE kube_certificatesigningrequest_created gauge
# HELP kube_certificatesigningrequest_condition The number of each certificatesigningrequest condition
# TYPE kube_certificatesigningrequest_condition gauge
# HELP kube_certificatesigningrequest_cert_length Length of the issued cert
# TYPE kube_certificatesigningrequest_cert_length gauge
# HELP kube_configmap_annotations Kubernetes annotations converted to Prometheus labels.

Monitoring the Kubernetes Cluster Architecture

Add the following scrape jobs, one after another, to the scrape_configs section of prometheus-cm.yaml.

kube-apiserver

Note that when scraping over HTTPS, TLS settings are required: either point ca_file at the CA certificate or set insecure_skip_verify: true to skip certificate verification.

In addition, bearer_token_file must be specified, otherwise the scrape fails with server returned HTTP status 400 Bad Request.

yaml
    - job_name: kube-apiserver
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name]
        action: keep
        regex: default;kubernetes
      - source_labels: [__meta_kubernetes_endpoints_name]
        action: replace
        target_label: endpoint
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: service
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
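
Once this job is loaded, a quick way to check it is working is a small rules sketch like the one below. It relies only on the built-in up series and the standard apiserver_request_total counter, with job="kube-apiserver" matching the job_name above; the group, alert, and record names are illustrative.

yaml
groups:
  - name: kube-apiserver-examples                # placeholder rule-group name
    rules:
      # Fire if the scrape target created by the job above stops responding.
      - alert: KubeApiserverDown
        expr: up{job="kube-apiserver"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "kube-apiserver target {{ $labels.pod }} is down"
      # Request rate per HTTP verb, useful for dashboards.
      - record: job:apiserver_request_total:rate5m
        expr: sum by (verb) (rate(apiserver_request_total{job="kube-apiserver"}[5m]))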

controller-manager

bearer_token_file must be specified here as well, otherwise the scrape fails with server returned HTTP status 400 Bad Request.

Inspect the controller-manager Pod (the name may differ in your cluster):

bash
$ kubectl describe pod -n kube-system kube-controller-manager-k8s-master
Name:                 kube-controller-manager-k8s-master
Namespace:            kube-system
......
Labels:               component=kube-controller-manager
                      tier=control-plane
......
Containers:
  kube-controller-manager:
    ......
    Command:
      kube-controller-manager
      --allocate-node-cidrs=true
      --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
      --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
      --bind-address=127.0.0.1
      ......

As shown above, it is enough to keep Pod objects carrying the label component=kube-controller-manager. Note, however, that controller-manager by default only allows access via 127.0.0.1, so its configuration has to be changed first.

bash
$ cat /etc/kubernetes/manifests/kube-controller-manager.yaml 
......
  - command:
    - --bind-address=0.0.0.0 # change the bind address from 127.0.0.1 to 0.0.0.0
    #- --port=0 # comment out the --port=0 flag if it is present
......

Write the Prometheus scrape config. Note that the discovered address has no port and would default to 80, so the relabel rule below rewrites it to the secure port 10257; because that port serves HTTPS only, the job also needs scheme: https (with certificate verification skipped) in addition to the token.

yaml
    - job_name: kube-controller-manager
      kubernetes_sd_configs:
      - role: pod
      scheme: https
      tls_config:
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_component]
        regex: kube-controller-manager
        action: keep
      - source_labels: [__meta_kubernetes_pod_ip]
        regex: (.+)
        target_label: __address__
        replacement: ${1}:10257
      - source_labels: [__meta_kubernetes_endpoints_name]
        action: replace
        target_label: endpoint
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: service
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace

scheduler

bash
$ kubectl describe pod -n kube-system kube-scheduler-k8s-master
Name:                 kube-scheduler-k8s-master
Namespace:            kube-system
......
Labels:               component=kube-scheduler
                      tier=control-plane

As shown above, it is enough to keep Pod objects carrying the label component=kube-scheduler. Like controller-manager, the scheduler only listens locally by default, so its bind address has to be opened up and any --port=0 flag commented out.

bash
$ cat /etc/kubernetes/manifests/kube-scheduler.yaml
......
  - command:
    - --bind-address=0.0.0.0 # change the bind address from 127.0.0.1 to 0.0.0.0
    #- --port=0 # comment out the --port=0 flag if it is present
......
Write the Prometheus scrape config. The discovered address would default to port 80, so the relabel rule below rewrites it to the secure port 10259; the HTTPS scheme and token must be specified as well, otherwise the scrape fails with server returned HTTP status 400 Bad Request.
yaml
    - job_name: kube-scheduler
      kubernetes_sd_configs:
      - role: pod
      scheme: https
      tls_config:
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_component]
        regex: kube-scheduler
        action: keep
      - source_labels: [__meta_kubernetes_pod_ip]
        regex: (.+)
        target_label: __address__
        replacement: ${1}:10259
      - source_labels: [__meta_kubernetes_endpoints_name]
        action: replace
        target_label: endpoint
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: service
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace

kube-state-metrics

Write the Prometheus scrape config. Note that endpoints discovery picks up both ports, 8080 and 8081, so the address is pinned to 8080 by hand.

yaml
    - job_name: kube-state-metrics
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        regex: kube-state-metrics
        action: keep
      - source_labels: [__meta_kubernetes_pod_ip]
        regex: (.+)
        target_label: __address__
        replacement: ${1}:8080
      - source_labels: [__meta_kubernetes_endpoints_name]
        action: replace
        target_label: endpoint
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: service
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace

coredns

Write the Prometheus scrape config. Note that discovery picks up the DNS port 53 by default, so the metrics port 9153 has to be set by hand.

yaml
    - job_name: coredns
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels:
          - __meta_kubernetes_service_label_k8s_app
        regex: kube-dns
        action: keep
      - source_labels: [__meta_kubernetes_pod_ip]
        regex: (.+)
        target_label: __address__
        replacement: ${1}:9153
      - source_labels: [__meta_kubernetes_endpoints_name]
        action: replace
        target_label: endpoint
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: service
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace

etcd

bash
$ kubectl describe pod -n kube-system etcd-k8s-master
Name:                 etcd-master1
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 master1/192.10.192.158
Start Time:           Mon, 30 Jan 2023 15:06:35 +0800
Labels:               component=etcd
                      tier=control-plane
···
    Command:
      etcd
      --advertise-client-urls=https://192.10.192.158:2379
      --cert-file=/etc/kubernetes/pki/etcd/server.crt
      --client-cert-auth=true
      --data-dir=/var/lib/etcd
      --initial-advertise-peer-urls=https://192.10.192.158:2380
      --initial-cluster=master1=https://192.10.192.158:2380
      --key-file=/etc/kubernetes/pki/etcd/server.key
      --listen-client-urls=https://127.0.0.1:2379,https://192.10.192.158:2379
      --listen-metrics-urls=http://127.0.0.1:2381
      --listen-peer-urls=https://192.10.192.158:2380
      --name=master1
      --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
      --peer-client-cert-auth=true
      --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
      --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
      --snapshot-count=10000
      --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

As shown above, the startup flags include --listen-metrics-urls=http://127.0.0.1:2381, which serves the metrics endpoint on port 2381 over plain HTTP, so no certificate configuration is needed. This is much simpler than older versions, where metrics were only reachable over HTTPS and the matching certificates had to be configured. The listen address still needs to be changed from 127.0.0.1 to 0.0.0.0, though, so that Prometheus can reach it.
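
A sketch of that change in the etcd static-pod manifest, assuming the usual kubeadm layout where the file lives at /etc/kubernetes/manifests/etcd.yaml (only the metrics listen address is touched; the other flags stay as shown above):

yaml
# /etc/kubernetes/manifests/etcd.yaml (only the metrics listen address changes)
  - command:
    - etcd
    ......
    - --listen-metrics-urls=http://0.0.0.0:2381   # was http://127.0.0.1:2381
    ......
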
Then write the Prometheus scrape config. Note that discovery picks up the client port 2379 by default, so the metrics port 2381 has to be set by hand.

yaml
    - job_name: etcd
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
          - __meta_kubernetes_pod_label_component
        regex: etcd
        action: keep
      - source_labels: [__meta_kubernetes_pod_ip]
        regex: (.+)
        target_label: __address__
        replacement: ${1}:2381
      - source_labels: [__meta_kubernetes_endpoints_name]
        action: replace
        target_label: endpoint
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace

A brief explanation of some of the parameters used above:

  • kubernetes_sd_configs: enables Kubernetes dynamic service discovery
  • kubernetes_sd_configs.role: selects the discovery mode. endpoints, used here, queries the kube-apiserver for Endpoints objects and turns them into scrape targets, for example the Endpoints of the kube-state-metrics Service in the monitor namespace
  • kubernetes_sd_configs.namespaces: restricts endpoints discovery to the configured namespaces (see the sketch right after this list)
  • relabel_configs: rewrites the labels attached to the discovered targets
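
For example, a minimal sketch of scoping endpoints discovery to the monitor namespace with the namespaces field; the job name here is purely illustrative:

yaml
    - job_name: kube-state-metrics-scoped        # illustrative job name
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - monitor                              # only discover Endpoints objects in this namespace
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_name]
        regex: kube-state-metrics
        action: keep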

Hot-reload Prometheus so that the updated ConfigMap takes effect (the /-/reload endpoint requires Prometheus to be started with --web.enable-lifecycle; alternatively, just wait for the configuration to be reloaded automatically if that is set up).

bash
$ curl -XPOST http://prometheus.kubernets.cn:30186/-/reload

cAdvisor

Main capabilities of cAdvisor:

  • Monitors the resource usage and performance of containers. It runs as a daemon that collects, aggregates, processes, and exports information about running containers
  • cAdvisor supports Docker containers out of the box and does its best to support other container runtimes as well, aiming for broad compatibility
  • Kubernetes already integrates cAdvisor into the kubelet, so there is no need to deploy a separate cAdvisor component to expose the container metrics of each node

Adding the cAdvisor scrape config to Prometheus

Because the kubelet already embeds cAdvisor, no extra component needs to be deployed. Note that the metrics path is /metrics/cadvisor and that it is served over HTTPS; insecure_skip_verify: true can be set to skip certificate verification.

yaml
    - job_name: kubelet
      metrics_path: /metrics/cadvisor
      scheme: https
      tls_config:
        insecure_skip_verify: true
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - source_labels: [__meta_kubernetes_endpoints_name]
        action: replace
        target_label: endpoint
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace
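
With cAdvisor data flowing in, per-Pod resource usage can be precomputed with recording rules. The sketch below uses two standard cAdvisor series, container_cpu_usage_seconds_total and container_memory_working_set_bytes, with job="kubelet" matching the job_name above; the group and record names are placeholders.

yaml
groups:
  - name: cadvisor-examples                      # placeholder rule-group name
    rules:
      # Per-Pod CPU usage in cores, averaged over 5 minutes.
      - record: namespace_pod:container_cpu_usage:rate5m
        expr: sum by (namespace, pod) (rate(container_cpu_usage_seconds_total{job="kubelet", image!=""}[5m]))
      # Per-Pod working-set memory in bytes.
      - record: namespace_pod:container_memory_working_set_bytes:sum
        expr: sum by (namespace, pod) (container_memory_working_set_bytes{job="kubelet", image!=""})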

Hot-reload Prometheus so that the updated ConfigMap takes effect:

bash
curl -XPOST http://prometheus.kubernets.cn/-/reload

Deploying node-exporter

Node Exporter is the node-level metrics collector provided by the Prometheus project. It gathers host data such as CPU statistics, disk I/O, and available memory.

Since it has to run on every Kubernetes node, it is deployed as a DaemonSet.

vim node-exporter.yaml

yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitor
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
     name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
      - name: node-exporter
        image: prom/node-exporter:latest
        ports:
        - containerPort: 9100
        resources:
          requests:
            cpu: 0.15
        securityContext:
          privileged: true
        args:
        - --path.procfs
        - /host/proc
        - --path.sysfs
        - /host/sys
        - --collector.filesystem.ignored-mount-points
        - '^/(sys|proc|dev|host|etc)($|/)'
        volumeMounts:
        - name: dev
          mountPath: /host/dev
        - name: proc
          mountPath: /host/proc
        - name: sys
          mountPath: /host/sys
        - name: rootfs
          mountPath: /rootfs
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: dev
          hostPath:
            path: /dev
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /

Notes on node-exporter.yaml:

  • hostPID: whether the node-exporter process joins the host's PID namespace. When true, it can see the host's process information
  • hostIPC: whether the process joins the host's IPC namespace. When true, it can access the host's IPC resources
  • hostNetwork: whether the process joins the host's network namespace. When true, it can see the host's network stack and listens directly on the node's IP

Verification

bash
$ curl localhost:9100/metrics |grep cpu
% Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
# HELP go_memstats_gc_cpu_fraction The fraction of this program's available CPU time used by the GC since the program started.
# TYPE go_memstats_gc_cpu_fraction gauge
go_memstats_gc_cpu_fraction 2.7853625597758774e-06
# HELP node_cpu_guest_seconds_total Seconds the CPUs spent in guests (VMs) for each mode.
# TYPE node_cpu_guest_seconds_total counter
node_cpu_guest_seconds_total{cpu="0",mode="nice"} 0
node_cpu_guest_seconds_total{cpu="0",mode="user"} 0
node_cpu_guest_seconds_total{cpu="1",mode="nice"} 0
node_cpu_guest_seconds_total{cpu="1",mode="user"} 0
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 1.90640354e+06
node_cpu_seconds_total{cpu="0",mode="iowait"} 6915.35
node_cpu_seconds_total{cpu="0",mode="irq"} 0
node_cpu_seconds_total{cpu="0",mode="nice"} 1426.82
node_cpu_seconds_total{cpu="0",mode="softirq"} 14446.63

k8s-node monitoring

Add a new scrape job, k8s-nodes, to prometheus-config.yaml.

node-exporter also runs on every node, so the node discovery role is used. The discovered address defaults to the kubelet port 10250, which is rewritten to 9100.

yaml
    - job_name: k8s-nodes
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - source_labels: [__meta_kubernetes_endpoints_name]
        action: replace
        target_label: endpoint
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: pod
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: namespace

Hot-reload Prometheus so that the updated ConfigMap takes effect:

bash
$ curl -XPOST http://prometheus.kubernets.cn/-/reload
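
Once the k8s-nodes job is scraping, the node_cpu_seconds_total series shown in the earlier curl output can drive rules such as the sketch below, with job="k8s-nodes" matching the job_name above; the 90% threshold and the rule names are arbitrary placeholders.

yaml
groups:
  - name: k8s-nodes-examples                     # placeholder rule-group name
    rules:
      # Node CPU utilisation: 1 minus the idle fraction, per instance.
      - record: instance:node_cpu_utilisation:rate5m
        expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{job="k8s-nodes", mode="idle"}[5m]))
      # Alert when a node's CPU stays above 90% for 10 minutes.
      - alert: NodeHighCpuUsage
        expr: 1 - avg by (instance) (rate(node_cpu_seconds_total{job="k8s-nodes", mode="idle"}[5m])) > 0.9
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Node {{ $labels.instance }} CPU usage has been above 90% for 10 minutes"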

Summary

  • kube-state-metrics: turns the state of Kubernetes API objects into metrics that Prometheus can consume.
  • cAdvisor: monitors container resource usage and performance, collecting CPU, memory, disk, network, and filesystem metrics.
  • node-exporter: collects host-level metrics such as CPU load, memory usage, disk space, and network traffic.

Together these three components provide a fairly complete Kubernetes monitoring stack and give a clear view of how the cluster and its containerized applications are behaving.
