【Prometheus】Sending alerts to multiple receivers with Alertmanager, monitoring various services, and Pushgateway

Sending alerts to multiple receivers with Alertmanager

1. Configuring Alertmanager to send alerts to a QQ mailbox

1.1 Alerting workflow

  1. The Prometheus server monitors the HTTP endpoints exposed on the target hosts (call one of them endpoint A) and scrapes metrics from them periodically, at the interval defined by 'scrape_interval' in the Prometheus configuration.
  2. When endpoint A becomes unavailable, the server keeps trying to pull data from it until "scrape_timeout" elapses, then gives up and marks the target's state as "DOWN".
  3. Prometheus also evaluates the alerting rules periodically, at the interval defined by "evaluation_interval" (default 1 min);
    when an evaluation runs and finds endpoint A DOWN, i.e. up = 0 is true, the alert becomes active, enters the "PENDING" state, and the time it became active is recorded.
  4. At the next rule evaluation, if up = 0 is still true, Prometheus checks whether the alert has been active longer than the 'for' duration in the rule. If not, it waits for the next evaluation cycle; if so, the alert's state changes to "FIRING" and Prometheus calls the Alertmanager API with the alert data.
  5. After Alertmanager receives the alert, it groups it with related alerts and waits for the "group_wait" interval configured in Alertmanager before sending the first notification.
  6. New alerts may join the same alert group while it is waiting. If a notification for that group has already been sent successfully, the next notification goes out after the "group_interval" interval. With email notifications, for example, all alerts belonging to one group are aggregated into a single email.
  7. If the alerts in a group have not changed and a notification has already been sent successfully, the same alert email is re-sent after the 'repeat_interval' interval; if the previous notification failed, the situation falls back to rule 6 and the group is re-sent after group_interval. (These timing parameters are illustrated in the sketch after this list.)
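
A minimal sketch of where these timing knobs live (hypothetical values; the real ones used in this article appear in the ConfigMaps below): 'for' belongs to the Prometheus alerting rule, while group_wait, group_interval and repeat_interval belong to the Alertmanager route.

yaml
# Prometheus rule file
groups:
- name: example
  rules:
  - alert: InstanceDown
    expr: up == 0
    for: 1m              # PENDING -> FIRING only after the condition has held for 1m

# Alertmanager configuration
route:
  group_by: [alertname]
  group_wait: 10s        # wait before the first notification of a new group
  group_interval: 10s    # wait before notifying about new alerts added to an already-notified group
  repeat_interval: 10m   # re-send an unchanged, already-notified group after this long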

1.2 Alerting setup

Alerting: Prometheus sends the abnormal events it detects to Alertmanager.

Notification: Alertmanager delivers the alert messages to email, WeChat, DingTalk, and other receivers.

【1】Mailbox configuration

Mailbox settings:

When SMTP is enabled in the mailbox, an authorization code is generated. The code is displayed only once, so keep it somewhere safe.

Mailbox configuration:

yaml
[root@master 3]# cat alertmanager-cm.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitor-sa
data:
  alertmanager.yml: |-
    global:
      resolve_timeout: 1m
      smtp_smarthost: 'smtp.163.com:25' # SMTP server address and port of the 163 mailbox
      smtp_from: '15011572xxx@163.com'  # the mailbox that alert emails are sent from
      smtp_auth_username: '15011572xxx' # account name of the sending mailbox
      smtp_auth_password: 'BGWHYUOSOOHWEUJM'  # the mailbox authorization code; replace with your own
      smtp_require_tls: false
    route:  # alert routing (distribution) policy
      group_by: [alertname] # label used to group alerts
      group_wait: 10s       # how long to wait after the first alert of a group, so alerts of the same group go out together
      group_interval: 10s   # interval between successive notifications for a group
      repeat_interval: 10m  # interval for re-sending an unchanged alert; default is 1h, raise it to cut duplicate mail
      receiver: default-receiver  # who receives the alerts
    receivers:
    - name: 'default-receiver'
      email_configs:
      - to: '1011776350@qq.com'
        send_resolved: true
bash
[root@master 3]# k apply -f alertmanager-cm.yaml
configmap/alertmanager created
[root@master 3]# k get cm -n monitor-sa
NAME                DATA   AGE
alertmanager        1      25s
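
The Alertmanager configuration can also be checked locally with amtool before (re)applying the ConfigMap (a sketch, assuming amtool is installed and the alertmanager.yml block above has been saved to a local file):

bash
# amtool ships with Alertmanager; it parses the config and reports syntax errors
amtool check-config alertmanager.yml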

【2】Alerting rule configuration

yaml
[root@master 3]# cat prometheus-alertmanager-cfg.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus-config
  namespace: monitor-sa
data:
  prometheus.yml: |
    rule_files:
    - /etc/prometheus/rules.yml
    alerting:
      alertmanagers:
      - static_configs:
        - targets: ["localhost:9093"]
    global:
      scrape_interval: 15s
      scrape_timeout: 10s
      evaluation_interval: 1m
    scrape_configs:
    - job_name: 'kubernetes-node'
      kubernetes_sd_configs:
      - role: node
      relabel_configs:
      - source_labels: [__address__]
        regex: '(.*):10250'
        replacement: '${1}:9100'
        target_label: __address__
        action: replace
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
    - job_name: 'kubernetes-node-cadvisor'
      kubernetes_sd_configs:
      - role:  node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-apiserver'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - action: keep
        regex: true
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_scrape
      - action: replace
        regex: (.+)
        source_labels:
        - __meta_kubernetes_pod_annotation_prometheus_io_path
        target_label: __metrics_path__
      - action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        source_labels:
        - __address__
        - __meta_kubernetes_pod_annotation_prometheus_io_port
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - action: replace
        source_labels:
        - __meta_kubernetes_namespace
        target_label: kubernetes_namespace
      - action: replace
        source_labels:
        - __meta_kubernetes_pod_name
        target_label: kubernetes_pod_name
    - job_name: 'kubernetes-etcd'
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/ca.crt
        cert_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/server.crt
        key_file: /var/run/secrets/kubernetes.io/k8s-certs/etcd/server.key
      scrape_interval: 5s
      static_configs:
      - targets: ['10.32.1.147:2379']
  rules.yml: |
    groups:
    - name: example
      rules:
      - alert: apiserver的cpu使用率大于80%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-apiserver"}[1m]) * 100 > 80
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过80%"
      - alert:  apiserver的cpu使用率大于90%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-apiserver"}[1m]) * 100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过90%"
      - alert: etcd的cpu使用率大于80%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-etcd"}[1m]) * 100 > 80
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过80%"
      - alert:  etcd的cpu使用率大于90%
        expr: rate(process_cpu_seconds_total{job=~"kubernetes-etcd"}[1m]) * 100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}组件的cpu使用率超过90%"
      - alert: kube-state-metrics的cpu使用率大于80%
        expr: rate(process_cpu_seconds_total{k8s_app=~"kube-state-metrics"}[1m]) * 100 > 80
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "{{$labels.instance}}的{{$labels.k8s_app}}组件的cpu使用率超过80%"
          value: "{{ $value }}%"
          threshold: "80%"
      - alert: kube-state-metrics的cpu使用率大于90%
        expr: rate(process_cpu_seconds_total{k8s_app=~"kube-state-metrics"}[1m]) * 100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.k8s_app}}组件的cpu使用率超过90%"
          value: "{{ $value }}%"
          threshold: "90%"
      - alert: coredns的cpu使用率大于80%
        expr: rate(process_cpu_seconds_total{k8s_app=~"kube-dns"}[1m]) * 100 > 80
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "{{$labels.instance}}的{{$labels.k8s_app}}组件的cpu使用率超过80%"
          value: "{{ $value }}%"
          threshold: "80%"
      - alert: coredns的cpu使用率大于90%
        expr: rate(process_cpu_seconds_total{k8s_app=~"kube-dns"}[1m]) * 100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.k8s_app}}组件的cpu使用率超过90%"
          value: "{{ $value }}%"
          threshold: "90%"
      - alert: kube-proxy打开句柄数>600
        expr: process_open_fds{job=~"kubernetes-kube-proxy"}  > 600
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>600"
          value: "{{ $value }}"
      - alert: kube-proxy打开句柄数>1000
        expr: process_open_fds{job=~"kubernetes-kube-proxy"}  > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>1000"
          value: "{{ $value }}"
      - alert: kubernetes-schedule打开句柄数>600
        expr: process_open_fds{job=~"kubernetes-schedule"}  > 600
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>600"
          value: "{{ $value }}"
      - alert: kubernetes-schedule打开句柄数>1000
        expr: process_open_fds{job=~"kubernetes-schedule"}  > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>1000"
          value: "{{ $value }}"
      - alert: kubernetes-controller-manager打开句柄数>600
        expr: process_open_fds{job=~"kubernetes-controller-manager"}  > 600
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>600"
          value: "{{ $value }}"
      - alert: kubernetes-controller-manager打开句柄数>1000
        expr: process_open_fds{job=~"kubernetes-controller-manager"}  > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>1000"
          value: "{{ $value }}"
      - alert: kubernetes-apiserver打开句柄数>600
        expr: process_open_fds{job=~"kubernetes-apiserver"}  > 600
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>600"
          value: "{{ $value }}"
      - alert: kubernetes-apiserver打开句柄数>1000
        expr: process_open_fds{job=~"kubernetes-apiserver"}  > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>1000"
          value: "{{ $value }}"
      - alert: kubernetes-etcd打开句柄数>600
        expr: process_open_fds{job=~"kubernetes-etcd"}  > 600
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>600"
          value: "{{ $value }}"
      - alert: kubernetes-etcd打开句柄数>1000
        expr: process_open_fds{job=~"kubernetes-etcd"}  > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "{{$labels.instance}}的{{$labels.job}}打开句柄数>1000"
          value: "{{ $value }}"
      - alert: coredns
        expr: process_open_fds{k8s_app=~"kube-dns"}  > 600
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "插件{{$labels.k8s_app}}({{$labels.instance}}): 打开句柄数超过600"
          value: "{{ $value }}"
      - alert: coredns
        expr: process_open_fds{k8s_app=~"kube-dns"}  > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          description: "插件{{$labels.k8s_app}}({{$labels.instance}}): 打开句柄数超过1000"
          value: "{{ $value }}"
      - alert: kube-proxy
        expr: process_virtual_memory_bytes{job=~"kubernetes-kube-proxy"}  > 2000000000
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2G"
          value: "{{ $value }}"
      - alert: scheduler
        expr: process_virtual_memory_bytes{job=~"kubernetes-schedule"}  > 2000000000
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2G"
          value: "{{ $value }}"
      - alert: kubernetes-controller-manager
        expr: process_virtual_memory_bytes{job=~"kubernetes-controller-manager"}  > 2000000000
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2G"
          value: "{{ $value }}"
      - alert: kubernetes-apiserver
        expr: process_virtual_memory_bytes{job=~"kubernetes-apiserver"}  > 2000000000
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2G"
          value: "{{ $value }}"
      - alert: kubernetes-etcd
        expr: process_virtual_memory_bytes{job=~"kubernetes-etcd"}  > 2000000000
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 使用虚拟内存超过2G"
          value: "{{ $value }}"
      - alert: kube-dns
        expr: process_virtual_memory_bytes{k8s_app=~"kube-dns"}  > 2000000000
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "插件{{$labels.k8s_app}}({{$labels.instance}}): 使用虚拟内存超过2G"
          value: "{{ $value }}"
      - alert: HttpRequestsAvg
        expr: sum(rate(rest_client_requests_total{job=~"kubernetes-kube-proxy|kubernetes-kubelet|kubernetes-schedule|kubernetes-control-manager|kubernetes-apiservers"}[1m]))  > 1000
        for: 2s
        labels:
          team: admin
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): TPS超过1000"
          value: "{{ $value }}"
          threshold: "1000"
      - alert: Pod_restarts
        expr: kube_pod_container_status_restarts_total{namespace=~"kube-system|default|monitor-sa"} > 0
        for: 2s
        labels:
          severity: warning
        annotations:
          description: "在{{$labels.namespace}}名称空间下发现{{$labels.pod}}这个pod下的容器{{$labels.container}}被重启,这个监控指标是由{{$labels.instance}}采集的"
          value: "{{ $value }}"
          threshold: "0"
      - alert: Pod_waiting
        expr: kube_pod_container_status_waiting_reason{namespace=~"kube-system|default"} == 1
        for: 2s
        labels:
          team: admin
        annotations:
          description: "空间{{$labels.namespace}}({{$labels.instance}}): 发现{{$labels.pod}}下的{{$labels.container}}启动异常等待中"
          value: "{{ $value }}"
          threshold: "1"
      - alert: Pod_terminated
        expr: kube_pod_container_status_terminated_reason{namespace=~"kube-system|default|monitor-sa"} == 1
        for: 2s
        labels:
          team: admin
        annotations:
          description: "空间{{$labels.namespace}}({{$labels.instance}}): 发现{{$labels.pod}}下的{{$labels.container}}被删除"
          value: "{{ $value }}"
          threshold: "1"
      - alert: Etcd_leader
        expr: etcd_server_has_leader{job="kubernetes-etcd"} == 0
        for: 2s
        labels:
          team: admin
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 当前没有leader"
          value: "{{ $value }}"
          threshold: "0"
      - alert: Etcd_leader_changes
        expr: rate(etcd_server_leader_changes_seen_total{job="kubernetes-etcd"}[1m]) > 0
        for: 2s
        labels:
          team: admin
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 当前leader已发生改变"
          value: "{{ $value }}"
          threshold: "0"
      - alert: Etcd_failed
        expr: rate(etcd_server_proposals_failed_total{job="kubernetes-etcd"}[1m]) > 0
        for: 2s
        labels:
          team: admin
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}): 服务失败"
          value: "{{ $value }}"
          threshold: "0"
      - alert: Etcd_db_total_size
        expr: etcd_debugging_mvcc_db_total_size_in_bytes{job="kubernetes-etcd"} > 10000000000
        for: 2s
        labels:
          team: admin
        annotations:
          description: "组件{{$labels.job}}({{$labels.instance}}):db空间超过10G"
          value: "{{ $value }}"
          threshold: "10G"
      - alert: Endpoint_ready
        expr: kube_endpoint_address_not_ready{namespace=~"kube-system|default"} == 1
        for: 2s
        labels:
          team: admin
        annotations:
          description: "空间{{$labels.namespace}}({{$labels.instance}}): 发现{{$labels.endpoint}}不可用"
          value: "{{ $value }}"
          threshold: "1"
    - name: 物理节点状态-监控告警
      rules:
      - alert: 物理节点cpu使用率
        expr: 100-avg(irate(node_cpu_seconds_total{mode="idle"}[5m])) by(instance)*100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }}cpu使用率过高"
          description: "{{ $labels.instance }}的cpu使用率超过90%,当前使用率[{{ $value }}],需要排查处理"
      - alert: 物理节点内存使用率
        expr: (node_memory_MemTotal_bytes - (node_memory_MemFree_bytes + node_memory_Buffers_bytes + node_memory_Cached_bytes)) / node_memory_MemTotal_bytes * 100 > 90
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }}内存使用率过高"
          description: "{{ $labels.instance }}的内存使用率超过90%,当前使用率[{{ $value }}],需要排查处理"
      - alert: InstanceDown
        expr: up == 0
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }}: 服务器宕机"
          description: "{{ $labels.instance }}: 服务器延时超过2分钟"
      - alert: 物理节点磁盘的IO性能
        expr: 100-(avg(irate(node_disk_io_time_seconds_total[1m])) by(instance)* 100) < 60
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{$labels.mountpoint}} 流入磁盘IO使用率过高!"
          description: "{{$labels.mountpoint }} 流入磁盘IO大于60%(目前使用:{{$value}})"
      - alert: 入网流量带宽
        expr: ((sum(rate (node_network_receive_bytes_total{device!~'tap.*|veth.*|br.*|docker.*|virbr*|lo*'}[5m])) by (instance)) / 100) > 102400
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{$labels.mountpoint}} 流入网络带宽过高!"
          description: "{{$labels.mountpoint }}流入网络带宽持续5分钟高于100M. RX带宽使用率{{$value}}"
      - alert: 出网流量带宽
        expr: ((sum(rate (node_network_transmit_bytes_total{device!~'tap.*|veth.*|br.*|docker.*|virbr*|lo*'}[5m])) by (instance)) / 100) > 102400
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{$labels.mountpoint}} 流出网络带宽过高!"
          description: "{{$labels.mountpoint }}流出网络带宽持续5分钟高于100M. RX带宽使用率{{$value}}"
      - alert: TCP会话
        expr: node_netstat_Tcp_CurrEstab > 1000
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{$labels.mountpoint}} TCP_ESTABLISHED过高!"
          description: "{{$labels.mountpoint }} TCP_ESTABLISHED大于1000%(目前使用:{{$value}}%)"
      - alert: 磁盘容量
        expr: 100-(node_filesystem_free_bytes{fstype=~"ext4|xfs"}/node_filesystem_size_bytes {fstype=~"ext4|xfs"}*100) > 80
        for: 2s
        labels:
          severity: critical
        annotations:
          summary: "{{$labels.mountpoint}} 磁盘分区使用率过高!"
          description: "{{$labels.mountpoint }} 磁盘分区使用大于80%(目前使用:{{$value}}%)"
bash
[root@master 3]# k apply -f  prometheus-alertmanager-cfg.yaml
[root@master 3]# k get cm -n monitor-sa
NAME                DATA   AGE
alertmanager        1      7h52m
kube-root-ca.crt    1      7d12h
prometheus-config   2      7d7h  # now present
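
The embedded rule file can likewise be validated locally with promtool before applying (a sketch, assuming the rules.yml block above has been saved to a local file):

bash
# promtool ships with Prometheus; it checks alerting/recording rule syntax
promtool check rules rules.yml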

【3】Deploying Prometheus

Create an etcd-certs Secret first; the Prometheus deployment below mounts it in order to scrape etcd.

bash
[root@master 3]# kubectl -n monitor-sa create secret generic etcd-certs --from-file=/etc/kubernetes/pki/etcd/server.key  --from-file=/etc/kubernetes/pki/etcd/server.crt --from-file=/etc/kubernetes/pki/etcd/ca.crt
secret/etcd-certs created
yaml
[root@node01 package]# ctr -n=k8s.io images import  alertmanager.tar.gz
[root@master 3]# cat prometheus-alertmanager-deploy.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: monitor-sa
  labels:
    app: prometheus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
      component: server
    #matchExpressions:
    #- {key: app, operator: In, values: [prometheus]}
    #- {key: component, operator: In, values: [server]}
  template:
    metadata:
      labels:
        app: prometheus
        component: server
      annotations:
        prometheus.io/scrape: 'false'
    spec:
      nodeName: node01  # replace with the name of your own node
      serviceAccountName: monitor
      containers:
      - name: prometheus
        image: prom/prometheus:v2.2.1
        imagePullPolicy: IfNotPresent
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        - "--web.enable-lifecycle"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/prometheus
          name: prometheus-config
        - mountPath: /prometheus/
          name: prometheus-storage-volume
        - name: k8s-certs
          mountPath: /var/run/secrets/kubernetes.io/k8s-certs/etcd/
      - name: alertmanager
        image: prom/alertmanager:v0.14.0
        imagePullPolicy: IfNotPresent
        args:
        - "--config.file=/etc/alertmanager/alertmanager.yml"
        - "--log.level=debug"
        ports:
        - containerPort: 9093
          protocol: TCP
          name: alertmanager
        volumeMounts:
        - name: alertmanager-config
          mountPath: /etc/alertmanager
        - name: alertmanager-storage
          mountPath: /alertmanager
        - name: localtime
          mountPath: /etc/localtime
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-config
        - name: prometheus-storage-volume
          hostPath:
           path: /data
           type: Directory
        - name: k8s-certs
          secret:
           secretName: etcd-certs
        - name: alertmanager-config
          configMap:
            name: alertmanager
        - name: alertmanager-storage
          hostPath:
           path: /data/alertmanager
           type: DirectoryOrCreate
        - name: localtime
          hostPath:
           path: /usr/share/zoneinfo/Asia/Shanghai

Deploy Prometheus:

bash
[root@master 3]# kubectl apply -f prometheus-alertmanager-deploy.yaml
deployment.apps/prometheus-server configured
[root@master 3]# kubectl get pods -n monitor-sa -owide| grep prometheus
prometheus-server-cf55fc89b-mfk9z   2/2     Running   0          8h      10.244.196.151   node01   <none>           <none>

# after changing the config file, trigger a hot reload
curl -X POST 10.244.196.151:9090/-/reload

【4】Deploying the Service

Deploy a Service for Alertmanager so that it can be reached from a browser.

bash
[root@master 3]# cat alertmanager-svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: prometheus
    kubernetes.io/cluster-service: 'true'
  name: alertmanager
  namespace: monitor-sa
spec:
  ports:
  - name: alertmanager
    nodePort: 30066
    port: 9093
    protocol: TCP
    targetPort: 9093
  selector:
    app: prometheus
  sessionAffinity: None
  type: NodePort
[root@master 3]# k apply -f alertmanager-svc.yaml
service/alertmanager created
[root@master 3]# kubectl get svc -n monitor-sa
NAME           TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
alertmanager   NodePort   10.99.35.142     <none>        9093:30066/TCP   18s
prometheus     NodePort   10.104.116.119   <none>        9090:31000/TCP   7d9h
# Note: the prometheus Service is exposed on host port 31000,
# and the alertmanager Service on host port 30066

Open in a browser: http://10.32.1.147:30066/#/alerts

Open the Prometheus web UI.

Click Status -> Targets to see the scrape targets.

Click Alerts to see the alert list.

At this point, check your mailbox; alert emails should be arriving.
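
Firing alerts can also be inspected from the command line (a sketch, reusing the pod IP shown above and assuming the deployed versions expose these API endpoints):

bash
# alerts currently held by Alertmanager
curl -s http://10.244.196.151:9093/api/v1/alerts
# alerts currently PENDING or FIRING in Prometheus
curl -s http://10.244.196.151:9090/api/v1/alerts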

2. Configuring Alertmanager to send alerts to DingTalk

3. Configuring Alertmanager to send alerts to WeChat Work

3.1 Registering WeChat Work

Log in at:

https://work.weixin.qq.com/

Go to App Management and create an application.

The application name is wechat.

After it is created, the page shows the following:


3.2 Modifying alertmanager-cm.yaml

yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitor-sa
data:
  alertmanager.yml: |-
    global:
      resolve_timeout: 1m
      smtp_smarthost: 'smtp.163.com:25'
      smtp_from: '18874165690@163.com'
      smtp_auth_username: '18874165690'
      smtp_auth_password: 'WZCYMKKQMZRYFICZ'
      smtp_require_tls: false
    route:
      group_by: [alertname]
      group_wait: 10s
      group_interval: 10s
      repeat_interval: 10s
      receiver: prometheus
    receivers:
    - name: 'prometheus'
      wechat_configs:
      - corp_id: wwb510050c4fd9ef9c
        to_user: '@all'
        agent_id: 1000003
        api_secret: p8qOyWIIHPMqzYn-_MsCa8MHYilUn7b5TUjb22xrokU

Parameter notes:

  • api_secret: from WeChat Work ("Enterprise Apps" --> your custom app [Prometheus] --> "Secret")
  • corp_id: from the company info page ("My Company" ---> "CorpID", at the bottom)
  • agent_id: from WeChat Work ("Enterprise Apps" --> your custom app [Prometheus] --> "AgentId")
  • wechat is the name of the application created above; the receiver name referenced under route ('prometheus' here) only needs to match an entry under receivers, it does not have to equal the application name
  • to_user: '@all' sends the alert to everyone in the company
bash
[root@master 3]# k delete -f alertmanager-cm.yaml
[root@master 3]# k apply -f alertmanager-cm.yaml
[root@master 3]# k delete -f prometheus-alertmanager-deploy.yaml
[root@master 3]# k apply -f prometheus-alertmanager-deploy.yaml
[root@master 3]# k get pods -n monitor-sa
NAME                                READY   STATUS    RESTARTS   AGE
node-exporter-tjlfj                 1/1     Running   0          8d
node-exporter-v8fc5                 1/1     Running   0          8d
node-exporter-zxsch                 1/1     Running   0          8d
prometheus-server-cf55fc89b-f2xnd   2/2     Running   0          17s

Afterwards, check whether the alerts arrive in WeChat Work.

4. Prometheus PromQL syntax

4.1 Data types

A PromQL expression evaluates to one of the following types:

  • Instant vector: a set of time series, each carrying a single sample
  • Range vector: a set of time series, each carrying multiple samples over a time range
  • Scalar: a single floating-point number
  • String: a string value, currently unused

【1】Instant vector selectors

An instant vector selector picks the sample value of a set of time series at a single point in time.

In the simplest case you specify only a metric name, which selects the current sample of every time series belonging to that metric.

For example: apiserver_request_total

Time series can be filtered further by appending label key-value matchers enclosed in curly braces.

For example, the following expression selects the time series whose job is kubernetes-apiserver and whose resource is pod:
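
A representative selector for that filter, inferred from the offset examples later in this section, would be:

apiserver_request_total{job="kubernetes-apiserver",resource="pods"}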

Label values can be matched exactly or with regular expressions. The matching operators are:
= : exactly equal
!= : not equal
=~ : regex match
!~ : regex mismatch

The following expression selects the time series whose container is kube-scheduler, kube-proxy, or kube-apiserver:

container_processes{container=~"kube-scheduler|kube-proxy|kube-apiserver"}

【2】Range vector selectors

Similar to instant vector selectors, except that they select samples over a past time range.

A range vector selector is built by appending a duration enclosed in [] to an instant vector selector.

For example, the following expression selects the samples of the last 1 minute of every time series whose metric is apiserver_request_total and whose resource is pod:
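
A representative form of that range selector, inferred from the instant vector example above, would be:

apiserver_request_total{resource="pods"}[1m]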

Range vectors cannot be shown in the Graph tab; switch to the Console tab to see the collected samples.

Note: the duration can use any of the following units:
s:seconds
m:minutes
h:hours
d:days
w:weeks
y:years

【3】Offset modifier

The selectors above use the current time as the reference time. The offset modifier shifts that reference time back into the past.

It is written right after a selector, using the keyword offset to specify the amount. For example, the following expression selects the samples of all apiserver_request_total time series as of 5 minutes ago:

apiserver_request_total{job="kubernetes-apiserver",resource="pods"} offset 5m

The next expression selects the samples of apiserver_request_total over the 5 minutes preceding the same point in time one week ago:

apiserver_request_total{job="kubernetes-apiserver",resource="pods"} [5m] offset 1w

【4】Aggregation operators

PromQL aggregation operators condense the elements of a vector into fewer elements. The available operators are:

sum: sum
min: minimum
max: maximum
avg: average
stddev: standard deviation
stdvar: variance
count: number of elements
count_values: number of elements equal to a given value
bottomk: the smallest k elements
topk: the largest k elements
quantile: quantile

  • Total memory used by all containers on the master node:

    sum(container_memory_usage_bytes{instance=~"master"})/1024/1024/1024

  • CPU utilization of all containers on the master node over the last 1 minute:

    sum (rate (container_cpu_usage_seconds_total{instance=~"master"}[1m])) / sum (machine_cpu_cores{instance=~"master"}) * 100

  • CPU usage of every container over the last 1 minute:

    sum (rate (container_cpu_usage_seconds_total{id!="/"}[1m])) by (id)

    # the id label is kept, so each container is listed separately

    The result looks like this:

【5】Functions

Prometheus has a number of built-in functions to help with calculations; some typical ones are listed below (see the example after this list).

abs(): absolute value
sqrt(): square root
exp(): exponential
ln(): natural logarithm
ceil(): round up to the nearest integer
floor(): round down to the nearest integer
round(): round to the nearest integer
delta(): difference between the first and last sample of each series in a range vector
sort(): sort
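
For instance, a hypothetical query combining several of these functions, showing how much each apiserver instance's open file-descriptor count changed over the last hour, rounded up and sorted (metric and job names reused from the alert rules above):

sort(ceil(delta(process_open_fds{job="kubernetes-apiserver"}[1h])))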

5. Prometheus monitoring extensions

5.1 Collecting Tomcat metrics with Prometheus

https://note.youdao.com/ynoteshare/index.html?id=0ddfc17eaf7bac94ad4497d7f5356213&type=note

5.2 Collecting Redis metrics with Prometheus

https://note.youdao.com/ynoteshare/index.html?id=b9f87092ce8859cd583967677ea332df&type=note

5.3 Monitoring MySQL with Prometheus

【1】Install MySQL and MariaDB

bash
yum install mysql -y
yum install mariadb -y
# upload mysqld_exporter-0.10.0.linux-amd64.tar.gz from the course material, then extract it
tar -xvf mysqld_exporter-0.10.0.linux-amd64.tar.gz
cp -ar mysqld_exporter-0.10.0.linux-amd64/mysqld_exporter /usr/local/bin/
chmod +x /usr/local/bin/mysqld_exporter

【2】Log in to MySQL, create an account for mysql_exporter, and grant privileges

sql
-- create the database user
mysql> CREATE USER 'mysql_exporter'@'localhost' IDENTIFIED BY 'Abcdef123!.';
Query OK, 0 rows affected (0.03 sec)
-- grant privileges to the mysql_exporter user
mysql> GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'mysql_exporter'@'localhost';
Query OK, 0 rows affected (0.00 sec)
-- run exit to leave the mysql shell

【3】Create a MySQL client config file so the exporter can connect without being prompted for a password

bash
[root@node01 ~]# cat my.cnf
[client]
user=mysql_exporter
password=Abcdef123!.

【4】Start the mysqld_exporter client

bash
[root@node01 ~]# nohup mysqld_exporter --config.my-cnf=./my.cnf &
# mysqld_exporter listens on port 9104
[root@node01 ~]# lsof -i:9104
COMMAND    PID USER   FD   TYPE    DEVICE SIZE/OFF NODE NAME
mysqld_ex 2216 root    3u  IPv6 723617678      0t0  TCP *:peerwire (LISTEN)
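
To confirm that the exporter can actually reach MySQL with the credentials in my.cnf, its metrics endpoint can be queried directly (a sketch; mysql_up is the exporter's standard liveness gauge):

bash
# should print "mysql_up 1" when the connection works
curl -s http://10.32.1.148:9104/metrics | grep '^mysql_up'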

【5】Modify prometheus-alertmanager-cfg.yaml

yaml
    # add a new scrape job under scrape_configs
    - job_name: 'mysql'
      static_configs:
      - targets: ['10.32.1.148:9104']
bash
[root@master 3]# vim prometheus-alertmanager-cfg.yaml
[root@master 3]# kubectl apply -f prometheus-alertmanager-cfg.yaml
configmap/prometheus-config configured
[root@master 3]# kubectl delete -f prometheus-alertmanager-deploy.yaml
deployment.apps "prometheus-server" deleted
[root@master 3]# kubectl apply -f prometheus-alertmanager-deploy.yaml
deployment.apps/prometheus-server created

Check the new target in Prometheus: http://10.32.1.147:31000/targets

【6】Import a MySQL dashboard into Grafana

mysql-overview_rev5.json



5.4 Monitoring Nginx with Prometheus

https://note.youdao.com/ynoteshare/index.html?id=bea7b4b8f9a78db1679e1ac2ab747da5&type=note

5.5 Monitoring MongoDB with Prometheus

https://note.youdao.com/ynoteshare/index.html?id=39b54acb1fbc0199f966115ce9523bb6&type=note

6. Pushgateway

6.1 Overview

Pushgateway is a Prometheus component. The Prometheus server normally obtains data from exporters by scraping them (the default pull model); Pushgateway instead accepts data pushed to it. Users can write custom monitoring scripts that push whatever metrics they need to Pushgateway, and the Prometheus server then scrapes those metrics from Pushgateway.

6.2 Pros and cons

  • Pros
    Prometheus pulls target data on a schedule by default, but it cannot reach targets that sit in another subnet or behind a firewall; such targets can push their data to Pushgateway instead, and Prometheus pulls from Pushgateway on schedule.
    When monitoring business data, metrics from different sources often need to be aggregated first; the aggregated data can be collected by Pushgateway and then pulled by Prometheus in one place.
  • Cons
    Prometheus's scrape/up state reflects only Pushgateway itself, not the individual nodes pushing to it;
    if Pushgateway goes down, every metric funneled through it is affected;
    when a monitored job goes away, Prometheus keeps scraping its stale metrics from Pushgateway, so unwanted data has to be deleted from Pushgateway manually.

6.3 Hands-on

【1】Deploy Pushgateway

Install Pushgateway on worker node node01 (10.32.1.148).

Upload the pushgateway.tar.gz image archive from the course material to the node.

bash
[root@node01 package]# ll|grep push
-rw-r--r--  1 root      root         21320704 Feb 28 09:32 pushgateway.tar.gz
[root@node01 package]# docker load -i pushgateway.tar.gz
4d688dd2e2c4: Loading layer [==================================================>]  17.23MB/17.23MB
5447bffb5beb: Loading layer [==================================================>]  2.048kB/2.048kB
Loaded image: prom/pushgateway:latest
You have mail in /var/spool/mail/root
[root@node01 package]# lsof -i:9091
[root@node01 package]# docker run -d --name pushgateway -p 9091:9091 prom/pushgateway
ff87ff2c6f088a322f63e086883ca475602f13f30f209ded861c5530144b7a93

Open http://10.32.1.148:9091/ in a browser; the Pushgateway UI appears.

Modify the prometheus-alertmanager-cfg.yaml config file:

yaml
    - job_name: 'pushgateway'
      scrape_interval: 5s
      honor_labels: true
      static_configs:
      - targets: ['10.32.1.148:9091']
bash
[root@master 3]# vim prometheus-alertmanager-cfg.yaml
[root@master 3]# k apply -f prometheus-alertmanager-cfg.yaml
configmap/prometheus-config configured
[root@master 3]# k delete -f prometheus-alertmanager-deploy.yaml
deployment.apps "prometheus-server" deleted
[root@master 3]# k apply -f prometheus-alertmanager-deploy.yaml
deployment.apps/prometheus-server created

The pushgateway job now shows up in the Prometheus targets list.

Pushgateway's own metrics can be browsed as well.

【2】Push a simple metric

Push data in the expected text format to Pushgateway:

bash
# add a single sample under {job="test_job"}:
[root@node01 package]# echo " metric 3.6" | curl --data-binary @- http://10.32.1.148:9091/metrics/job/test_job
You have mail in /var/spool/mail/root

# Note: --data-binary posts the payload exactly as given; curl sends it with an HTTP POST.

Refresh http://10.32.1.148:9091/ to see the new metric.

【3】Push multiple metrics

bash
[root@node01 package]# cat <<EOF | curl --data-binary @- http://10.32.1.148:9091/metrics/job/test_job/instance/test_instance
> # TYPE node_memory_usage gauge
> node_memory_usage 36
> # TYPE node_memory_total gauge
> node_memory_total 36000
> EOF
You have mail in /var/spool/mail/root



【4】Delete all data for one instance in a group
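
A sketch using the Pushgateway HTTP API against the test_job/test_instance group pushed above:

bash
# delete every metric pushed under {job="test_job", instance="test_instance"}
curl -X DELETE http://10.32.1.148:9091/metrics/job/test_job/instance/test_instance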

【5】Delete all data for a group:
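
Similarly, a sketch for dropping the whole group:

bash
# delete every metric pushed under {job="test_job"}, for all instances
curl -X DELETE http://10.32.1.148:9091/metrics/job/test_job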

【6】Report host data to Pushgateway

Configure reporting on the machine that runs the monitored service. To push the memory usage of the 10.32.1.147 machine to Pushgateway, run the following steps on 10.32.1.147.

bash
[root@master 3]# cat push.sh
node_memory_usages=$(free -m | grep Mem | awk '{print $3/$2*100}')
job_name="memory"
instance_name="10.32.1.149"
cat <<EOF | curl --data-binary @- http://10.32.1.148:9091/metrics/job/$job_name/instance/$instance_name
#TYPE node_memory_usages  gauge
node_memory_usages $node_memory_usages
EOF
[root@master 3]# sh push.sh

Open the Pushgateway web UI; the newly pushed group is visible.

Open the Prometheus UI; the node_memory_usages metric can now be queried.

Set up a cron job so the data is reported periodically:

bash
chmod +x push.sh
crontab -e
*/1 * * * * /usr/bin/bash  /root/CKA/model3/3/push.sh

Query the metric again; its value changes as the cron job pushes new samples.

Note:

As the configuration above shows, the data pushed to Pushgateway carries its own job and instance labels, while the pushgateway scrape job in the Prometheus config also attaches a job and instance describing the Pushgateway target itself. Adding honor_labels: true to that scrape job makes Prometheus keep the job and instance labels of the pushed data instead of overwriting them with the Pushgateway target's labels, which avoids the conflict between the two.
