Setting Up a Prometheus & Grafana Environment with Docker

Prometheus & Grafana

Container Installation

1. Create the project directory and configuration files

bash
mkdir prometheus-grafana-stack
cd prometheus-grafana-stack
mkdir prometheus alertmanager   # subdirectories for the config files mounted in docker-compose.yml below

2. Write the Prometheus configuration file

Configuration file: prometheus.yml

Configure the targets to be monitored here (node_exporter, mysql_exporter, ...).

yaml
# prometheus/prometheus.yml
global:
  scrape_interval: 15s  # scrape every 15 seconds by default

rule_files:
  - "alert.rules.yml"

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["prometheus:9090"]
# Node Exporter
  - job_name: "node_exporter"
    static_configs:
      - targets: ["node_exporter:9100"]
        labels:
          role: "web"
          env: "localhost"
      - targets: ["192.168.31.120:9100"]
        labels:
          role: "web"
          env: "192.168.31.120"
# MySQL Exporter (mysqld_exporter)
  - job_name: 'mysql'
    static_configs:
      # note: inside the Prometheus container, 127.0.0.1 is the container itself;
      # use the host's IP here if mysqld_exporter runs on the Docker host
      - targets: ['127.0.0.1:9104']
        labels:
          role: "db"
          env: "localhost"
# Windows Exporter
  - job_name: 'windows'
    static_configs:
      - targets: ['192.168.34.18:9182']


# Blackbox Exporter
  - job_name: 'blackbox'
    metrics_path: /probe
    params:
      module: [http_2xx]  # probe expecting an HTTP 2xx response by default
    static_configs:
      - targets:
          - http://192.168.34.18:8080/healthz  # the service you want to monitor
          - https://www.baidu.com
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox:9115

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']
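
The file can be linted before it is wired into the container, using promtool from the official image (a sketch, assuming the file lives in ./prometheus/ as in the compose file below; promtool will also try to read the alert.rules.yml referenced above, which is created in a later step):

bash
docker run --rm -v "$(pwd)/prometheus:/etc/prometheus" --entrypoint promtool \
  prom/prometheus check config /etc/prometheus/prometheus.yml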

3. Write the Docker Compose file

The docker-compose YAML file.

It defines the prometheus, grafana, node_exporter, blackbox, and alertmanager services; add further exporters yourself as needed.

yaml
version: "3.8"

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - ./prometheus/alert.rules.yml:/etc/prometheus/alert.rules.yml   # alerting rules
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--web.enable-lifecycle"
    ports:
      - "9090:9090"
    depends_on:
      - blackbox
      - alertmanager
    networks:
      - monitoring

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    depends_on:
      - prometheus
    networks:
      - monitoring

  node_exporter:
    image: prom/node-exporter:latest
    container_name: node_exporter
    ports:
      - "9100:9100"
    networks:
      - monitoring

  blackbox:
    image: prom/blackbox-exporter:latest
    container_name: blackbox-exporter
    ports:
      - "9115:9115"
    networks:
      - monitoring

  alertmanager:
    image: prom/alertmanager:latest
    container_name: alertmanager
    volumes:
      - ./alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml
    command:
      - "--config.file=/etc/alertmanager/alertmanager.yml"
    ports:
      - "9093:9093"
    networks:
      - monitoring

volumes:
  grafana-data:

networks:
  monitoring:
    driver: bridge

Prometheus alerting rules (./prometheus/alert.rules.yml)

yaml
groups:
  - name: blackbox-alerts
    rules:
      - alert: ApiServerDown
        expr: probe_success{instance="http://192.168.34.18:8080/healthz"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "API Server unreachable"
          description: "The api-server service has been unreachable for 1 minute"

Alertmanager configuration (./alertmanager/alertmanager.yml)

Example email notification (you can also switch it to DingTalk / WeCom):

yaml
global:
  smtp_smarthost: 'smtp.example.com:587'
  smtp_from: 'alert@example.com'
  smtp_auth_username: 'alert@example.com'
  smtp_auth_password: 'your_password'

route:
  receiver: 'email'

receivers:
  - name: 'email'
    email_configs:
      - to: 'ops@example.com'

4. Start the containers

bash
cd prometheus-grafana-stack
docker-compose up -d
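
Once the containers are up, a quick sanity check (assuming the default port mappings above) is to hit Prometheus' readiness endpoint and list its scrape targets:

bash
# readiness probe
curl -s http://localhost:9090/-/ready
# list scrape targets and their health
curl -s http://localhost:9090/api/v1/targets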

5. Configure Grafana

  • Access Grafana: open http://<your-host-IP>:3000 in a browser. The default username and password are admin/admin; after logging in you will be prompted to change the password.
  • Add the Prometheus data source (a scripted alternative is sketched after this list):
    • After logging in to Grafana, click the gear icon in the left menu -> "Data Sources".
    • Click "Add data source" and select "Prometheus".
    • Enter http://prometheus:9090 in the URL field.
    • Note: you cannot use localhost or the host's IP address here, because the Grafana container reaches the Prometheus container over Docker's internal network. prometheus is the service name defined in docker-compose.yml and is resolvable directly on the Docker network.
    • Click "Save & Test". If you see "Data source is working", the connection succeeded.
  • Import a dashboard:
    • You can now import existing dashboard templates; for example, the official Node Exporter dashboard has ID 1860.
    • Click the "Dashboards" icon in the left menu -> "Import".
    • Enter 1860 in the "Import via grafana.com" box and click "Load".
    • Select the Prometheus data source you just configured and click "Import".
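
If you prefer to script the data source setup, it can also be created through Grafana's HTTP API (a minimal sketch, assuming the default admin/admin credentials and the service names from docker-compose.yml):

bash
curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST http://localhost:3000/api/datasources \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy","isDefault":true}'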

Node Exporter

bash
cd /usr/local/src
wget https://github.com/prometheus/node_exporter/releases/download/v1.8.2/node_exporter-1.8.2.linux-amd64.tar.gz

tar -xvf node_exporter-1.8.2.linux-amd64.tar.gz
cd node_exporter-1.8.2.linux-amd64


sudo mv node_exporter /usr/local/bin/

/usr/local/bin/node_exporter --version
  • Create a systemd service

Create the service file /etc/systemd/system/node_exporter.service:
node_exporter.service configuration

ini
[Unit]
Description=Prometheus Node Exporter
After=network.target

[Service]
User=nobody
ExecStart=/usr/local/bin/node_exporter
Restart=on-failure

[Install]
WantedBy=multi-user.target
  • Start the service
bash
sudo systemctl daemon-reload
sudo systemctl enable --now node_exporter
  • Verify
bash
systemctl status node_exporter
# node_exporter listens on port 9100 by default; open in a browser:
http://<server_ip>:9100/metrics
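
You can also confirm from the Prometheus side that the node is being scraped (a sketch, assuming the node_exporter job name from prometheus.yml and Prometheus on localhost:9090):

bash
curl -sG http://localhost:9090/api/v1/query --data-urlencode 'query=up{job="node_exporter"}'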

Import the dashboard

Grafana → Dashboards → Import → select this file.

Dashboard contents

Network In (Mbps) → receive throughput per second

Network Out (Mbps) → transmit throughput per second

Packets In/Out (pps) → packets received/sent per second

Network Errors → receive/transmit errors

Network Drops → receive/transmit drops

Disk Total Space (GB) → total space

Disk Available Space (GB) → available space (what non-root users can use)

ROOT Disk Available Space (GB) → free space (including the root-reserved blocks)

Network dashboard: network-dashboard.json

Grafana → Dashboards → Import → select this file.

json
{
  "id": null,
  "title": "Network Monitoring",
  "tags": ["network", "node_exporter"],
  "timezone": "browser",
  "schemaVersion": 36,
  "version": 1,
  "panels": [
	{
	  "type": "timeseries",
	  "title": "CPU Usage (%)",
	  "targets": [
		{
		  "expr": "100 - (avg by(instance) (rate(node_cpu_seconds_total{mode=\"idle\"}[5m])) * 100)",
		  "legendFormat": "{{instance}}",
		  "refId": "A"
		}
	  ],
	  "fieldConfig": {
		"defaults": {
		  "unit": "percent",
		  "min": 0,
		  "max": 100
		}
	  },
	  "gridPos": { "h": 8, "w": 12, "x": 0, "y": 32 }
	},
	{
	  "type": "timeseries",
	  "title": "CPU Usage by Mode (%)",
	  "targets": [
		{
		  "expr": "avg by(instance, mode) (rate(node_cpu_seconds_total[5m])) * 100",
		  "legendFormat": "{{instance}} - {{mode}}",
		  "refId": "A"
		}
	  ],
	  "fieldConfig": {
		"defaults": {
		  "unit": "percent",
		  "min": 0,
		  "max": 100
		}
	  },
	  "gridPos": { "h": 8, "w": 12, "x": 12, "y": 32 }
	},
    {
      "type": "timeseries",
      "title": "Network In (Mbps)",
      "targets": [
        {
          "expr": "(rate(node_network_receive_bytes_total{device!=\"lo\"}[5m]) * 8) / 1024 / 1024",
          "legendFormat": "{{instance}} - {{device}}",
          "refId": "A"
        }
      ],
      "fieldConfig": {
        "defaults": {
          "unit": "Mbps"
        }
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 0 }
    },
    {
      "type": "timeseries",
      "title": "Network Out (Mbps)",
      "targets": [
        {
          "expr": "(rate(node_network_transmit_bytes_total{device!=\"lo\"}[5m]) * 8) / 1024 / 1024",
          "legendFormat": "{{instance}} - {{device}}",
          "refId": "A"
        }
      ],
      "fieldConfig": {
        "defaults": {
          "unit": "Mbps"
        }
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 0 }
    },
    {
      "type": "timeseries",
      "title": "Packets In (packets/s)",
      "targets": [
        {
          "expr": "rate(node_network_receive_packets_total{device!=\"lo\"}[5m])",
          "legendFormat": "{{instance}} - {{device}}",
          "refId": "A"
        }
      ],
      "fieldConfig": {
        "defaults": {
          "unit": "pps"
        }
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 8 }
    },
    {
      "type": "timeseries",
      "title": "Packets Out (packets/s)",
      "targets": [
        {
          "expr": "rate(node_network_transmit_packets_total{device!=\"lo\"}[5m])",
          "legendFormat": "{{instance}} - {{device}}",
          "refId": "A"
        }
      ],
      "fieldConfig": {
        "defaults": {
          "unit": "pps"
        }
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 8 }
    },
    {
      "type": "timeseries",
      "title": "Network Errors (per second)",
      "targets": [
        {
          "expr": "rate(node_network_receive_errs_total{device!=\"lo\"}[5m])",
          "legendFormat": "{{instance}} - {{device}} - RX errors",
          "refId": "A"
        },
        {
          "expr": "rate(node_network_transmit_errs_total{device!=\"lo\"}[5m])",
          "legendFormat": "{{instance}} - {{device}} - TX errors",
          "refId": "B"
        }
      ],
      "fieldConfig": {
        "defaults": {
          "unit": "eps"
        }
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 16 }
    },
    {
      "type": "timeseries",
      "title": "Network Drops (per second)",
      "targets": [
        {
          "expr": "rate(node_network_receive_drop_total{device!=\"lo\"}[5m])",
          "legendFormat": "{{instance}} - {{device}} - RX drops",
          "refId": "A"
        },
        {
          "expr": "rate(node_network_transmit_drop_total{device!=\"lo\"}[5m])",
          "legendFormat": "{{instance}} - {{device}} - TX drops",
          "refId": "B"
        }
      ],
      "fieldConfig": {
        "defaults": {
          "unit": "eps"
        }
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 16 }
    },
	    {
      "type": "timeseries",
      "title": "Disk Total Space (GB)",
      "targets": [
        {
          "expr": "node_filesystem_size_bytes{fstype!=\"tmpfs\",fstype!=\"overlay\"} / 1024 / 1024 / 1024",
          "legendFormat": "{{instance}} - {{device}}",
          "refId": "A"
        }
      ],
      "fieldConfig": {
        "defaults": {
          "unit": "decgbytes"
        }
      },
      "gridPos": { "h": 8, "w": 12, "x": 0, "y": 24 }
    },
    {
      "type": "timeseries",
      "title": "Disk Available Space (GB)",
      "targets": [
        {
          "expr": "node_filesystem_avail_bytes{fstype!=\"tmpfs\",fstype!=\"overlay\"} / 1024 / 1024 / 1024",
          "legendFormat": "{{instance}} - {{device}}",
          "refId": "A"
        }
      ],
      "fieldConfig": {
        "defaults": {
          "unit": "decgbytes"
        }
      },
      "gridPos": { "h": 8, "w": 12, "x": 12, "y": 24 }
    },
    {
	  "type": "timeseries",
	  "title": "Memory Usage (%)",
	  "targets": [
		{
		  "expr": "(1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)) * 100",
		  "legendFormat": "{{instance}}",
		  "refId": "A"
		}
	  ],
	  "fieldConfig": {
		"defaults": {
		  "unit": "percent",
		  "min": 0,
		  "max": 100
		}
	  },
	  "gridPos": { "h": 8, "w": 12, "x": 0, "y": 24 }
	},
	{
	  "type": "timeseries",
	  "title": "Memory Available (GB)",
	  "targets": [
		{
		  "expr": "node_memory_MemAvailable_bytes / 1024 / 1024 / 1024",
		  "legendFormat": "{{instance}}",
		  "refId": "A"
		}
	  ],
	  "fieldConfig": {
		"defaults": {
		  "unit": "GB"
		}
	  },
	  "gridPos": { "h": 8, "w": 12, "x": 12, "y": 24 }
	}
  ]
}
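
If you would rather not click through the UI, the JSON above can also be pushed through Grafana's dashboard API (a sketch, assuming it is saved as network-dashboard.json and the default admin/admin credentials):

bash
curl -s -u admin:admin -H 'Content-Type: application/json' \
  -X POST http://localhost:3000/api/dashboards/db \
  -d "{\"dashboard\": $(cat network-dashboard.json), \"overwrite\": true}"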

mysqld_exporter

  • Download the binary
bash
cd /usr/local/src
wget https://github.com/prometheus/mysqld_exporter/releases/download/v0.15.1/mysqld_exporter-0.15.1.linux-amd64.tar.gz
tar -xvf mysqld_exporter-0.15.1.linux-amd64.tar.gz
mv mysqld_exporter-0.15.1.linux-amd64 /usr/local/mysqld_exporter
  • Create a MySQL monitoring user
sql
CREATE USER 'exporter'@'%' IDENTIFIED BY 'your_password' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'%';
FLUSH PRIVILEGES;
  • Configure the exporter credentials file
bash
cat > /usr/local/mysqld_exporter/.mysqld_exporter.cnf <<EOF
[client]
user=exporter
password=your_password
host=127.0.0.1
port=3306
EOF
chmod 600 /usr/local/mysqld_exporter/.mysqld_exporter.cnf

Because this file is passed to the exporter via --config.my-cnf, no environment variable is required (older mysqld_exporter releases used a DATA_SOURCE_NAME variable instead, for example):

bash
export DATA_SOURCE_NAME="exporter:your_password@(127.0.0.1:3306)/"
  • Start mysqld_exporter
bash
/usr/local/mysqld_exporter/mysqld_exporter --config.my-cnf=/usr/local/mysqld_exporter/.mysqld_exporter.cnf

By default it listens on 0.0.0.0:9104, so Prometheus can scrape:

bash
http://your-server-ip:9104/metrics
  • Configure systemd auto-start

Create the file /etc/systemd/system/mysqld_exporter.service:

ini
[Unit]
Description=Prometheus MySQL Exporter
After=network.target

[Service]
# note: create a 'nobody' user in the 'nobody' group so mysqld_exporter can run as that user
User=nobody
Group=nobody
ExecStart=/usr/local/mysqld_exporter/mysqld_exporter \
  --config.my-cnf=/usr/local/mysqld_exporter/.mysqld_exporter.cnf
Restart=always

[Install]
WantedBy=multi-user.target

Then enable and start it:

bash
systemctl daemon-reload
systemctl enable mysqld_exporter
systemctl start mysqld_exporter
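
A quick check that the exporter can reach MySQL (a sketch, assuming it runs on the local host): mysql_up should report 1.

bash
curl -s http://localhost:9104/metrics | grep '^mysql_up'
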
  • Prometheus configuration
yaml
scrape_configs:
  - job_name: 'mysql'
    static_configs:
      - targets: ['your-server-ip:9104']
  • MySQL monitoring dashboard JSON

mysql dashboard

Connections (current/max), QPS / TPS, slow queries, InnoDB buffer pool usage, table/row lock waits, running threads, disk reads/writes (InnoDB I/O)

json
{
  "id": null,
  "title": "MySQL 8.0 Monitoring",
  "tags": ["mysql", "mysqld_exporter", "MySQL8"],
  "timezone": "browser",
  "schemaVersion": 36,
  "version": 1,
  "panels": [
    {
      "type": "timeseries",
      "title": "Connections",
      "targets": [
        {
          "expr": "mysql_global_status_threads_connected",
          "legendFormat": "Threads Connected",
          "refId": "A"
        },
        {
          "expr": "mysql_global_variables_max_connections",
          "legendFormat": "Max Connections",
          "refId": "B"
        }
      ],
      "fieldConfig": { "defaults": { "unit": "short" } },
      "gridPos": { "h": 6, "w": 12, "x": 0, "y": 0 }
    },
    {
      "type": "timeseries",
      "title": "QPS (Queries per Second)",
      "targets": [
        {
          "expr": "rate(mysql_global_status_questions[5m])",
          "legendFormat": "QPS",
          "refId": "A"
        }
      ],
      "fieldConfig": { "defaults": { "unit": "qps" } },
      "gridPos": { "h": 6, "w": 12, "x": 12, "y": 0 }
    },
    {
      "type": "timeseries",
      "title": "TPS (Transactions per Second)",
      "targets": [
        {
          "expr": "sum(rate(mysql_global_status_commands_total{command=~'commit|rollback'}[5m]))",
          "legendFormat": "TPS",
          "refId": "A"
        }
      ],
      "fieldConfig": { "defaults": { "unit": "tps" } },
      "gridPos": { "h": 6, "w": 12, "x": 0, "y": 6 }
    },
    {
      "type": "timeseries",
      "title": "Slow Queries",
      "targets": [
        {
          "expr": "rate(mysql_global_status_slow_queries[5m])",
          "legendFormat": "Slow Queries",
          "refId": "A"
        }
      ],
      "fieldConfig": { "defaults": { "unit": "ops" } },
      "gridPos": { "h": 6, "w": 12, "x": 12, "y": 6 }
    },
    {
      "type": "timeseries",
      "title": "InnoDB Buffer Pool Usage",
      "targets": [
        {
          "expr": "(mysql_global_status_innodb_buffer_pool_bytes_data / mysql_global_status_innodb_buffer_pool_bytes_total) or 0",
          "legendFormat": "Buffer Pool Usage",
          "refId": "A"
        }
      ],
      "fieldConfig": { "defaults": { "unit": "percent" } },
      "gridPos": { "h": 6, "w": 12, "x": 0, "y": 12 }
    },
    {
      "type": "timeseries",
      "title": "Locks (Table/Row)",
      "targets": [
        {
          "expr": "rate(mysql_global_status_table_locks_waited[5m])",
          "legendFormat": "Table Locks Waited",
          "refId": "A"
        },
        {
          "expr": "rate(mysql_global_status_innodb_row_lock_time[5m])",
          "legendFormat": "Row Lock Time",
          "refId": "B"
        }
      ],
      "fieldConfig": { "defaults": { "unit": "ms" } },
      "gridPos": { "h": 6, "w": 12, "x": 12, "y": 12 }
    },
    {
      "type": "timeseries",
      "title": "Threads",
      "targets": [
        {
          "expr": "mysql_global_status_threads_running",
          "legendFormat": "Threads Running",
          "refId": "A"
        },
        {
          "expr": "mysql_global_status_threads_connected",
          "legendFormat": "Threads Connected",
          "refId": "B"
        }
      ],
      "fieldConfig": { "defaults": { "unit": "short" } },
      "gridPos": { "h": 6, "w": 12, "x": 0, "y": 18 }
    },
    {
      "type": "timeseries",
      "title": "InnoDB I/O (Reads/Writes)",
      "targets": [
        {
          "expr": "rate(mysql_global_status_innodb_data_reads[5m])",
          "legendFormat": "Reads",
          "refId": "A"
        },
        {
          "expr": "rate(mysql_global_status_innodb_data_writes[5m])",
          "legendFormat": "Writes",
          "refId": "B"
        }
      ],
      "fieldConfig": { "defaults": { "unit": "ops" } },
      "gridPos": { "h": 6, "w": 12, "x": 12, "y": 18 }
    }
  ]
}

QPS verification

Create the file qps.sh, run it, and watch the QPS panel change.

qps.sh

bash
#!/bin/bash

# MySQL connection parameters (replace with your actual settings)
MYSQL_HOST="192.168.31.100"
MYSQL_PORT="3308"
MYSQL_USER="expo"
MYSQL_PWD="expo2025"
MYSQL_DB="test_slow"

# SQL to execute (a trivial query, so the database is not affected)
SQL_QUERY="SELECT 1;"

# Number of iterations (0 means loop forever; stop with Ctrl+C)
LOOP_COUNT=0
current_count=0

echo "Continuously connecting to MySQL and running the query... (press Ctrl+C to stop)"

# Connect and run the query in a loop
while true; do
    # connect to MySQL and run the query (--silent trims output to the bare result)
    mysql -h $MYSQL_HOST -P $MYSQL_PORT -u $MYSQL_USER -p$MYSQL_PWD $MYSQL_DB --silent -e "$SQL_QUERY"

    # check the exit status
    if [ $? -eq 0 ]; then
        current_count=$((current_count + 1))
        echo "Query $current_count succeeded"
    else
        echo "Query $current_count failed"
    fi

    # if a loop count was set, exit once it is reached
    if [ $LOOP_COUNT -ne 0 ] && [ $current_count -ge $LOOP_COUNT ]; then
        break
    fi

    # optional: throttle to about once per second (remove to run as fast as possible)
    # sleep 1
done

echo "Finished; $current_count queries succeeded in total"

redis_exporter

Install redis_exporter (Docker example)

bash
docker run -d --name redis_exporter \
  -p 9121:9121 \
  -e REDIS_ADDR=redis://<redis_host>:6379 \
  oliver006/redis_exporter

Prometheus scrape configuration:

yaml
scrape_configs:
  - job_name: 'redis'
    static_configs:
      - targets: ['<redis_host>:9121']

Example metrics:

redis_up → whether Redis is alive

redis_memory_used_bytes → memory usage

redis_connected_clients → number of client connections

redis_slowlog_length → slow log length (number of slow queries recorded)
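
A quick check that the exporter is scraping Redis (a sketch, assuming the container above runs on the local host): redis_up should report 1.

bash
curl -s http://localhost:9121/metrics | grep '^redis_up'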

kafka_exporter

bash
docker run -d --name kafka_exporter \
  -p 9308:9308 \
  -e KAFKA_SERVER=<kafka_host>:9092 \
  danielqsj/kafka_exporter

Prometheus scrape configuration:

yaml
scrape_configs:
  - job_name: 'kafka'
    static_configs:
      - targets: ['<kafka_host>:9308']

Example metrics:

kafka_topic_partition_current_offset → current offset

kafka_consumergroup_lag → consumer lag

kafka_server_brokertopicmetrics_messages_in_total → message throughput (a broker-side JMX metric; see the JMX Exporter note below)
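
To verify the exporter is reachable (a sketch, assuming the container above runs on the local host):

bash
curl -s http://localhost:9308/metrics | grep -E '^kafka_(brokers|consumergroup_lag)'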

You can also use the JMX Exporter

Kafka can additionally expose JVM-internal metrics through the JMX Exporter; Prometheus just scrapes the JMX Exporter's HTTP port.

windows_exporter

Windows Server monitoring (disk, CPU, memory)

  1. Download the latest windows_exporter release
  2. Run it (or install it as a Windows service):
powershell
# example invocation, for reference only:
.\windows_exporter-0.31.3-amd64.exe --collectors.enabled cpu,logical_disk,memory

The default port is 9182.

Prometheus scrape configuration:

yaml
scrape_configs:
  - job_name: 'windows'
    static_configs:
      - targets: ['<windows_host>:9182']

Example metrics:

windows_logical_disk_free_bytes → free disk space

windows_logical_disk_size_bytes → total disk space

windows_cpu_time_total → CPU time (used to derive CPU usage)

windows_memory_available_bytes → available memory
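
Once the Windows host is added to prometheus.yml, the target's health can be confirmed from the Prometheus API (a sketch, assuming the 'windows' job name used above):

bash
curl -sG http://localhost:9090/api/v1/query --data-urlencode 'query=up{job="windows"}'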

windows_exporter.json

Monitors CPU usage (average), memory usage (total/available), and disk usage (total/available).

json
{
  "annotations": {
    "list": []
  },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "id": null,
  "links": [],
  "panels": [
    {
	  "type": "timeseries",
	  "gridPos": { "h": 10, "w": 24, "x": 0, "y": 0 },
	  "title": "Windows CPU Usage",
	  "targets": [
		{
		  "expr": "100 - (avg by(instance) (rate(windows_cpu_time_total{mode=\"idle\"}[5m])) * 100)",
		  "legendFormat": "{{instance}}",
		  "refId": "A"
		}
	  ],
	  "fieldConfig": {
		"defaults": {
		  "unit": "percent"
		}
	  }
	},
	{
	  "type": "timeseries",
	  "gridPos": { "h": 10, "w": 24, "x": 0, "y": 10 },
	  "title": "Windows Memory Usage",
	  "targets": [
		{
		  "expr": "windows_memory_physical_total_bytes - windows_memory_physical_free_bytes",
		  "legendFormat": "Used - {{instance}}",
		  "refId": "B1"
		},
		{
		  "expr": "windows_memory_physical_free_bytes",
		  "legendFormat": "Free - {{instance}}",
		  "refId": "B2"
		}
	  ],
	  "fieldConfig": {
		"defaults": {
		  "unit": "bytes"
		}
	  }
	},
    {
      "type": "timeseries",
	  "gridPos": { "h": 10, "w": 24, "x": 0, "y": 10 },
      "title": "Windows Logical Disk Usage",
      "targets": [
        {
          "expr": "(windows_logical_disk_size_bytes - windows_logical_disk_free_bytes) / 1024 / 1024 / 1024",
          "legendFormat": "{{instance}} - {{device}}",
          "refId": "C"
        }
      ],
      "fieldConfig": {
        "defaults": {
          "unit": "gigabytes"
        }
      }
    },
    {
      "type": "timeseries",
	  "gridPos": { "h": 10, "w": 24, "x": 0, "y": 10 },
      "title": "Windows Logical Disk Free Space",
      "targets": [
        {
          "expr": "windows_logical_disk_free_bytes / 1024 / 1024 / 1024",
          "legendFormat": "{{instance}} - {{device}}",
          "refId": "D"
        }
      ],
      "fieldConfig": {
        "defaults": {
          "unit": "gigabytes"
        }
      }
    }
  ],
  "schemaVersion": 36,
  "style": "dark",
  "tags": ["windows", "cpu", "memory", "disk"],
  "templating": {
    "list": []
  },
  "time": {
    "from": "now-1h",
    "to": "now"
  },
  "timepicker": {},
  "timezone": "browser",
  "title": "Windows Server Monitoring",
  "version": 1
}
  1. Configure Task Scheduler so it starts at boot
  • Press Win+R, type taskschd.msc, and open Task Scheduler
  • In the "Actions" pane on the right, click "Create Basic Task"
  • Follow the wizard:
    • Name: e.g. "Start windows_exporter"
    • Trigger: select "When the computer starts"
    • Action: select "Start a program"
    • Program/script: browse to the D:\tools\start_exporter.bat created below
    • Finish: check "Open the Properties dialog for this task when I click Finish"

start_exporter.bat

bat
@echo off

SET SERVER_NAME=windows_exporter-0.31.3-amd64.exe
for /F "tokens=1,2,*" %%a in ('tasklist /FI "IMAGENAME eq %SERVER_NAME%"  /NH') do (
	echo "%%a"
	echo "%%b"
  echo %date% %time%
  taskkill /PID %%b /F
)

echo Sleeping for 5 seconds...
for /L %%i in (5,-1,1) do (
	set /p "=%%i, " <nul
	timeout /t 1 /nobreak >nul
)

cd /d D:\tools
start /b .\%SERVER_NAME% --collectors.enabled cpu,logical_disk,memory