1. File-based service discovery
File-based service discovery is only slightly better than static configuration, but it does not depend on any platform or third-party service, which makes it the simplest and most universal approach.
Prometheus Server periodically reloads target information from files. The files may be in YAML or JSON format and contain the defined target list plus optional label information.
Create the file used for service discovery and configure the desired targets in it:
cd /usr/local/prometheus
mkdir file_sd
cd file_sd/
vim node-exporter.yaml
- targets:
  - 20.0.0.10:9100
  labels:
    svc: node
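The file above can also be produced in one step with a heredoc; the sketch below uses a path under /tmp purely for illustration (the real setup uses /usr/local/prometheus/file_sd):

```shell
# Create the same file_sd target file in one shot (illustrative /tmp path)
mkdir -p /tmp/file_sd
cat > /tmp/file_sd/node-exporter.yaml <<'EOF'
- targets:
  - 20.0.0.10:9100
  labels:
    svc: node
EOF
# Quick sanity check: the file should contain exactly one targets block
grep -c 'targets:' /tmp/file_sd/node-exporter.yaml
```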
cd ..
vim prometheus.yml
- job_name: "node-exporter"
  scheme: http
  metrics_path: /metrics
  file_sd_configs:                    #use file-based service discovery
  - files:                            #list of files to load
    - file_sd/node-exporter.yaml      #path relative to the Prometheus working directory
systemctl restart prometheus.service
How to make Prometheus pick up target changes without restarting it
vim prometheus.yml
refresh_interval: 1m    #add this setting: re-read the target files every 1 minute (default 5m)
systemctl restart prometheus.service
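For context, `refresh_interval` belongs inside the `file_sd_configs` entry; after this one-time restart, the full job would look roughly like this sketch (based on the config built above):

```yaml
- job_name: "node-exporter"
  scheme: http
  metrics_path: /metrics
  file_sd_configs:
  - files:
    - file_sd/node-exporter.yaml
    refresh_interval: 1m    # re-read the file every minute; later target edits need no restart
```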
cd file_sd/
vim node-exporter.yaml
  - 20.0.0.60:9100    #append the IP of the additional host to monitor
(The Prometheus server in this setup is 20.0.0.60.)
2. Consul-based service discovery
Consul is an open-source tool written in Go that provides service registration, service discovery, and configuration management for distributed, service-oriented systems.
It offers service registration/discovery, health checks, Key/Value storage, multi-datacenter support, and distributed consistency guarantees.
Download: https://www.consul.io/downloads/
On the 20.0.0.20 server:
cd /opt
Upload consul_1.9.2_linux_amd64.zip
unzip consul_1.9.2_linux_amd64.zip
mv consul /usr/local/bin/
consul version
cd /usr/local/
mkdir consul
cd consul/
mkdir data conf logs
consul agent \
-server \
-bootstrap \
-ui \
-data-dir=/usr/local/consul/data \
-config-dir=/usr/local/consul/conf \
-bind=20.0.0.20 \
-client=0.0.0.0 \
-node=consul-server01 &> /usr/local/consul/logs/consul.log &
netstat -lntp | grep consul
Use Consul to discover target hosts
cd /usr/local/consul/conf
vim nodes.json
{
  "services": [
    {
      "id": "node_exporter-node01",
      "name": "node01",
      "address": "20.0.0.10",
      "port": 9100,
      "tags": ["nodes"],
      "checks": [{
        "http": "http://20.0.0.10:9100/metrics",
        "interval": "5s"
      }]
    },
    {
      "id": "node_exporter-node02",
      "name": "node02",
      "address": "20.0.0.80",
      "port": 9100,
      "tags": ["nodes"],
      "checks": [{
        "http": "http://20.0.0.80:9100/metrics",
        "interval": "5s"
      }]
    }
  ]
}
consul reload
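Before running `consul reload`, the registration file can be sanity-checked for JSON validity; the sketch below writes a copy to /tmp purely for illustration:

```shell
# Write the two-service registration and confirm it parses as JSON
cat > /tmp/nodes.json <<'EOF'
{
  "services": [
    {"id": "node_exporter-node01", "name": "node01", "address": "20.0.0.10",
     "port": 9100, "tags": ["nodes"],
     "checks": [{"http": "http://20.0.0.10:9100/metrics", "interval": "5s"}]},
    {"id": "node_exporter-node02", "name": "node02", "address": "20.0.0.80",
     "port": 9100, "tags": ["nodes"],
     "checks": [{"http": "http://20.0.0.80:9100/metrics", "interval": "5s"}]}
  ]
}
EOF
python3 -m json.tool /tmp/nodes.json > /dev/null && echo "valid JSON"
```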
Wire Prometheus up to Consul
On the Prometheus server:
vim prometheus.yml
- job_name: "node-exporter1"
  scheme: http
  metrics_path: /metrics
  consul_sd_configs:
  - server: 20.0.0.20:8500
    tags:
    - nodes
    refresh_interval: 1m    #re-query Consul for targets every 1 minute (consul_sd default is 30s; 5m is the file_sd default)
systemctl restart prometheus.service
How to remove/add hosts in Consul
consul services deregister -id "node_exporter-node01"
consul services register ./nodes.json
3. Kubernetes-API-based service discovery
Kubernetes-API-based service discovery treats the resource objects under the API Server's Node, Service, Endpoint, Pod, and Ingress resource types as targets, and continuously watches those resources for changes.
●Node, Service, Endpoint, Pod, and Ingress resources each have their own discovery mechanism
●The component responsible for discovering each resource type is called a role in Prometheus
●Node nodes can be discovered by deploying node-exporter on the cluster via a DaemonSet controller, or the kubelet itself can serve as Prometheus's entry point for discovering the nodes
On the k8s control-plane (master) node:
vim rbac.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: outside-prometheus
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: outside-prometheus
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - services
  - endpoints
  - pods
  - nodes/proxy
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "networking.k8s.io"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - configmaps
  - nodes/metrics
  verbs:
  - get
- nonResourceURLs:
  - /metrics
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: outside-prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: outside-prometheus
subjects:
- kind: ServiceAccount
  name: outside-prometheus
  namespace: monitoring
kubectl apply -f rbac.yaml
Retrieve the token stored in the Secret associated with the ServiceAccount, then save the token into a file on the Prometheus node.
TOKEN=`kubectl get secret/$(kubectl -n monitoring get secret | awk '/outside-prometheus/{print $1}') -n monitoring -o jsonpath={.data.token} | base64 -d`
echo $TOKEN
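The jsonpath output is base64-encoded, because Kubernetes stores Secret data that way; that is why the pipeline above ends in `base64 -d`. A quick round trip with a made-up string illustrates the decoding step:

```shell
# Secret data is stored base64-encoded; decoding recovers the original bytes
ENCODED=$(printf 'example-token' | base64)
printf '%s' "$ENCODED" | base64 -d    # prints: example-token
```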
cd /etc/kubernetes/pki/
scp ca.crt 20.0.0.80:/usr/local/prometheus
On the Prometheus server:
cd /usr/local/prometheus
echo eyJhbGciOiJSUzI1NiIsImtpZCI6IklnT2tQYnFwdlRPOGZoRXFKeUxCVDJ1T3o3MXhtUmNYc0NRVU9NcjJxaHcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtb25pdG9yaW5nIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im91dHNpZGUtcHJvbWV0aGV1cy10b2tlbi1ya2NjYyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJvdXRzaWRlLXByb21ldGhldXMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkNzZjZjYyNy04YWQzLTQxNGYtYjNlZi04MjExNWYwM2NhZDIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6bW9uaXRvcmluZzpvdXRzaWRlLXByb21ldGhldXMifQ.FgXP5RK4cs5zz-r3R6N7iX7c29miEGYLMTXyKgD9roWR6iH99oI47ZrTz2o_u8XoVkAG240x4TbQ6u3Liccb4ad_1_uyHPNSR2P0z0g9u_wW57s8_-eG1F27_xc6TAq9eLwYNtP2LiZIXqaVsuyxuNJ8M2jFmoXJLj2UYROnWngnep6KoRGenedHX-rtfdmFcSUJvzA28nns4_x2yEHHzXMFYwO1YbrZFW_JT89pJIfpSsMp-G0ZglyGl5DMuI7S7Bhw2lltAzugFQqaWzeZvoA63eAqth0aVoLcCaFVzXB0f3tS70eXmhkurV2jTfulD2xunUgnpCxds_JuvwgXNw > k8s-api-token
cat k8s-api-token
On the k8s control-plane node (note: this grants anonymous users cluster-admin, which is acceptable only in a lab environment):
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
On the Prometheus server:
vim prometheus.yml
- job_name: "k8s-apiserver"
  scheme: https
  kubernetes_sd_configs:
  - role: endpoints
    api_server: https://20.0.0.10:6443
    tls_config:                       #credentials for the discovery requests to the API Server
      ca_file: /usr/local/prometheus/ca.crt
    authorization:
      credentials_file: /usr/local/prometheus/k8s-api-token
  tls_config:                         #credentials for scraping the discovered targets
    ca_file: /usr/local/prometheus/ca.crt
  authorization:
    credentials_file: /usr/local/prometheus/k8s-api-token
  relabel_configs:
  - source_labels: ["__meta_kubernetes_namespace", "__meta_kubernetes_endpoints_name", "__meta_kubernetes_endpoint_port_name"]
    regex: default;kubernetes;https
    action: keep                      #keep only the default/kubernetes endpoints on the https port
- job_name: "kubernetes-nodes"
  kubernetes_sd_configs:
  - role: node
    api_server: https://20.0.0.10:6443
    tls_config:
      ca_file: /usr/local/prometheus/ca.crt
    authorization:
      credentials_file: /usr/local/prometheus/k8s-api-token
  relabel_configs:
  - source_labels: ["__address__"]    #rewrite the kubelet port 10250 to node_exporter's 9100
    regex: (.*):10250
    action: replace
    target_label: __address__
    replacement: $1:9100
  - action: labelmap                  #copy all Kubernetes node labels onto the target
    regex: __meta_kubernetes_node_label_(.+)
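The first relabel rule above turns each node's kubelet address (port 10250) into the node-exporter address (port 9100). The same regex transform, expressed with sed on a sample address for illustration:

```shell
# Simulate the relabel rule (.*):10250 -> $1:9100 on a sample address
echo '192.168.1.5:10250' | sed -E 's/(.*):10250/\1:9100/'    # prints: 192.168.1.5:9100
```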
systemctl restart prometheus.service