19.3 Build the image, deploy to k8s, configure Prometheus scraping, and view graphs in Grafana

Key points of this section:

  • Build the image, export it, transfer it to each node, and import it
  • Run the project
  • Configure Prometheus and Grafana

Building the image

Local build

shell
docker build -t ink8s-pod-metrics:v1 .
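If the Dockerfile from the earlier build section is not at hand, a minimal multi-stage sketch like the one below would work for a plain Go module; the Go version, base images, and binary name are assumptions here, not the project's actual Dockerfile.

shell
# Minimal Dockerfile sketch (assumed layout: a Go module with the main package at the repo root)
cat > Dockerfile <<'EOF'
FROM golang:1.16 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /ink8s-pod-metrics .

FROM alpine:3.14
COPY --from=builder /ink8s-pod-metrics /usr/local/bin/ink8s-pod-metrics
ENTRYPOINT ["/usr/local/bin/ink8s-pod-metrics"]
EOF
docker build -t ink8s-pod-metrics:v1 .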

The build process runs through the Dockerfile steps and tags the image as ink8s-pod-metrics:v1.

Exporting the image

shell
docker save ink8s-pod-metrics:v1 > ink8s-pod-metrics.tar

Transferring to each node

shell
scp ink8s-pod-metrics.tar k8s-node01:~
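When there is more than one worker node, a small loop avoids repeating the scp; the node list below is an example (k8s-node02 is a hypothetical second node), adjust it to your cluster.

shell
# Copy the image tarball to every worker node (example node list)
for node in k8s-node01 k8s-node02; do
  scp ink8s-pod-metrics.tar "${node}":~
done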

Importing the image on each node

With docker

shell
docker load < ink8s-pod-metrics.tar

With containerd

shell
ctr --namespace k8s.io images import ink8s-pod-metrics.tar
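Before applying the Deployment it is worth confirming that the runtime actually sees the image, otherwise the pod will sit in ErrImagePull / ImagePullBackOff. On a containerd node this can be checked with ctr, or with crictl if it is installed:

shell
# List images in the k8s.io namespace and filter for ours
ctr --namespace k8s.io images ls | grep ink8s-pod-metrics

# Equivalent check through the CRI
crictl images | grep ink8s-pod-metrics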

Running the project

shell
 kubectl apply -f rbac.yaml
 kubectl apply -f deployment.yaml
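Before inspecting individual pods, kubectl rollout status blocks until the Deployment is ready (or fails); the Deployment name ink8s-pod-metrics-deployment matches the pod names in the listing below.

shell
# Wait for the Deployment to finish rolling out
kubectl rollout status deployment/ink8s-pod-metrics-deployment

# If it does not converge, the events usually explain why
kubectl describe deployment ink8s-pod-metrics-deployment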

Verification

shell
[root@k8s-master01 ink8s-pod-metrics]# kubectl get pod -o wide 
NAME                                           READY   STATUS    RESTARTS   AGE    IP              NODE         NOMINATED NODE   READINESS GATES
grafana-d5d85bcd6-f74ch                        1/1     Running   0          3d9h   10.100.85.199   k8s-node01   <none>           <none>
grafana-d5d85bcd6-l44mx                        1/1     Running   0          3d9h   10.100.85.198   k8s-node01   <none>           <none>
ink8s-pod-metrics-deployment-85d9795d6-95lsp   1/1     Running   0          13m    10.100.85.207   k8s-node01   <none>           <none>
  • Logs
shell
[root@k8s-master01 ink8s-pod-metrics]# kubectl logs -l app=ink8s-pod-metrics  -f  
2021-08-23 20:34:35.377256 INFO app/get_k8s_objs.go:128 [pod.label:map[component:etcd tier:control-plane]]
2021-08-23 20:34:35.377266 INFO app/get_k8s_objs.go:128 [pod.label:map[component:kube-apiserver tier:control-plane]]
2021-08-23 20:34:35.377274 INFO app/get_k8s_objs.go:128 [pod.label:map[component:kube-controller-manager tier:control-plane]]
2021-08-23 20:34:35.377292 INFO app/get_k8s_objs.go:128 [pod.label:map[controller-revision-hash:85c698c6d4 k8s-app:kube-proxy pod-template-generation:1]]
2021-08-23 20:34:35.377299 INFO app/get_k8s_objs.go:128 [pod.label:map[controller-revision-hash:85c698c6d4 k8s-app:kube-proxy pod-template-generation:1]]
2021-08-23 20:34:35.377317 INFO app/get_k8s_objs.go:128 [pod.label:map[component:kube-scheduler tier:control-plane]]
2021-08-23 20:34:35.377324 INFO app/get_k8s_objs.go:128 [pod.label:map[app.kubernetes.io/name:kube-state-metrics app.kubernetes.io/version:v1.9.7 pod-template-hash:564668c858]]
2021-08-23 20:34:35.377331 INFO app/get_k8s_objs.go:128 [pod.label:map[k8s-app:metrics-server pod-template-hash:7dbf6c4558]]
2021-08-23 20:34:35.377336 INFO app/get_k8s_objs.go:128 [pod.label:map[controller-revision-hash:prometheus-5b9cdcfd6c k8s-app:prometheus statefulset.kubernetes.io/pod-name:prometheus-0]]
2021-08-23 20:34:35.377358 INFO app/get_k8s_objs.go:143 server_pod_ips_result][num_pod:11][time_took_seconds:6.189551999]
2021-08-23 20:34:39.197614 INFO app/get_k8s_objs.go:107 server_node_ips_result][num_node:2][time_took_seconds:0.009575987]
2021-08-23 20:34:39.200824 INFO app/get_k8s_objs.go:128 [pod.label:map[k8s-app:kube-dns pod-template-hash:68b9d7b887]]
2021-08-23 20:34:39.200857 INFO app/get_k8s_objs.go:128 [pod.label:map[k8s-app:kube-dns pod-template-hash:68b9d7b887]]
2021-08-23 20:34:39.200871 INFO app/get_k8s_objs.go:128 [pod.label:map[component:etcd tier:control-plane]]
2021-08-23 20:34:39.200889 INFO app/get_k8s_objs.go:128 [pod.label:map[component:kube-apiserver tier:control-plane]]
2021-08-23 20:34:39.200903 INFO app/get_k8s_objs.go:128 [pod.label:map[component:kube-controller-manager tier:control-plane]]
2021-08-23 20:34:39.200920 INFO app/get_k8s_objs.go:128 [pod.label:map[controller-revision-hash:85c698c6d4 k8s-app:kube-proxy pod-template-generation:1]]
2021-08-23 20:34:39.200934 INFO app/get_k8s_objs.go:128 [pod.label:map[controller-revision-hash:85c698c6d4 k8s-app:kube-proxy pod-template-generation:1]]
2021-08-23 20:34:39.200947 INFO app/get_k8s_objs.go:128 [pod.label:map[component:kube-scheduler tier:control-plane]]
2021-08-23 20:34:39.200961 INFO app/get_k8s_objs.go:128 [pod.label:map[app.kubernetes.io/name:kube-state-metrics app.kubernetes.io/version:v1.9.7 pod-template-hash:564668c858]]
2021-08-23 20:34:39.200981 INFO app/get_k8s_objs.go:128 [pod.label:map[k8s-app:metrics-server pod-template-hash:7dbf6c4558]]
2021-08-23 20:34:39.200992 INFO app/get_k8s_objs.go:128 [pod.label:map[controller-revision-hash:prometheus-5b9cdcfd6c k8s-app:prometheus statefulset.kubernetes.io/pod-name:prometheus-0]]
2021-08-23 20:34:39.201022 INFO app/get_k8s_objs.go:143 server_pod_ips_result][num_pod:11][time_took_seconds:0.013052527]

Requesting the pod's metrics from a node

  • curl the pod IP on :8080/metrics
shell
[root@k8s-master01 ink8s-pod-metrics]# curl -s 10.100.85.207:8080/metrics |grep ink8s
# HELP ink8s_pod_metrics_get_node_detail k8s node detail each
# TYPE ink8s_pod_metrics_get_node_detail gauge
ink8s_pod_metrics_get_node_detail{containerRuntimeVersion="containerd://1.4.4",hostname="k8s-master01",ip="172.20.70.205",kubeletVersion="v1.20.1"} 1
ink8s_pod_metrics_get_node_detail{containerRuntimeVersion="containerd://1.4.4",hostname="k8s-node01",ip="172.20.70.215",kubeletVersion="v1.20.1"} 1
# HELP ink8s_pod_metrics_get_node_last_duration_seconds get node last duration seconds
# TYPE ink8s_pod_metrics_get_node_last_duration_seconds gauge
ink8s_pod_metrics_get_node_last_duration_seconds 0.008066143
# HELP ink8s_pod_metrics_get_pod_control_plane_pod_detail k8s pod detail of control plane
# TYPE ink8s_pod_metrics_get_pod_control_plane_pod_detail gauge
ink8s_pod_metrics_get_pod_control_plane_pod_detail{component="etcd",ip="172.20.70.205",pod_name="etcd-k8s-master01"} 1
ink8s_pod_metrics_get_pod_control_plane_pod_detail{component="kube-apiserver",ip="172.20.70.205",pod_name="kube-apiserver-k8s-master01"} 1
ink8s_pod_metrics_get_pod_control_plane_pod_detail{component="kube-controller-manager",ip="172.20.70.205",pod_name="kube-controller-manager-k8s-master01"} 1
ink8s_pod_metrics_get_pod_control_plane_pod_detail{component="kube-scheduler",ip="172.20.70.205",pod_name="kube-scheduler-k8s-master01"} 1
# HELP ink8s_pod_metrics_get_pod_last_duration_seconds get pod last duration seconds
# TYPE ink8s_pod_metrics_get_pod_last_duration_seconds gauge
ink8s_pod_metrics_get_pod_last_duration_seconds 0.01159838

Checking the pod on the Prometheus targets page

  • The pod has been discovered, but the target reports an error: an HTTPS request was sent to a plain-HTTP server (this can also be confirmed from the command line, as shown below)
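The same information shown on the targets page can be pulled from the Prometheus HTTP API, which is convenient when the UI is not exposed. A sketch assuming jq is installed; <prometheus-host> is a placeholder to replace with your Prometheus address (for example the prometheus-0 pod IP):

shell
# Show scrape URL, health and last error for every target of the kubernetes-pods job
curl -s "http://<prometheus-host>:9090/api/v1/targets" \
  | jq '.data.activeTargets[]
        | select(.labels.job == "kubernetes-pods")
        | {scrapeUrl, health, lastError}'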

Prometheus and Grafana configuration

Checking the kubernetes-pods job

  • If this job was previously configured with scheme: https, change it to http; the pod annotations the job relies on can be verified as shown after the config below
yaml
- job_name: kubernetes-pods
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics
  scheme: http
  follow_redirects: true
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    separator: ;
    regex: "true"
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
    separator: ;
    regex: (.+)
    target_label: __metrics_path__
    replacement: $1
    action: replace
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    separator: ;
    regex: ([^:]+)(?::\d+)?;(\d+)
    target_label: __address__
    replacement: $1:$2
    action: replace
  - separator: ;
    regex: __meta_kubernetes_pod_label_(.+)
    replacement: $1
    action: labelmap
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: kubernetes_namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: kubernetes_pod_name
    replacement: $1
    action: replace
  kubernetes_sd_configs:
  - role: pod
    kubeconfig_file: ""
    follow_redirects: true
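Note that this job only keeps pods annotated with prometheus.io/scrape: "true" and takes the scrape port from prometheus.io/port, so deployment.yaml must set those annotations on the pod template (it evidently does, since the pod is scraped on :8080 under job="kubernetes-pods"). The annotations on the running pod can be checked directly:

shell
# Print the annotations of the running pod; prometheus.io/scrape and prometheus.io/port should be present
kubectl get pod -l app=ink8s-pod-metrics \
  -o jsonpath='{.items[0].metadata.annotations}{"\n"}'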

Checking the metrics in Prometheus

  • Query ink8s_pod_metrics_get_node_detail (the same query can be run against the HTTP API, as shown after the result below)
promql
ink8s_pod_metrics_get_node_detail{app="ink8s-pod-metrics", containerRuntimeVersion="containerd://1.4.4", hostname="k8s-master01", instance="10.100.85.207:8080", ip="172.20.70.205", job="kubernetes-pods", kubeletVersion="v1.20.1", kubernetes_namespace="default", kubernetes_pod_name="ink8s-pod-metrics-deployment-85d9795d6-95lsp", pod_template_hash="85d9795d6"}  1
ink8s_pod_metrics_get_node_detail{app="ink8s-pod-metrics", containerRuntimeVersion="containerd://1.4.4", hostname="k8s-node01", instance="10.100.85.207:8080", ip="172.20.70.215", job="kubernetes-pods", kubeletVersion="v1.20.1", kubernetes_namespace="default", kubernetes_pod_name="ink8s-pod-metrics-deployment-85d9795d6-95lsp", pod_template_hash="85d9795d6"}  1
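The same expression can also be evaluated through the Prometheus HTTP API, which is handy for scripted checks; <prometheus-host> is again a placeholder for your Prometheus address:

shell
# Instant query for the node-detail gauge
curl -s "http://<prometheus-host>:9090/api/v1/query" \
  --data-urlencode 'query=ink8s_pod_metrics_get_node_detail'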

Configuring Grafana

  • Example dashboard screenshots; Prometheus first needs to be added as a Grafana data source, as sketched below
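Adding the data source can be done in the Grafana UI, or via Grafana's HTTP API as in the sketch below; the admin:admin credentials, <grafana-host>, and the in-cluster URL http://prometheus:9090 are assumptions to adjust for your setup. Panels can then be built on expressions such as ink8s_pod_metrics_get_node_detail (as a table) and ink8s_pod_metrics_get_node_last_duration_seconds / ink8s_pod_metrics_get_pod_last_duration_seconds (as graphs).

shell
# Create a Prometheus data source through the Grafana HTTP API (credentials and URLs are assumptions)
curl -s -X POST "http://admin:admin@<grafana-host>:3000/api/datasources" \
  -H 'Content-Type: application/json' \
  -d '{"name": "prometheus", "type": "prometheus", "url": "http://prometheus:9090", "access": "proxy"}'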

Summary of this section:

  • Build the image, export it, transfer it to each node, and import it
  • Run the project
  • Configure Prometheus and Grafana