
High-Availability Kubernetes Monitoring: Deploying Thanos

Published: 2020-01-23 10:11:10


This is the author's own translation, originally published on the Kubernetes Chinese community.

For more Kubernetes articles, please follow the Kubernetes Chinese community.

Table of Contents

Introduction

The Need for Prometheus High Availability

Implementation

Thanos Architecture

Thanos consists of the following components:

Deduplication

Thanos Setup

Prerequisites

Component Deployment

Deploy the Prometheus ServiceAccount, ClusterRole, and ClusterRoleBinding

Deploy the Prometheus configuration ConfigMap (configmap.yaml)

Deploy the prometheus-rules ConfigMap

Deploy the Prometheus StatefulSet

Deploy the Prometheus Services

Deploy Thanos Querier

Deploy Thanos Store Gateway

Deploy Thanos Ruler

Deploy Alertmanager

Deploy Kube-state-metrics

Deploy the Node-Exporter DaemonSet

Deploy Grafana

Deploy the Ingress Object

Conclusion

Introduction

The Need for Prometheus High Availability

Kubernetes adoption has grown many-fold over the past few months, and it is now clear that Kubernetes is the de facto standard for container orchestration.

At the same time, monitoring is an essential aspect of any infrastructure. Prometheus is considered an excellent choice for monitoring both containerized and non-containerized workloads. We should make sure the monitoring system is highly available and highly scalable so that it keeps up with the needs of a growing infrastructure, especially in the case of Kubernetes.

Therefore, today we will deploy a clustered Prometheus setup that not only tolerates node failures but also ensures data is archived for future reference. Our setup is also very scalable, to the point where we can span multiple Kubernetes clusters under the same monitoring system.

Current Approach

Most Prometheus deployments use pods with persistent storage, and Prometheus is scaled out using federation. However, not all data can be aggregated via federation, and as you add more servers you often need a separate mechanism to manage the Prometheus configuration.

Solution

Thanos is designed to solve the problems above. With Thanos we can not only scale Prometheus instances and deduplicate data across them, but also archive the data in durable object storage such as GCS or S3.

Implementation

Thanos Architecture

Thanos consists of the following components:

- Thanos Sidecar: the main component that runs alongside Prometheus. It reads Prometheus data and archives it in the object store. It also manages Prometheus's configuration and lifecycle. To distinguish each Prometheus instance, the sidecar injects external labels into the Prometheus configuration. The sidecar can run queries against Prometheus's PromQL interface. It also listens for the Thanos gRPC protocol and translates between gRPC queries and REST queries.

- Thanos Store: this component implements the Store API on top of the historical data in the object store. It acts mainly as an API gateway, so it does not need large amounts of local disk space. It joins the Thanos cluster on startup and advertises the data it can access. It keeps a small amount of information about all remote blocks on local disk and keeps it in sync with the object store. This data is generally safe to delete across restarts, at the cost of longer startup times.

- Thanos Query: the query component. It listens on HTTP and translates queries into the Thanos gRPC format. It aggregates query results from different sources and can read data from Sidecars and Stores. In a high-availability setup it can even deduplicate the results.

Deduplication

Prometheus is stateful and does not allow its database to be replicated. This means that simply running multiple Prometheus replicas is not an ideal way to improve availability.

Simple load balancing will not work either. Suppose a replica crashes and comes back up: querying it will show a small gap for the period it was down. The second replica might be up during that window, but it could itself be down at another time (for example, during a rolling restart), so load balancing across the replicas does not work well.

Instead, the Thanos Querier pulls the data from both replicas and deduplicates the signals, filling the gaps for the consumer of the query.

- Thanos Compact: the compactor component of Thanos. It applies the compaction procedure of the Prometheus 2.0 storage engine to the block data stored in the object store, and is generally deployed as a singleton. It is also responsible for downsampling the data: 5m downsampling after 40 hours and 1h downsampling after 10 days.

- Thanos Ruler: it serves essentially the same purpose as Prometheus rules; the only difference is that it can communicate with the other Thanos components.

Thanos Setup

Prerequisites

To fully follow this tutorial, you will need the following:

- Working knowledge of Kubernetes and kubectl.
- A Kubernetes cluster with at least 3 nodes (this demo uses a GKE cluster).
- An Ingress Controller and Ingress objects (this demo uses the Nginx Ingress Controller). This is not mandatory, but it is strongly recommended to reduce the number of external endpoints you need to create.
- Credentials for the Thanos components to access the object store (GCS in this case). To set this up (see the command sketch after this list):
- Create 2 GCS buckets and name them prometheus-long-term and thanos-ruler.
- Create a service account with the role "Storage Object Admin".
- Download the key file as JSON and name it thanos-gcs-credentials.json.
- Create a Kubernetes secret from the key: kubectl create secret generic thanos-gcs-credentials --from-file=thanos-gcs-credentials.json -n monitoring
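
The GCS side of these prerequisites can be scripted. A rough sketch using gcloud/gsutil follows; the project ID and the service-account name "thanos" are placeholders, not part of the original guide:

# Create the two GCS buckets (names from the prerequisites above)
gsutil mb gs://prometheus-long-term
gsutil mb gs://thanos-ruler

# Create a service account and grant it the "Storage Object Admin" role
# (replace <your-project> with your GCP project ID; "thanos" is an arbitrary name)
gcloud iam service-accounts create thanos --display-name "thanos"
gcloud projects add-iam-policy-binding <your-project> \
  --member "serviceAccount:thanos@<your-project>.iam.gserviceaccount.com" \
  --role "roles/storage.objectAdmin"

# Download its key as JSON and store it as a Kubernetes secret
gcloud iam service-accounts keys create thanos-gcs-credentials.json \
  --iam-account thanos@<your-project>.iam.gserviceaccount.com
kubectl create namespace monitoring
kubectl create secret generic thanos-gcs-credentials \
  --from-file=thanos-gcs-credentials.json -n monitoring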

Component Deployment

Deploy the Prometheus ServiceAccount, ClusterRole, and ClusterRoleBinding

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitoring
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: monitoring
  namespace: monitoring
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: monitoring
subjects:
  - kind: ServiceAccount
    name: monitoring
    namespace: monitoring
roleRef:
  kind: ClusterRole
  name: monitoring
  apiGroup: rbac.authorization.k8s.io
---

The manifest above creates the monitoring namespace along with the ServiceAccount, ClusterRole, and ClusterRoleBinding used by Prometheus.
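
Assuming the manifest above is saved as prometheus-rbac.yaml (any filename works), it can be applied and verified like this:

kubectl apply -f prometheus-rbac.yaml
kubectl -n monitoring get serviceaccount monitoring
kubectl get clusterrole monitoring
kubectl get clusterrolebinding monitoring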

Deploy the Prometheus configuration ConfigMap (configmap.yaml)

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring
data:
  prometheus.yaml.tmpl: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
      external_labels:
        cluster: prometheus-ha
        # Each Prometheus has to have unique labels.
        replica: $(POD_NAME)

    rule_files:
      - /etc/prometheus/rules/*rules.yaml

    alerting:

      # We want our alerts to be deduplicated
      # from different replicas.
      alert_relabel_configs:
      - regex: replica
        action: labeldrop

      alertmanagers:
        - scheme: http
          path_prefix: /
          static_configs:
            - targets: ['alertmanager:9093']

    scrape_configs:
    - job_name: kubernetes-nodes-cadvisor
      scrape_interval: 10s
      scrape_timeout: 10s
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      kubernetes_sd_configs:
        - role: node
      relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        # Only for Kubernetes ^1.7.3.
        # See: /prometheus/prometheus/issues/2916
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
      metric_relabel_configs:
        - action: replace
          source_labels: [id]
          regex: '^/machine\.slice/machine-rkt\\x2d([^\\]+)\\.+/([^/]+)\.service$'
          target_label: rkt_container_name
          replacement: '${2}-${1}'
        - action: replace
          source_labels: [id]
          regex: '^/system\.slice/(.+)\.service$'
          target_label: systemd_service_name
          replacement: '${1}'

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
        - role: pod
      relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: kubernetes_pod_name
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2

    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
        - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https

    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
        - role: endpoints
      relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: kubernetes_name
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: (.+)(?::\d+);(\d+)
          replacement: $1:$2

The ConfigMap above creates the Prometheus configuration file template. This template is read by the Thanos sidecar component, which generates the actual configuration file, which in turn is consumed by the Prometheus container running in the same pod.

It is extremely important to add the external_labels section to the configuration file so that the Querier can deduplicate data based on it. A quick way to check the rendered labels is sketched below.
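
A minimal way to confirm the substitution, once the Prometheus StatefulSet deployed below is running, is to read the rendered file out of the shared volume (this check is an addition, not part of the original walkthrough):

kubectl -n monitoring exec prometheus-0 -c prometheus -- \
  grep -A3 external_labels /etc/prometheus-shared/prometheus.yaml
# expected output for the first replica:
#   external_labels:
#     cluster: prometheus-ha
#     replica: prometheus-0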

Deploy the prometheus-rules ConfigMap. This creates our alert rules, which are relayed to Alertmanager for delivery.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rules
  labels:
    name: prometheus-rules
  namespace: monitoring
data:
  alert-rules.yaml: |-
    groups:
      - name: Deployment
        rules:
        - alert: Deployment at 0 Replicas
          annotations:
            summary: Deployment {{$labels.deployment}} in {{$labels.namespace}} is currently having no pods running
          expr: |
            sum(kube_deployment_status_replicas{pod_template_hash=""}) by (deployment,namespace) < 1
          for: 1m
          labels:
            team: devops

        - alert: HPA Scaling Limited
          annotations:
            summary: HPA named {{$labels.hpa}} in {{$labels.namespace}} namespace has reached scaling limited state
          expr: |
            (sum(kube_hpa_status_condition{condition="ScalingLimited",status="true"}) by (hpa,namespace)) == 1
          for: 1m
          labels:
            team: devops

        - alert: HPA at MaxCapacity
          annotations:
            summary: HPA named {{$labels.hpa}} in {{$labels.namespace}} namespace is running at Max Capacity
          expr: |
            ((sum(kube_hpa_spec_max_replicas) by (hpa,namespace)) - (sum(kube_hpa_status_current_replicas) by (hpa,namespace))) == 0
          for: 1m
          labels:
            team: devops

      - name: Pods
        rules:
        - alert: Container restarted
          annotations:
            summary: Container named {{$labels.container}} in {{$labels.pod}} in {{$labels.namespace}} was restarted
          expr: |
            sum(increase(kube_pod_container_status_restarts_total{namespace!="kube-system",pod_template_hash=""}[1m])) by (pod,namespace,container) > 0
          for: 0m
          labels:
            team: dev

        - alert: High Memory Usage of Container
          annotations:
            summary: Container named {{$labels.container}} in {{$labels.pod}} in {{$labels.namespace}} is using more than 75% of Memory Limit
          expr: |
            ((( sum(container_memory_usage_bytes{image!="",container_name!="POD", namespace!="kube-system"}) by (namespace,container_name,pod_name) / sum(container_spec_memory_limit_bytes{image!="",container_name!="POD",namespace!="kube-system"}) by (namespace,container_name,pod_name) ) * 100 ) < +Inf ) > 75
          for: 5m
          labels:
            team: dev

        - alert: High CPU Usage of Container
          annotations:
            summary: Container named {{$labels.container}} in {{$labels.pod}} in {{$labels.namespace}} is using more than 75% of CPU Limit
          expr: |
            ((sum(irate(container_cpu_usage_seconds_total{image!="",container_name!="POD", namespace!="kube-system"}[30s])) by (namespace,container_name,pod_name) / sum(container_spec_cpu_quota{image!="",container_name!="POD", namespace!="kube-system"} / container_spec_cpu_period{image!="",container_name!="POD", namespace!="kube-system"}) by (namespace,container_name,pod_name) ) * 100) > 75
          for: 5m
          labels:
            team: dev

      - name: Nodes
        rules:
        - alert: High Node Memory Usage
          annotations:
            summary: Node {{$labels.kubernetes_io_hostname}} has more than 80% memory used. Plan Capacity
          expr: |
            (sum (container_memory_working_set_bytes{id="/",container_name!="POD"}) by (kubernetes_io_hostname) / sum (machine_memory_bytes{}) by (kubernetes_io_hostname) * 100) > 80
          for: 5m
          labels:
            team: devops

        - alert: High Node CPU Usage
          annotations:
            summary: Node {{$labels.kubernetes_io_hostname}} has more than 80% allocatable cpu used. Plan Capacity.
          expr: |
            (sum(rate(container_cpu_usage_seconds_total{id="/", container_name!="POD"}[1m])) by (kubernetes_io_hostname) / sum(machine_cpu_cores) by (kubernetes_io_hostname) * 100) > 80
          for: 5m
          labels:
            team: devops

        - alert: High Node Disk Usage
          annotations:
            summary: Node {{$labels.kubernetes_io_hostname}} has more than 85% disk used. Plan Capacity.
          expr: |
            (sum(container_fs_usage_bytes{device=~"^/dev/[sv]d[a-z][1-9]$",id="/",container_name!="POD"}) by (kubernetes_io_hostname) / sum(container_fs_limit_bytes{container_name!="POD",device=~"^/dev/[sv]d[a-z][1-9]$",id="/"}) by (kubernetes_io_hostname)) * 100 > 85
          for: 5m
          labels:
            team: devops

Deploy the Prometheus StatefulSet

apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
  namespace: monitoring
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: prometheus
  namespace: monitoring
spec:
  replicas: 3
  serviceName: prometheus-service
  template:
    metadata:
      labels:
        app: prometheus
        thanos-store-api: "true"
    spec:
      serviceAccountName: monitoring
      containers:
        - name: prometheus
          image: prom/prometheus:v2.4.3
          args:
            - "--config.file=/etc/prometheus-shared/prometheus.yaml"
            - "--storage.tsdb.path=/prometheus/"
            - "--web.enable-lifecycle"
            - "--storage.tsdb.no-lockfile"
            - "--storage.tsdb.min-block-duration=2h"
            - "--storage.tsdb.max-block-duration=2h"
          ports:
            - name: prometheus
              containerPort: 9090
          volumeMounts:
            - name: prometheus-storage
              mountPath: /prometheus/
            - name: prometheus-config-shared
              mountPath: /etc/prometheus-shared/
            - name: prometheus-rules
              mountPath: /etc/prometheus/rules
        - name: thanos
          image: quay.io/thanos/thanos:v0.8.0
          args:
            - "sidecar"
            - "--log.level=debug"
            - "--tsdb.path=/prometheus"
            - "--prometheus.url=http://127.0.0.1:9090"
            - "--objstore.config={type: GCS, config: {bucket: prometheus-long-term}}"
            - "--reloader.config-file=/etc/prometheus/prometheus.yaml.tmpl"
            - "--reloader.config-envsubst-file=/etc/prometheus-shared/prometheus.yaml"
            - "--reloader.rule-dir=/etc/prometheus/rules/"
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /etc/secret/thanos-gcs-credentials.json
          ports:
            - name: http-sidecar
              containerPort: 10902
            - name: grpc
              containerPort: 10901
          livenessProbe:
            httpGet:
              port: 10902
              path: /-/healthy
          readinessProbe:
            httpGet:
              port: 10902
              path: /-/ready
          volumeMounts:
            - name: prometheus-storage
              mountPath: /prometheus
            - name: prometheus-config-shared
              mountPath: /etc/prometheus-shared/
            - name: prometheus-config
              mountPath: /etc/prometheus
            - name: prometheus-rules
              mountPath: /etc/prometheus/rules
            - name: thanos-gcs-credentials
              mountPath: /etc/secret
              readOnly: false
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 1000
      volumes:
        - name: prometheus-config
          configMap:
            defaultMode: 420
            name: prometheus-server-conf
        - name: prometheus-config-shared
          emptyDir: {}
        - name: prometheus-rules
          configMap:
            name: prometheus-rules
        - name: thanos-gcs-credentials
          secret:
            secretName: thanos-gcs-credentials
  volumeClaimTemplates:
  - metadata:
      name: prometheus-storage
      namespace: monitoring
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 20Gi

About the manifest above, it is important to understand the following:

- Prometheus is deployed as a StatefulSet with 3 replicas, and each replica dynamically provisions its own persistent volume.
- The Prometheus configuration is generated by the Thanos sidecar container, using the template file we created above.
- Thanos handles data compaction, so we need to set --storage.tsdb.min-block-duration=2h and --storage.tsdb.max-block-duration=2h.
- The Prometheus StatefulSet is labelled thanos-store-api: "true", so each pod is discovered by the headless service that we create in the Service resource below. The Thanos Querier uses this headless service to query data across all Prometheus instances. We apply the same label to the Thanos Store and Thanos Ruler components so that the Querier discovers them as well and uses them when querying metrics.
- The GCS credentials path is provided through the GOOGLE_APPLICATION_CREDENTIALS environment variable; the credentials come from the secret we created in the prerequisites.

A quick health check for the sidecars is sketched below.
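
A minimal health check for the sidecars, once the pods are up, reuses the same endpoints the probes above point at:

# Port-forward the sidecar's HTTP port on the first replica
kubectl -n monitoring port-forward prometheus-0 10902:10902 &

# The same endpoints the liveness/readiness probes use
curl -s http://localhost:10902/-/healthy
curl -s http://localhost:10902/-/ready

# Optionally, watch the sidecar logs for block uploads to GCS
kubectl -n monitoring logs prometheus-0 -c thanos --tail=20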

Deploy the Prometheus Services

apiVersion: v1
kind: Service
metadata:
  name: prometheus-0-service
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
  namespace: monitoring
  labels:
    name: prometheus
spec:
  selector:
    statefulset.kubernetes.io/pod-name: prometheus-0
  ports:
    - name: prometheus
      port: 8080
      targetPort: prometheus
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-1-service
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
  namespace: monitoring
  labels:
    name: prometheus
spec:
  selector:
    statefulset.kubernetes.io/pod-name: prometheus-1
  ports:
    - name: prometheus
      port: 8080
      targetPort: prometheus
---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-2-service
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
  namespace: monitoring
  labels:
    name: prometheus
spec:
  selector:
    statefulset.kubernetes.io/pod-name: prometheus-2
  ports:
    - name: prometheus
      port: 8080
      targetPort: prometheus
---
#This service creates a srv record for querier to find about store-api's
apiVersion: v1
kind: Service
metadata:
  name: thanos-store-gateway
  namespace: monitoring
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: grpc
      port: 10901
      targetPort: grpc
  selector:
    thanos-store-api: "true"

We create a separate service for each Prometheus pod in the StatefulSet. These are not strictly required and exist only for debugging purposes. The purpose of the thanos-store-gateway headless service was explained above. We will expose the Prometheus services later using an Ingress object. You can check which pods the headless service selects as shown below.
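
To see which pods the headless service currently selects:

kubectl -n monitoring get endpoints thanos-store-gateway -o wide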

Deploy Thanos Querier

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: thanos-querier
  namespace: monitoring
  labels:
    app: thanos-querier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: thanos-querier
  template:
    metadata:
      labels:
        app: thanos-querier
    spec:
      containers:
      - name: thanos
        image: quay.io/thanos/thanos:v0.8.0
        args:
        - query
        - --log.level=debug
        - --query.replica-label=replica
        - --store=dnssrv+thanos-store-gateway:10901
        ports:
        - name: http
          containerPort: 10902
        - name: grpc
          containerPort: 10901
        livenessProbe:
          httpGet:
            port: http
            path: /-/healthy
        readinessProbe:
          httpGet:
            port: http
            path: /-/ready
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: thanos-querier
  name: thanos-querier
  namespace: monitoring
spec:
  ports:
  - port: 9090
    protocol: TCP
    targetPort: http
    name: http
  selector:
    app: thanos-querier

The Thanos Querier is one of the main components of a Thanos deployment. Note the following:

- The container argument --store=dnssrv+thanos-store-gateway:10901 helps discover all the components that metric data should be queried from.
- The thanos-querier service provides a web interface for running PromQL queries. It also has the option to deduplicate data across multiple Prometheus clusters.
- The Thanos Querier is also the data source for all dashboards, such as Grafana. A quick API query against it is sketched below.
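
Beyond the web UI, the Querier serves a Prometheus-compatible HTTP API, so you can run a deduplicated query against it directly. A sketch, assuming you port-forward the thanos-querier service and that this Thanos version honours the dedup URL parameter:

kubectl -n monitoring port-forward svc/thanos-querier 9090:9090 &

# Query "up" across all Prometheus replicas, with replica deduplication enabled
curl -s 'http://localhost:9090/api/v1/query?query=up&dedup=true'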

Deploy Thanos Store Gateway

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: thanos-store-gateway
  namespace: monitoring
  labels:
    app: thanos-store-gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: thanos-store-gateway
  serviceName: thanos-store-gateway
  template:
    metadata:
      labels:
        app: thanos-store-gateway
        thanos-store-api: "true"
    spec:
      containers:
        - name: thanos
          image: quay.io/thanos/thanos:v0.8.0
          args:
          - "store"
          - "--log.level=debug"
          - "--data-dir=/data"
          - "--objstore.config={type: GCS, config: {bucket: prometheus-long-term}}"
          - "--index-cache-size=500MB"
          - "--chunk-pool-size=500MB"
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /etc/secret/thanos-gcs-credentials.json
          ports:
          - name: http
            containerPort: 10902
          - name: grpc
            containerPort: 10901
          livenessProbe:
            httpGet:
              port: 10902
              path: /-/healthy
          readinessProbe:
            httpGet:
              port: 10902
              path: /-/ready
          volumeMounts:
            - name: thanos-gcs-credentials
              mountPath: /etc/secret
              readOnly: false
      volumes:
        - name: thanos-gcs-credentials
          secret:
            secretName: thanos-gcs-credentials
---

This creates the Store Gateway component, which serves metrics from the object store to the Querier. A simple way to check that blocks are reaching the bucket is sketched below.
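
Once a few 2-hour blocks have been uploaded, you can spot-check the bucket contents (each uploaded TSDB block appears as a ULID-named prefix):

gsutil ls gs://prometheus-long-term/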

Deploy Thanos Ruler

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: thanos-ruler-rules
  namespace: monitoring
data:
  alert_down_services.rules.yaml: |
    groups:
    - name: metamonitoring
      rules:
      - alert: PrometheusReplicaDown
        annotations:
          message: Prometheus replica in cluster {{$labels.cluster}} has disappeared from Prometheus target discovery.
        expr: |
          sum(up{cluster="prometheus-ha", instance=~".*:9090", job="kubernetes-service-endpoints"}) by (job,cluster) < 3
        for: 15s
        labels:
          severity: critical
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  labels:
    app: thanos-ruler
  name: thanos-ruler
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: thanos-ruler
  serviceName: thanos-ruler
  template:
    metadata:
      labels:
        app: thanos-ruler
        thanos-store-api: "true"
    spec:
      containers:
        - name: thanos
          image: quay.io/thanos/thanos:v0.8.0
          args:
            - rule
            - --log.level=debug
            - --data-dir=/data
            - --eval-interval=15s
            - --rule-file=/etc/thanos-ruler/*.rules.yaml
            - --alertmanagers.url=http://alertmanager:9093
            - --query=thanos-querier:9090
            - "--objstore.config={type: GCS, config: {bucket: thanos-ruler}}"
            - --label=ruler_cluster="prometheus-ha"
            - --label=replica="$(POD_NAME)"
          env:
            - name: GOOGLE_APPLICATION_CREDENTIALS
              value: /etc/secret/thanos-gcs-credentials.json
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          ports:
            - name: http
              containerPort: 10902
            - name: grpc
              containerPort: 10901
          livenessProbe:
            httpGet:
              port: http
              path: /-/healthy
          readinessProbe:
            httpGet:
              port: http
              path: /-/ready
          volumeMounts:
            - mountPath: /etc/thanos-ruler
              name: config
            - name: thanos-gcs-credentials
              mountPath: /etc/secret
              readOnly: false
      volumes:
        - configMap:
            name: thanos-ruler-rules
          name: config
        - name: thanos-gcs-credentials
          secret:
            secretName: thanos-gcs-credentials
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: thanos-ruler
  name: thanos-ruler
  namespace: monitoring
spec:
  ports:
    - port: 9090
      protocol: TCP
      targetPort: http
      name: http
  selector:
    app: thanos-ruler

Now, if you open an interactive shell in the same namespace as our workloads and run the following command, you can see which pods are behind the thanos-store-gateway service:

root@my-shell-95cb5df57-4q6w8:/# nslookup thanos-store-gateway
Server:    10.63.240.10
Address:   10.63.240.10#53

Name: thanos-store-gateway.monitoring.svc.cluster.local
Address: 10.60.25.2
Name: thanos-store-gateway.monitoring.svc.cluster.local
Address: 10.60.25.4
Name: thanos-store-gateway.monitoring.svc.cluster.local
Address: 10.60.30.2
Name: thanos-store-gateway.monitoring.svc.cluster.local
Address: 10.60.30.8
Name: thanos-store-gateway.monitoring.svc.cluster.local
Address: 10.60.31.2

root@my-shell-95cb5df57-4q6w8:/# exit

The IPs returned above correspond to our Prometheus pods, thanos-store, and thanos-ruler.

This can be verified with:

$ kubectl get pods -o wide -l thanos-store-api="true"
NAME                     READY   STATUS    RESTARTS   AGE    IP           NODE                              NOMINATED NODE   READINESS GATES
prometheus-0             2/2     Running   0          100m   10.60.31.2   gke-demo-1-pool-1-649cbe02-jdnv   <none>           <none>
prometheus-1             2/2     Running   0          14h    10.60.30.2   gke-demo-1-pool-1-7533d618-kxkd   <none>           <none>
prometheus-2             2/2     Running   0          31h    10.60.25.2   gke-demo-1-pool-1-4e9889dd-27gc   <none>           <none>
thanos-ruler-0           1/1     Running   0          100m   10.60.30.8   gke-demo-1-pool-1-7533d618-kxkd   <none>           <none>
thanos-store-gateway-0   1/1     Running   0          14h    10.60.25.4   gke-demo-1-pool-1-4e9889dd-27gc   <none>           <none>

Deploy Alertmanager

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager
  namespace: monitoring
data:
  config.yml: |-
    global:
      resolve_timeout: 5m
      slack_api_url: "<your_slack_hook>"
      victorops_api_url: "<your_victorops_hook>"

    templates:
    - '/etc/alertmanager-templates/*.tmpl'
    route:
      group_by: ['alertname', 'cluster', 'service']
      group_wait: 10s
      group_interval: 1m
      repeat_interval: 5m
      receiver: default
      routes:
      - match:
          team: devops
        receiver: devops
        continue: true
      - match:
          team: dev
        receiver: dev
        continue: true

    receivers:
    - name: 'default'

    - name: 'devops'
      victorops_configs:
      - api_key: '<YOUR_API_KEY>'
        routing_key: 'devops'
        message_type: 'CRITICAL'
        entity_display_name: '{{ .CommonLabels.alertname }}'
        state_message: 'Alert: {{ .CommonLabels.alertname }}. Summary:{{ .CommonAnnotations.summary }}. RawData: {{ .CommonLabels }}'
      slack_configs:
      - channel: '#k8-alerts'
        send_resolved: true

    - name: 'dev'
      victorops_configs:
      - api_key: '<YOUR_API_KEY>'
        routing_key: 'dev'
        message_type: 'CRITICAL'
        entity_display_name: '{{ .CommonLabels.alertname }}'
        state_message: 'Alert: {{ .CommonLabels.alertname }}. Summary:{{ .CommonAnnotations.summary }}. RawData: {{ .CommonLabels }}'
      slack_configs:
      - channel: '#k8-alerts'
        send_resolved: true

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alertmanager
  template:
    metadata:
      name: alertmanager
      labels:
        app: alertmanager
    spec:
      containers:
      - name: alertmanager
        image: prom/alertmanager:v0.15.3
        args:
          - '--config.file=/etc/alertmanager/config.yml'
          - '--storage.path=/alertmanager'
        ports:
        - name: alertmanager
          containerPort: 9093
        volumeMounts:
        - name: config-volume
          mountPath: /etc/alertmanager
        - name: alertmanager
          mountPath: /alertmanager
      volumes:
      - name: config-volume
        configMap:
          name: alertmanager
      - name: alertmanager
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: '/metrics'
  labels:
    name: alertmanager
  name: alertmanager
  namespace: monitoring
spec:
  selector:
    app: alertmanager
  ports:
  - name: alertmanager
    protocol: TCP
    port: 9093
    targetPort: 9093

This creates our Alertmanager deployment, which will deliver all the alerts generated according to our Prometheus rules. A quick reachability check is sketched below.
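
A quick reachability check against Alertmanager, assuming the v1 HTTP API path for this v0.15.x image:

kubectl -n monitoring port-forward svc/alertmanager 9093:9093 &

# List the alerts currently held by Alertmanager
curl -s http://localhost:9093/api/v1/alerts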

Deploy Kube-state-metrics

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kube-state-metrics
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-state-metrics
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
- apiGroups: [""]
  resources:
  - configmaps
  - secrets
  - nodes
  - pods
  - services
  - resourcequotas
  - replicationcontrollers
  - limitranges
  - persistentvolumeclaims
  - persistentvolumes
  - namespaces
  - endpoints
  verbs: ["list", "watch"]
- apiGroups: ["extensions"]
  resources:
  - daemonsets
  - deployments
  - replicasets
  verbs: ["list", "watch"]
- apiGroups: ["apps"]
  resources:
  - statefulsets
  verbs: ["list", "watch"]
- apiGroups: ["batch"]
  resources:
  - cronjobs
  - jobs
  verbs: ["list", "watch"]
- apiGroups: ["autoscaling"]
  resources:
  - horizontalpodautoscalers
  verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: kube-state-metrics
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kube-state-metrics-resizer
subjects:
- kind: ServiceAccount
  name: kube-state-metrics
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
# kubernetes versions before 1.8.0 should use rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  namespace: monitoring
  name: kube-state-metrics-resizer
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs: ["get"]
- apiGroups: ["extensions"]
  resources:
  - deployments
  resourceNames: ["kube-state-metrics"]
  verbs: ["get", "update"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-state-metrics
  namespace: monitoring
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-state-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      k8s-app: kube-state-metrics
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kube-state-metrics
    spec:
      serviceAccountName: kube-state-metrics
      containers:
      - name: kube-state-metrics
        image: quay.io/mxinden/kube-state-metrics:v1.4.0-gzip.3
        ports:
        - name: http-metrics
          containerPort: 8080
        - name: telemetry
          containerPort: 8081
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          timeoutSeconds: 5
      - name: addon-resizer
        image: k8s.gcr.io/addon-resizer:1.8.3
        resources:
          limits:
            cpu: 150m
            memory: 50Mi
          requests:
            cpu: 150m
            memory: 50Mi
        env:
          - name: MY_POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: MY_POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        command:
          - /pod_nanny
          - --container=kube-state-metrics
          - --cpu=100m
          - --extra-cpu=1m
          - --memory=100Mi
          - --extra-memory=2Mi
          - --threshold=5
          - --deployment=kube-state-metrics
---
apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: monitoring
  labels:
    k8s-app: kube-state-metrics
  annotations:
    prometheus.io/scrape: 'true'
spec:
  ports:
  - name: http-metrics
    port: 8080
    targetPort: http-metrics
    protocol: TCP
  - name: telemetry
    port: 8081
    targetPort: telemetry
    protocol: TCP
  selector:
    k8s-app: kube-state-metrics

Kube-state-metrics is needed to relay some important metrics that are not exposed by the kubelet itself and therefore are not directly available to Prometheus. An example query is sketched below.
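
Once kube-state-metrics is being scraped, its series are queryable through the Thanos Querier like any other metric. For example, the metric behind the "Deployment at 0 Replicas" rule above can be inspected with (querier port-forward as before):

curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(kube_deployment_status_replicas) by (deployment,namespace)'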

Deploy the Node-Exporter DaemonSet

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    name: node-exporter
spec:
  template:
    metadata:
      labels:
        name: node-exporter
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9100"
    spec:
      hostPID: true
      hostIPC: true
      hostNetwork: true
      containers:
        - name: node-exporter
          image: prom/node-exporter:v0.16.0
          securityContext:
            privileged: true
          args:
            - --path.procfs=/host/proc
            - --path.sysfs=/host/sys
          ports:
            - containerPort: 9100
              protocol: TCP
          resources:
            limits:
              cpu: 100m
              memory: 100Mi
            requests:
              cpu: 10m
              memory: 100Mi
          volumeMounts:
            - name: dev
              mountPath: /host/dev
            - name: proc
              mountPath: /host/proc
            - name: sys
              mountPath: /host/sys
            - name: rootfs
              mountPath: /rootfs
      volumes:
        - name: proc
          hostPath:
            path: /proc
        - name: dev
          hostPath:
            path: /dev
        - name: sys
          hostPath:
            path: /sys
        - name: rootfs
          hostPath:
            path: /

Node-Exporter is a DaemonSet that runs a node-exporter pod on every node and exposes very important node-level metrics, which are pulled by the Prometheus instances. A quick verification is sketched below.
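
To confirm the DaemonSet landed on every node and is serving metrics (node-exporter binds to the host network on port 9100):

kubectl -n monitoring get daemonset node-exporter

# From any pod in the cluster, scrape one node directly by its IP
# (replace <node-ip> with an address from `kubectl get nodes -o wide`)
curl -s http://<node-ip>:9100/metrics | head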

Deploy Grafana

apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: fast
  namespace: monitoring
provisioner: kubernetes.io/gce-pd
allowVolumeExpansion: true
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: grafana
  namespace: monitoring
spec:
  replicas: 1
  serviceName: grafana
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: k8s.gcr.io/heapster-grafana-amd64:v5.0.4
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
  volumeClaimTemplates:
  - metadata:
      name: grafana-storage
      namespace: monitoring
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: fast
      resources:
        requests:
          storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: grafana
  name: grafana
  namespace: monitoring
spec:
  ports:
  - port: 3000
    targetPort: 3000
  selector:
    k8s-app: grafana

This creates our Grafana workload and its Service, which will be exposed through our Ingress object.

To add Thanos Querier as the data source for Grafana:

- In Grafana, click Add DataSource.
- Name: DS_PROMETHEUS
- Type: Prometheus
- URL: http://thanos-querier:9090
- Click Save and Test. You can now build your custom dashboards or simply import dashboards. Dashboards #315 and #1471 are a good starting point.

If you prefer to script this step instead of using the UI, see the sketch below.
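
Grafana's HTTP API can also create the data source. A sketch, relying on the anonymous admin access enabled in the manifest above (which you would not keep in production):

kubectl -n monitoring port-forward svc/grafana 3000:3000 &

curl -s -X POST http://localhost:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"DS_PROMETHEUS","type":"prometheus","url":"http://thanos-querier:9090","access":"proxy"}'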

Deploy the Ingress Object

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: grafana.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: grafana
          servicePort: 3000
  - host: prometheus-0.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-0-service
          servicePort: 8080
  - host: prometheus-1.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-1-service
          servicePort: 8080
  - host: prometheus-2.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: prometheus-2-service
          servicePort: 8080
  - host: alertmanager.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: alertmanager
          servicePort: 9093
  - host: thanos-querier.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: thanos-querier
          servicePort: 9090
  - host: thanos-ruler.<yourdomain>.com
    http:
      paths:
      - path: /
        backend:
          serviceName: thanos-ruler
          servicePort: 9090

This exposes all of our services outside the Kubernetes cluster. Remember to replace <yourdomain> with a domain name you control, and point that domain at the Ingress Controller's service. Before DNS is set up, you can test the routing as sketched below.
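
Before DNS for <yourdomain> is in place, you can exercise the Ingress routing by sending requests straight to the ingress controller with a Host header (the controller's external IP below is a placeholder):

# Find the ingress controller's external IP, then:
curl -s -I -H 'Host: thanos-querier.<yourdomain>.com' http://<ingress-controller-ip>/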

You should now be able to access Thanos Querier at http://thanos-querier.<yourdomain>.com.

It will look something like this:

You can select "deduplication" to deduplicate the data.

If you click "Stores", you can see all the active endpoints discovered by the thanos-store-gateway service.

Now add Thanos Querier as the data source in Grafana and start creating dashboards.

Kubernetes cluster monitoring dashboard

Kubernetes node monitoring dashboard

Conclusion

Integrating Thanos with Prometheus undoubtedly provides the ability to scale Prometheus horizontally, and since the Thanos Querier can pull metrics from other Querier instances, you can effectively pull metrics across clusters and visualize them on a single dashboard.

We are also able to archive metric data in an object store, which gives our monitoring system virtually unlimited storage and lets us serve metrics from the object store itself.

However, achieving all of this takes quite a bit of configuration. The manifests provided above have been tested in a production environment. Feel free to reach out if you have any questions.

Translated from: /articles/high-availability-kubernetes-monitoring-using-prom
