01. Overview
Info |
---|
Summary: a guide for installing kube-prometheus-stack (the Prometheus Operator stack) with Helm 3.x.
Dependencies: by default this chart installs additional, dependent charts. |
02. Prerequisites
Info |
---|
A running Kubernetes cluster, with kubectl and Helm 3.x configured against it. |
03. Create Namespace
Code Block |
---|
sansae@win10pro-worksp:$ kubectl create ns monitor-po |
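The same namespace can also be created declaratively. A minimal manifest equivalent to the command above (namespace name taken from this guide):

```yaml
# Equivalent to: kubectl create ns monitor-po
apiVersion: v1
kind: Namespace
metadata:
  name: monitor-po
```

Apply it with `kubectl apply -f namespace.yaml`; unlike `kubectl create ns`, re-applying it is idempotent.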
04. Install Chart of kube-prometheus-stack
Code Block |
---|
sansae@win10pro-worksp:$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" already exists with the same configuration, skipping
sansae@win10pro-worksp:$
sansae@win10pro-worksp:$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
...Successfully got an update from the "elastic" chart repository
...Successfully got an update from the "dynatrace" chart repository
...Successfully got an update from the "prometheus-community" chart repository
Update Complete. ⎈Happy Helming!⎈
sansae@win10pro-worksp:$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitor-po
NAME: kube-prometheus-stack
LAST DEPLOYED: Wed Mar 31 10:30:38 2021
NAMESPACE: monitor-po
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace monitor-po get pods -l "release=kube-prometheus-stack"
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
sansae@win10pro-worksp:$ kubectl get all -n monitor-po
NAME READY STATUS RESTARTS AGE
pod/alertmanager-kube-prometheus-stack-alertmanager-0 2/2 Running 0 91s
pod/kube-prometheus-stack-grafana-6b5c8fd86c-lwcv2 2/2 Running 0 93s
pod/kube-prometheus-stack-kube-state-metrics-7877f4cc7c-b2nnc 1/1 Running 0 93s
pod/kube-prometheus-stack-operator-5859b9c949-4n24x 1/1 Running 0 93s
pod/kube-prometheus-stack-prometheus-node-exporter-5f4pm 1/1 Running 0 93s
pod/kube-prometheus-stack-prometheus-node-exporter-5fbc7 1/1 Running 0 93s
pod/kube-prometheus-stack-prometheus-node-exporter-ggj8c 1/1 Running 0 93s
pod/kube-prometheus-stack-prometheus-node-exporter-h5cfj 1/1 Running 0 93s
pod/kube-prometheus-stack-prometheus-node-exporter-hvpsf 1/1 Running 0 93s
pod/kube-prometheus-stack-prometheus-node-exporter-mbt54 1/1 Running 0 93s
pod/kube-prometheus-stack-prometheus-node-exporter-s5zd9 1/1 Running 0 93s
pod/kube-prometheus-stack-prometheus-node-exporter-v7bsj 1/1 Running 0 93s
pod/kube-prometheus-stack-prometheus-node-exporter-v7sts 1/1 Running 0 93s
pod/kube-prometheus-stack-prometheus-node-exporter-vnmx5 1/1 Running 0 93s
pod/prometheus-kube-prometheus-stack-prometheus-0 2/2 Running 1 91s
NAME                                                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/alertmanager-operated                         ClusterIP   None           <none>        9093/TCP,9094/TCP,9094/UDP   91s
service/kube-prometheus-stack-alertmanager            ClusterIP   10.0.92.245    <none>        9093/TCP                     93s
service/kube-prometheus-stack-grafana                 ClusterIP   10.0.240.51    <none>        80/TCP                       93s
service/kube-prometheus-stack-kube-state-metrics      ClusterIP   10.0.47.252    <none>        8080/TCP                     93s
service/kube-prometheus-stack-operator                ClusterIP   10.0.215.243   <none>        443/TCP                      93s
service/kube-prometheus-stack-prometheus              ClusterIP   10.0.152.193   <none>        9090/TCP                     93s
service/kube-prometheus-stack-prometheus-node-exporter ClusterIP  10.0.216.169   <none>        9100/TCP                     93s
service/prometheus-operated                           ClusterIP   None           <none>        9090/TCP                     91s
NAME                                                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/kube-prometheus-stack-prometheus-node-exporter   10        10        10      10           10          <none>          93s
NAME                                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-prometheus-stack-grafana              1/1     1            1           93s
deployment.apps/kube-prometheus-stack-kube-state-metrics   1/1     1            1           93s
deployment.apps/kube-prometheus-stack-operator             1/1     1            1           93s
NAME                                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/kube-prometheus-stack-grafana-6b5c8fd86c              1         1         1       93s
replicaset.apps/kube-prometheus-stack-kube-state-metrics-7877f4cc7c   1         1         1       93s
replicaset.apps/kube-prometheus-stack-operator-5859b9c949             1         1         1       93s
NAME                                                               READY   AGE
statefulset.apps/alertmanager-kube-prometheus-stack-alertmanager   1/1     91s
statefulset.apps/prometheus-kube-prometheus-stack-prometheus       1/1     91s
sansae@win10pro-worksp:$ |
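Instead of editing the Prometheus and Alertmanager custom resources after installation (as done in section 05 below), the persistence settings can also be supplied at install time through chart values. The keys below follow the kube-prometheus-stack values layout; treat this as a sketch and verify the key names against the chart's values.yaml for your chart version:

```yaml
# custom-values.yaml (sketch; verify keys against the chart's values.yaml)
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: managed-premium
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: managed-premium
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 2Gi
```

Then install with `helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitor-po -f custom-values.yaml`.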
05. Volume Configuration
05-01. Check the StorageClass
- The StorageClasses usable by the Prometheus stack in this cluster are AzureDisk-based, so use 'default' or 'managed-premium'.
Code Block |
---|
sansae@win10pro-worksp:/workspaces$ kubectl get sc
NAME                PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
azurefile           kubernetes.io/azure-file   Delete          Immediate              true                   65d
azurefile-premium   kubernetes.io/azure-file   Delete          Immediate              true                   65d
default (default)   kubernetes.io/azure-disk   Delete          Immediate              true                   65d
managed             kubernetes.io/azure-disk   Delete          WaitForFirstConsumer   true                   30d
managed-premium     kubernetes.io/azure-disk   Delete          Immediate              true                   65d |
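For reference, an AzureDisk-backed StorageClass comparable to the built-in managed-premium looks roughly like this (the name and parameter values here are illustrative, not taken from the cluster above):

```yaml
# Illustrative AzureDisk StorageClass (hypothetical name)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-example
provisioner: kubernetes.io/azure-disk
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
```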
05-02. Prometheus Volume
- Add storageClassName: managed-premium under the Prometheus storage spec.
- Adjust the storage size to fit your project's requirements.
Code Block |
---|
sansae@win10pro-worksp:/workspaces$ kubectl get prometheus -n monitor-po
NAME                               VERSION   REPLICAS   AGE
kube-prometheus-stack-prometheus   v2.24.0   1          6d3h
sansae@win10pro-worksp:/workspaces$ kubectl edit prometheus kube-prometheus-stack-prometheus -n monitor-po
=============================================================
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: managed-premium
=============================================================
prometheus.monitoring.coreos.com/kube-prometheus-stack-prometheus edited
sansae@win10pro-worksp:/workspaces$ kubectl get pvc -n monitor-po
NAME                                                                                           STATUS   VOLUME                                  CAPACITY   ACCESS MODES   STORAGECLASS      AGE
prometheus-kube-prometheus-stack-prometheus-db-prometheus-kube-prometheus-stack-prometheus-0   Bound    pvc-64f88c7f-6fad-4d66-b6eb-xxxxxxxxx   50Gi       RWO            managed-premium   6d2h
sansae@win10pro-worksp:/workspaces$ |
05-03. Alertmanager Volume
- Add storageClassName: managed-premium under the Alertmanager storage spec.
- Adjust the storage size to fit your project's requirements.
Code Block |
---|
sansae@win10pro-worksp:/workspaces$ kubectl get alertmanager -n monitor-po
NAME                                 VERSION   REPLICAS   AGE
kube-prometheus-stack-alertmanager   v0.21.0   1          6d3h
sansae@win10pro-worksp:/workspaces$ kubectl edit alertmanager kube-prometheus-stack-alertmanager -n monitor-po
=============================================================
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: managed-premium
=============================================================
alertmanager.monitoring.coreos.com/kube-prometheus-stack-alertmanager edited
sansae@win10pro-worksp:/workspaces$ kubectl get pvc -n monitor-po
NAME                                                                                                   STATUS   VOLUME                                 CAPACITY   ACCESS MODES   STORAGECLASS      AGE
alertmanager-kube-prometheus-stack-alertmanager-db-alertmanager-kube-prometheus-stack-alertmanager-0   Bound    pvc-5e7b02eb-7109-417d-9c14-xxxxxxxx   2Gi        RWO            managed-premium   6d2h
sansae@win10pro-worksp:/workspaces$ |
05-04. Grafana Volume
- Grafana is not a resource managed by the Prometheus Operator.
- Therefore, create a PVC manually and modify the Deployment to use it.
Code Block |
---|
sansae@win10pro-worksp:/workspaces$ vim grafana-pvc.yaml
=============================================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pv-claim
  labels:
    app: grafana
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: managed-premium
=============================================================
sansae@win10pro-worksp:/workspaces$ kubectl create -f grafana-pvc.yaml
persistentvolumeclaim/grafana-pv-claim created
sansae@win10pro-worksp:/workspaces$ kubectl get deploy -n monitor-po
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
kube-prometheus-stack-grafana   1/1     1            1           6d3h
sansae@win10pro-worksp:/workspaces$ kubectl edit deploy kube-prometheus-stack-grafana -n monitor-po
=============================================================
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-persistent-storage
-------------------------------------------------------
      volumes:
      - name: grafana-persistent-storage
        persistentVolumeClaim:
          claimName: grafana-pv-claim
=============================================================
deployment.apps/kube-prometheus-stack-grafana edited
sansae@win10pro-worksp:/workspaces$ kubectl get pvc -n monitor-po
NAME               STATUS   VOLUME                                 CAPACITY   ACCESS MODES   STORAGECLASS      AGE
grafana-pv-claim   Bound    pvc-a13463fa-ebda-4632-a9c0-xxxxxxxx   1Gi        RWO            managed-premium   6d2h
sansae@win10pro-worksp:/workspaces$ |
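Alternatively, Grafana persistence can be enabled through the chart itself rather than by hand-editing the Deployment. The Grafana subchart exposes persistence values for this (key names follow the Grafana chart; verify against your chart version):

```yaml
# custom-values.yaml fragment (sketch; verify keys against the Grafana subchart)
grafana:
  persistence:
    enabled: true
    storageClassName: managed-premium
    size: 1Gi
```

Apply with `helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitor-po -f custom-values.yaml`; the chart then creates and mounts the PVC itself.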
06. Service Connection
06-01. Prometheus Operated
Code Block |
---|
sansae@win10pro-worksp:$ kubectl get svc -n monitor-po
NAME                                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                            ClusterIP   None           <none>        9093/TCP,9094/TCP,9094/UDP   30m
kube-prometheus-stack-alertmanager               ClusterIP   10.0.92.245    <none>        9093/TCP                     30m
kube-prometheus-stack-grafana                    ClusterIP   10.0.240.51    <none>        80/TCP                       30m
kube-prometheus-stack-kube-state-metrics         ClusterIP   10.0.47.252    <none>        8080/TCP                     30m
kube-prometheus-stack-operator                   ClusterIP   10.0.215.243   <none>        443/TCP                      30m
kube-prometheus-stack-prometheus                 ClusterIP   10.0.152.193   <none>        9090/TCP                     30m
kube-prometheus-stack-prometheus-node-exporter   ClusterIP   10.0.216.169   <none>        9100/TCP                     30m
prometheus-operated                              ClusterIP   None           <none>        9090/TCP                     30m
sansae@win10pro-worksp:$ kubectl port-forward service/prometheus-operated 9090 -n monitor-po
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090 |
06-02. Grafana
Code Block |
---|
sansae@win10pro-worksp:$ kubectl port-forward service/kube-prometheus-stack-grafana 8000:80 -n monitor-po
Forwarding from 127.0.0.1:8000 -> 3000
Forwarding from [::1]:8000 -> 3000
Handling connection for 8000 |
- The default username/password is admin / prom-operator.
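If the password has been customized, it can usually be read back from the chart-managed Grafana secret (the secret name `kube-prometheus-stack-grafana` and key `admin-password` are assumed from the chart defaults; verify them in your cluster). Secret data is base64-encoded, so it has to be decoded:

```shell
# Hypothetical retrieval (requires cluster access, shown for reference):
#   kubectl get secret kube-prometheus-stack-grafana -n monitor-po \
#     -o jsonpath="{.data.admin-password}" | base64 -d

# Secret values are base64-encoded; decoding works like this:
encoded="cHJvbS1vcGVyYXRvcg=="   # base64 of the default password
echo "$encoded" | base64 -d      # prints: prom-operator
```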
06-03. AlertManager
Code Block |
---|
sansae@win10pro-worksp:$ kubectl port-forward service/kube-prometheus-stack-alertmanager 9093:9093 -n monitor-po
Forwarding from 127.0.0.1:9093 -> 9093
Forwarding from [::1]:9093 -> 9093 |
Note: when uninstalling (after `helm uninstall`), the following resources must be deleted in addition to the namespace:
Code Block |
---|
for n in $(kubectl get namespaces -o jsonpath={..metadata.name}); do
kubectl delete --all --namespace=$n prometheus,servicemonitor,podmonitor,alertmanager
done
|
Code Block |
---|
kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd probes.monitoring.coreos.com
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd thanosrulers.monitoring.coreos.com
kubectl delete clusterrole kube-prometheus-stack-grafana-clusterrole
kubectl delete clusterrole kube-prometheus-stack-kube-state-metrics
kubectl delete clusterrole kube-prometheus-stack-operator
kubectl delete clusterrole kube-prometheus-stack-operator-psp
kubectl delete clusterrole kube-prometheus-stack-prometheus
kubectl delete clusterrole kube-prometheus-stack-prometheus-psp
kubectl delete clusterrole psp-kube-prometheus-stack-kube-state-metrics
kubectl delete clusterrole psp-kube-prometheus-stack-prometheus-node-exporter
kubectl delete clusterrolebinding kube-prometheus-stack-grafana-clusterrolebinding
kubectl delete clusterrolebinding kube-prometheus-stack-kube-state-metrics
kubectl delete clusterrolebinding kube-prometheus-stack-operator
kubectl delete clusterrolebinding kube-prometheus-stack-operator-psp
kubectl delete clusterrolebinding kube-prometheus-stack-prometheus
kubectl delete clusterrolebinding kube-prometheus-stack-prometheus-psp
kubectl delete clusterrolebinding psp-kube-prometheus-stack-kube-state-metrics
kubectl delete clusterrolebinding psp-kube-prometheus-stack-prometheus-node-exporter
kubectl delete svc kube-prometheus-stack-coredns -n kube-system
kubectl delete svc kube-prometheus-stack-kube-controller-manager -n kube-system
kubectl delete svc kube-prometheus-stack-kube-etcd -n kube-system
kubectl delete svc kube-prometheus-stack-kube-proxy -n kube-system
kubectl delete svc kube-prometheus-stack-kube-scheduler -n kube-system
kubectl delete svc kube-prometheus-stack-kubelet -n kube-system
kubectl delete svc prometheus-kube-prometheus-kubelet -n kube-system
kubectl delete MutatingWebhookConfiguration kube-prometheus-stack-admission
kubectl delete ValidatingWebhookConfiguration kube-prometheus-stack-admission |
References:
Info |
---|
https://containerjournal.com/topics/container-management/cluster-monitoring-with-prometheus-
https://waspro.tistory.com/588 |