...
Info |
---|
Kubernetes-operator, ServiceMonitor
Summary: Guide for kube-prometheus-stack with Helm 3.x
Dependencies: By default this chart installs additional, dependent charts |
...
- The StorageClass usable by the Prometheus stack is AzureDisk, so either 'default' or 'managed-premium' must be used (see the check below).
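To see which storage classes the cluster actually offers, list them first. The output below is an illustrative sketch for an AKS cluster; on clusters using the CSI driver the PROVISIONER column may read disk.csi.azure.com instead.
Code Block |
---|
sansae@win10pro-worksp:/workspaces$ kubectl get storageclass
NAME                PROVISIONER                AGE
default (default)   kubernetes.io/azure-disk   6d3h
managed-premium     kubernetes.io/azure-disk   6d3h |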
...
Code Block |
---|
sansae@win10pro-worksp:/workspaces$ kubectl get prometheus -n monitor-po
NAME                               VERSION   REPLICAS   AGE
kube-prometheus-stack-prometheus   v2.24.0   1          6d3h
sansae@win10pro-worksp:/workspaces$ kubectl edit prometheus kube-prometheus-stack-prometheus -n monitor-po
=============================================================
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: managed-premium
=============================================================
prometheus.monitoring.coreos.com/kube-prometheus-stack-prometheus edited
sansae@win10pro-worksp:/workspaces$ kubectl get pvc -n monitor-po
NAME                                                                                           STATUS   VOLUME                                  CAPACITY   ACCESS MODES   STORAGECLASS      AGE
prometheus-kube-prometheus-stack-prometheus-db-prometheus-kube-prometheus-stack-prometheus-0   Bound    pvc-64f88c7f-6fad-4d66-b6eb-xxxxxxxxx   50Gi       RWO            managed-premium   6d2h |
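A live kubectl edit works, but the change can be lost the next time the release is upgraded without it. The same storage can instead be declared in the chart values; the key path below follows the kube-prometheus-stack values.yaml (verify it against your chart version), and the file name is hypothetical.
Code Block |
---|
# values-storage.yaml (hypothetical file name)
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: managed-premium
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 50Gi |
Applying the file with helm upgrade -f keeps the setting versioned with the release instead of living only in the cluster.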
05-03. Alertmanager Volume
- Add storageClassName: managed-premium to the Alertmanager storage spec.
- Adjust the storage size to fit the project's requirements.
Code Block |
---|
sansae@win10pro-worksp:/workspaces$ kubectl get alertmanager -n monitor-po
NAME                                 VERSION   REPLICAS   AGE
kube-prometheus-stack-alertmanager   v0.21.0   1          6d3h
sansae@win10pro-worksp:/workspaces$ kubectl edit alertmanager kube-prometheus-stack-alertmanager -n monitor-po
=============================================================
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: managed-premium
=============================================================
alertmanager.monitoring.coreos.com/kube-prometheus-stack-alertmanager edited
sansae@win10pro-worksp:/workspaces$ kubectl get pvc -n monitor-po
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
alertmanager-kube-prometheus-stack-alertmanager-db-alertmanager-kube-prometheus-stack-alertmanager-0 Bound pvc-5e7b02eb-7109-417d-9c14-xxxxxxxx 2Gi RWO managed-premium 6d2h |
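As with Prometheus, the Alertmanager storage can also be kept in the chart values rather than applied as a live edit. Note that the Alertmanager spec uses storage rather than storageSpec; the key path follows the kube-prometheus-stack values.yaml, so verify it against your chart version.
Code Block |
---|
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          storageClassName: managed-premium
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi |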
05-04. Grafana Volume
- Grafana is not a resource managed by the Prometheus operator.
- Therefore, create a PVC manually and edit the Deployment to mount it.
Code Block |
---|
sansae@win10pro-worksp:/workspaces$ vim grafana-pvc.yaml
=============================================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pv-claim
  labels:
    app: grafana
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: managed-premium
=============================================================
sansae@win10pro-worksp:/workspaces$ kubectl create -f grafana-pvc.yaml
persistentvolumeclaim/grafana-pv-claim created
sansae@win10pro-worksp:/workspaces$ kubectl get deploy -n monitor-po
NAME READY UP-TO-DATE AVAILABLE AGE
kube-prometheus-stack-grafana 1/1 1 1 6d3h
sansae@win10pro-worksp:/workspaces$ kubectl edit deploy kube-prometheus-stack-grafana -n monitor-po
=============================================================
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-persistent-storage
-------------------------------------------------------
      volumes:
      - name: grafana-persistent-storage
        persistentVolumeClaim:
          claimName: grafana-pv-claim
=============================================================
deployment.apps/kube-prometheus-stack-grafana edited
sansae@win10pro-worksp:/workspaces$ kubectl get pvc -n monitor-po
NAME               STATUS   VOLUME                                 CAPACITY   ACCESS MODES   STORAGECLASS      AGE
grafana-pv-claim   Bound    pvc-a13463fa-ebda-4632-a9c0-xxxxxxxx   1Gi        RWO            managed-premium   6d2h |
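Alternatively, the bundled Grafana sub-chart can create and mount the PVC itself through its persistence values, which avoids hand-editing the Deployment. The keys below follow the grafana sub-chart's values.yaml; verify them against your chart version.
Code Block |
---|
grafana:
  persistence:
    enabled: true
    storageClassName: managed-premium
    size: 1Gi |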
06. Service Connection
06-01. Prometheus Operated
...
Code Block |
---|
sansae@win10pro-worksp:$ kubectl port-forward service/kube-prometheus-stack-alertmanager 9093:9093 -n monitor-po
Forwarding from 127.0.0.1:9093 -> 9093
Forwarding from [::1]:9093 -> 9093 |
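The Prometheus and Grafana UIs can be reached the same way. The service names below assume the default kube-prometheus-stack release naming and default ports (9090 for Prometheus; the Grafana service listens on 80), so check them with kubectl get svc -n monitor-po first.
Code Block |
---|
sansae@win10pro-worksp:$ kubectl port-forward service/kube-prometheus-stack-prometheus 9090:9090 -n monitor-po
sansae@win10pro-worksp:$ kubectl port-forward service/kube-prometheus-stack-grafana 3000:80 -n monitor-po |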
Note: when removing the stack, the following resources must be deleted in addition to the namespace. Delete the custom-resource instances first so the operator can clean up what it generated, then remove the CRDs and the remaining cluster-scoped objects.
Code Block |
---|
for n in $(kubectl get namespaces -o jsonpath={..metadata.name}); do
kubectl delete --all --namespace=$n prometheus,servicemonitor,podmonitor,alertmanager
done
|
Code Block |
---|
kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd probes.monitoring.coreos.com
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd thanosrulers.monitoring.coreos.com
kubectl delete clusterrole kube-prometheus-stack-grafana-clusterrole
kubectl delete clusterrole kube-prometheus-stack-kube-state-metrics
kubectl delete clusterrole kube-prometheus-stack-operator
kubectl delete clusterrole kube-prometheus-stack-operator-psp
kubectl delete clusterrole kube-prometheus-stack-prometheus
kubectl delete clusterrole kube-prometheus-stack-prometheus-psp
kubectl delete clusterrole psp-kube-prometheus-stack-kube-state-metrics
kubectl delete clusterrole psp-kube-prometheus-stack-prometheus-node-exporter
kubectl delete clusterrolebinding kube-prometheus-stack-grafana-clusterrolebinding
kubectl delete clusterrolebinding kube-prometheus-stack-kube-state-metrics
kubectl delete clusterrolebinding kube-prometheus-stack-operator
kubectl delete clusterrolebinding kube-prometheus-stack-operator-psp
kubectl delete clusterrolebinding kube-prometheus-stack-prometheus
kubectl delete clusterrolebinding kube-prometheus-stack-prometheus-psp
kubectl delete clusterrolebinding psp-kube-prometheus-stack-kube-state-metrics
kubectl delete clusterrolebinding psp-kube-prometheus-stack-prometheus-node-exporter
kubectl delete svc kube-prometheus-stack-coredns -n kube-system
kubectl delete svc kube-prometheus-stack-kube-controller-manager -n kube-system
kubectl delete svc kube-prometheus-stack-kube-etcd -n kube-system
kubectl delete svc kube-prometheus-stack-kube-proxy -n kube-system
kubectl delete svc kube-prometheus-stack-kube-scheduler -n kube-system
kubectl delete svc kube-prometheus-stack-kubelet -n kube-system
kubectl delete svc prometheus-kube-prometheus-kubelet -n kube-system
kubectl delete MutatingWebhookConfiguration kube-prometheus-stack-admission
kubectl delete ValidatingWebhookConfiguration kube-prometheus-stack-admission |
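To confirm nothing was left behind, a quick check such as the following can be run; the grep patterns are illustrative and should match nothing after a clean removal.
Code Block |
---|
kubectl get crd | grep monitoring.coreos.com
kubectl get clusterrole,clusterrolebinding | grep kube-prometheus-stack |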
...
Info |
---|
https://waspro.tistory.com/588 |