Kubernetes-operator
ServiceMonitor
Guide for kube-prometheus-stack with Helm 3.x

Summary
By default, the kube-prometheus-stack chart installs the Prometheus Operator together with additional dependent charts: Grafana, kube-state-metrics, and prometheus-node-exporter. This guide covers installing the stack, adding persistent storage to Prometheus, Alertmanager, and Grafana, accessing each UI with port-forwarding, and fully uninstalling everything afterwards.
Prerequisites
First, create a namespace for the monitoring stack:

sansae@win10pro-worksp:$ kubectl create ns monitor-po
Add the prometheus-community Helm repository, update it, and install the chart into the new namespace:

sansae@win10pro-worksp:$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" already exists with the same configuration, skipping

sansae@win10pro-worksp:$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "ingress-nginx" chart repository
...Successfully got an update from the "elastic" chart repository
...Successfully got an update from the "dynatrace" chart repository
...Successfully got an update from the "prometheus-community" chart repository
Update Complete. ⎈Happy Helming!⎈

sansae@win10pro-worksp:$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitor-po
NAME: kube-prometheus-stack
LAST DEPLOYED: Wed Mar 31 10:30:38 2021
NAMESPACE: monitor-po
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace monitor-po get pods -l "release=kube-prometheus-stack"

Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.

sansae@win10pro-worksp:$ kubectl get all -n monitor-po
NAME                                                             READY   STATUS    RESTARTS   AGE
pod/alertmanager-kube-prometheus-stack-alertmanager-0            2/2     Running   0          91s
pod/kube-prometheus-stack-grafana-6b5c8fd86c-lwcv2               2/2     Running   0          93s
pod/kube-prometheus-stack-kube-state-metrics-7877f4cc7c-b2nnc    1/1     Running   0          93s
pod/kube-prometheus-stack-operator-5859b9c949-4n24x              1/1     Running   0          93s
pod/kube-prometheus-stack-prometheus-node-exporter-5f4pm         1/1     Running   0          93s
pod/kube-prometheus-stack-prometheus-node-exporter-5fbc7         1/1     Running   0          93s
pod/kube-prometheus-stack-prometheus-node-exporter-ggj8c         1/1     Running   0          93s
pod/kube-prometheus-stack-prometheus-node-exporter-h5cfj         1/1     Running   0          93s
pod/kube-prometheus-stack-prometheus-node-exporter-hvpsf         1/1     Running   0          93s
pod/kube-prometheus-stack-prometheus-node-exporter-mbt54         1/1     Running   0          93s
pod/kube-prometheus-stack-prometheus-node-exporter-s5zd9         1/1     Running   0          93s
pod/kube-prometheus-stack-prometheus-node-exporter-v7bsj         1/1     Running   0          93s
pod/kube-prometheus-stack-prometheus-node-exporter-v7sts         1/1     Running   0          93s
pod/kube-prometheus-stack-prometheus-node-exporter-vnmx5         1/1     Running   0          93s
pod/prometheus-kube-prometheus-stack-prometheus-0                2/2     Running   1          91s

NAME                                                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/alertmanager-operated                            ClusterIP   None           <none>        9093/TCP,9094/TCP,9094/UDP   91s
service/kube-prometheus-stack-alertmanager               ClusterIP   10.0.92.245    <none>        9093/TCP                     93s
service/kube-prometheus-stack-grafana                    ClusterIP   10.0.240.51    <none>        80/TCP                       93s
service/kube-prometheus-stack-kube-state-metrics         ClusterIP   10.0.47.252    <none>        8080/TCP                     93s
service/kube-prometheus-stack-operator                   ClusterIP   10.0.215.243   <none>        443/TCP                      93s
service/kube-prometheus-stack-prometheus                 ClusterIP   10.0.152.193   <none>        9090/TCP                     93s
service/kube-prometheus-stack-prometheus-node-exporter   ClusterIP   10.0.216.169   <none>        9100/TCP                     93s
service/prometheus-operated                              ClusterIP   None           <none>        9090/TCP                     91s

NAME                                                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/kube-prometheus-stack-prometheus-node-exporter   10        10        10      10           10          <none>          93s

NAME                                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-prometheus-stack-grafana              1/1     1            1           93s
deployment.apps/kube-prometheus-stack-kube-state-metrics   1/1     1            1           93s
deployment.apps/kube-prometheus-stack-operator             1/1     1            1           93s

NAME                                                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/kube-prometheus-stack-grafana-6b5c8fd86c              1         1         1       93s
replicaset.apps/kube-prometheus-stack-kube-state-metrics-7877f4cc7c   1         1         1       93s
replicaset.apps/kube-prometheus-stack-operator-5859b9c949             1         1         1       93s

NAME                                                               READY   AGE
statefulset.apps/alertmanager-kube-prometheus-stack-alertmanager   1/1     91s
statefulset.apps/prometheus-kube-prometheus-stack-prometheus       1/1     91s
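The Operator also registers a set of CRDs cluster-wide. A quick sanity check (the names match the CRDs removed again in the cleanup section at the end of this post):

sansae@win10pro-worksp:$ kubectl get crd | grep monitoring.coreos.com

This should list alertmanagerconfigs, alertmanagers, podmonitors, probes, prometheuses, prometheusrules, servicemonitors, and thanosrulers.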
Out of the box the stack keeps its data on emptyDir volumes, so everything is lost when a pod restarts. To persist data, check which storage classes the cluster offers (this is AKS, hence the Azure disk/file classes); managed-premium is used for every volume below:

sansae@win10pro-worksp:/workspaces$ kubectl get sc
NAME                PROVISIONER                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
azurefile           kubernetes.io/azure-file   Delete          Immediate              true                   65d
azurefile-premium   kubernetes.io/azure-file   Delete          Immediate              true                   65d
default (default)   kubernetes.io/azure-disk   Delete          Immediate              true                   65d
managed             kubernetes.io/azure-disk   Delete          WaitForFirstConsumer   true                   30d
managed-premium     kubernetes.io/azure-disk   Delete          Immediate              true                   65d
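Before settling on a class, kubectl describe shows its provisioner and parameters in detail:

sansae@win10pro-worksp:/workspaces$ kubectl describe sc managed-premium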
Start with Prometheus itself. Edit its custom resource and add a volumeClaimTemplate under spec; the Operator then provisions a PVC and rolls the StatefulSet:

sansae@win10pro-worksp:/workspaces$ kubectl get prometheus -n monitor-po
NAME                               VERSION   REPLICAS   AGE
kube-prometheus-stack-prometheus   v2.24.0   1          6d3h

sansae@win10pro-worksp:/workspaces$ kubectl edit prometheus kube-prometheus-stack-prometheus -n monitor-po
=============================================================
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 50Gi
        storageClassName: managed-premium
=============================================================
prometheus.monitoring.coreos.com/kube-prometheus-stack-prometheus edited

sansae@win10pro-worksp:/workspaces$ kubectl get pvc -n monitor-po
NAME                                                                                            STATUS   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
prometheus-kube-prometheus-stack-prometheus-db-prometheus-kube-prometheus-stack-prometheus-0    Bound    50Gi       RWO            managed-premium   6d2h
(VOLUME: pvc-64f88c7f-6fad-4d66-b6eb-xxxxxxxxx)
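The same storage could instead be declared in a values file at install/upgrade time, which also survives future helm upgrades. A sketch, assuming the chart's standard prometheus.prometheusSpec.storageSpec key:

=============================================================
# values.yaml (sketch; key names assume kube-prometheus-stack defaults)
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 50Gi
          storageClassName: managed-premium
=============================================================

Applied with: helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitor-po -f values.yaml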
Alertmanager gets the same treatment through its own custom resource:

sansae@win10pro-worksp:/workspaces$ kubectl get alertmanager -n monitor-po
NAME                                 VERSION   REPLICAS   AGE
kube-prometheus-stack-alertmanager   v0.21.0   1          6d3h

sansae@win10pro-worksp:/workspaces$ kubectl edit alertmanager kube-prometheus-stack-alertmanager -n monitor-po
=============================================================
  storage:
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
        storageClassName: managed-premium
=============================================================
alertmanager.monitoring.coreos.com/kube-prometheus-stack-alertmanager edited

sansae@win10pro-worksp:/workspaces$ kubectl get pvc -n monitor-po
NAME                                                                                                    STATUS   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
alertmanager-kube-prometheus-stack-alertmanager-db-alertmanager-kube-prometheus-stack-alertmanager-0    Bound    2Gi        RWO            managed-premium   6d2h
(VOLUME: pvc-5e7b02eb-7109-417d-9c14-xxxxxxxx)
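The Alertmanager storage fits in the same values file; the key here is assumed to be alertmanager.alertmanagerSpec.storage (note: storage, not storageSpec):

=============================================================
alertmanager:
  alertmanagerSpec:
    storage:
      volumeClaimTemplate:
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi
          storageClassName: managed-premium
=============================================================

Apply with the same helm upgrade command as above.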
Grafana is a plain Deployment rather than an Operator-managed resource, so create a PVC by hand and mount it at /var/lib/grafana:

sansae@win10pro-worksp:/workspaces$ vim grafana-pvc.yaml
=============================================================
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pv-claim
  labels:
    app: grafana
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: managed-premium
=============================================================

sansae@win10pro-worksp:/workspaces$ kubectl create -f grafana-pvc.yaml -n monitor-po
persistentvolumeclaim/grafana-pv-claim created

sansae@win10pro-worksp:/workspaces$ kubectl get deploy -n monitor-po
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
kube-prometheus-stack-grafana   1/1     1            1           6d3h

sansae@win10pro-worksp:/workspaces$ kubectl edit deploy kube-prometheus-stack-grafana -n monitor-po
=============================================================
        volumeMounts:
        - mountPath: /var/lib/grafana
          name: grafana-persistent-storage
-------------------------------------------------------
      volumes:
      - name: grafana-persistent-storage
        persistentVolumeClaim:
          claimName: grafana-pv-claim
=============================================================
deployment.apps/kube-prometheus-stack-grafana edited

sansae@win10pro-worksp:/workspaces$ kubectl get pvc -n monitor-po
NAME               STATUS   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
grafana-pv-claim   Bound    1Gi        RWO            managed-premium   6d2h
(VOLUME: pvc-a13463fa-ebda-4632-a9c0-xxxxxxxx)
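Manual edits to the Deployment are reverted by the next helm upgrade, so for a durable setup the bundled Grafana subchart can manage the PVC itself. A sketch, assuming the Grafana chart's persistence values:

=============================================================
grafana:
  persistence:
    enabled: true
    storageClassName: managed-premium
    size: 1Gi
=============================================================

Again applied with helm upgrade ... -f values.yaml as shown earlier.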
With storage in place, reach each UI with kubectl port-forward. First, Prometheus on http://localhost:9090:

sansae@win10pro-worksp:$ kubectl get svc -n monitor-po
NAME                                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                            ClusterIP   None           <none>        9093/TCP,9094/TCP,9094/UDP   30m
kube-prometheus-stack-alertmanager               ClusterIP   10.0.92.245    <none>        9093/TCP                     30m
kube-prometheus-stack-grafana                    ClusterIP   10.0.240.51    <none>        80/TCP                       30m
kube-prometheus-stack-kube-state-metrics         ClusterIP   10.0.47.252    <none>        8080/TCP                     30m
kube-prometheus-stack-operator                   ClusterIP   10.0.215.243   <none>        443/TCP                      30m
kube-prometheus-stack-prometheus                 ClusterIP   10.0.152.193   <none>        9090/TCP                     30m
kube-prometheus-stack-prometheus-node-exporter   ClusterIP   10.0.216.169   <none>        9100/TCP                     30m
prometheus-operated                              ClusterIP   None           <none>        9090/TCP                     30m

sansae@win10pro-worksp:$ kubectl port-forward service/prometheus-operated 9090 -n monitor-po
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
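From a second terminal, Prometheus's built-in health endpoint is a quick way to confirm the forward works (it should answer HTTP 200 with a short "healthy" message):

sansae@win10pro-worksp:$ curl -s http://localhost:9090/-/healthy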
Grafana on http://localhost:8000 (the service listens on 80, the container on 3000):

sansae@win10pro-worksp:$ kubectl port-forward service/kube-prometheus-stack-grafana 8000:80 -n monitor-po
Forwarding from 127.0.0.1:8000 -> 3000
Forwarding from [::1]:8000 -> 3000
Handling connection for 8000
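Grafana's admin password is stored in a Secret created by the chart; the secret name below assumes the release name used in this post (the login user is admin):

sansae@win10pro-worksp:$ kubectl get secret kube-prometheus-stack-grafana -n monitor-po -o jsonpath="{.data.admin-password}" | base64 --decode ; echo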
And Alertmanager on http://localhost:9093:

sansae@win10pro-worksp:$ kubectl port-forward service/kube-prometheus-stack-alertmanager 9093:9093 -n monitor-po
Forwarding from 127.0.0.1:9093 -> 9093
Forwarding from [::1]:9093 -> 9093
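With the stack up, the Operator discovers new scrape targets through ServiceMonitor resources. A minimal sketch for a hypothetical application (my-app, its labels, and the metrics port are all assumptions; the release: kube-prometheus-stack label is what the default serviceMonitorSelector of this release matches on):

=============================================================
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app                       # hypothetical application
  namespace: monitor-po
  labels:
    release: kube-prometheus-stack   # matched by the default selector
spec:
  selector:
    matchLabels:
      app: my-app                    # must match the target Service's labels
  namespaceSelector:
    matchNames:
    - default                        # namespace where the Service runs
  endpoints:
  - port: metrics                    # named Service port exposing /metrics
    interval: 30s
=============================================================

Create it with kubectl apply -f and the target appears under Status > Targets in the Prometheus UI.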
To tear everything down, helm uninstall kube-prometheus-stack -n monitor-po removes the release, but the chart's CRDs and several cluster-scoped objects are left behind by design. Remove them manually:

for n in $(kubectl get namespaces -o jsonpath={..metadata.name}); do
  kubectl delete --all --namespace=$n prometheus,servicemonitor,podmonitor,alertmanager
done

kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd probes.monitoring.coreos.com
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd thanosrulers.monitoring.coreos.com

kubectl delete clusterrole kube-prometheus-stack-grafana-clusterrole
kubectl delete clusterrole kube-prometheus-stack-kube-state-metrics
kubectl delete clusterrole kube-prometheus-stack-operator
kubectl delete clusterrole kube-prometheus-stack-operator-psp
kubectl delete clusterrole kube-prometheus-stack-prometheus
kubectl delete clusterrole kube-prometheus-stack-prometheus-psp
kubectl delete clusterrole psp-kube-prometheus-stack-kube-state-metrics
kubectl delete clusterrole psp-kube-prometheus-stack-prometheus-node-exporter

kubectl delete clusterrolebinding kube-prometheus-stack-grafana-clusterrolebinding
kubectl delete clusterrolebinding kube-prometheus-stack-kube-state-metrics
kubectl delete clusterrolebinding kube-prometheus-stack-operator
kubectl delete clusterrolebinding kube-prometheus-stack-operator-psp
kubectl delete clusterrolebinding kube-prometheus-stack-prometheus
kubectl delete clusterrolebinding kube-prometheus-stack-prometheus-psp
kubectl delete clusterrolebinding psp-kube-prometheus-stack-kube-state-metrics
kubectl delete clusterrolebinding psp-kube-prometheus-stack-prometheus-node-exporter

kubectl delete svc kube-prometheus-stack-coredns -n kube-system
kubectl delete svc kube-prometheus-stack-kube-controller-manager -n kube-system
kubectl delete svc kube-prometheus-stack-kube-etcd -n kube-system
kubectl delete svc kube-prometheus-stack-kube-proxy -n kube-system
kubectl delete svc kube-prometheus-stack-kube-scheduler -n kube-system
kubectl delete svc kube-prometheus-stack-kubelet -n kube-system
kubectl delete svc prometheus-kube-prometheus-kubelet -n kube-system

kubectl delete MutatingWebhookConfiguration kube-prometheus-stack-admission
kubectl delete ValidatingWebhookConfiguration kube-prometheus-stack-admission
Reference: https://waspro.tistory.com/588