Add ServiceMonitor (and PodMonitor) resources instead of JUST prometheus.io/ annotations
#3458
Proposed change
Full disclosure: this was already requested more than three years ago in #2029, but I strongly believe it is (still) a valid idea and a great improvement to this powerful chart for running JupyterHub on Kubernetes.
I'd like to propose adding ServiceMonitor (https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.ServiceMonitor) and potentially also PodMonitor (https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.PodMonitor) resources to integrate the exposed metrics with Kubernetes environments that use the Prometheus Operator (https://github.com/prometheus-operator/prometheus-operator).
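For illustration, a minimal ServiceMonitor for the hub could look roughly like the sketch below. The resource name, label selector, port name, and the label the Prometheus instance selects on are assumptions for illustration, not the chart's actual values (JupyterHub's metrics endpoint itself is `/hub/metrics`):

```yaml
# Sketch only: selector labels, port name, and the "release" label are
# assumptions and would need to match what the chart actually renders.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: jupyterhub-hub
  labels:
    release: prometheus        # assumed label the Prometheus CR selects on
spec:
  selector:
    matchLabels:
      app: jupyterhub          # assumed Service labels
      component: hub
  endpoints:
    - port: hub                # assumed port name on the hub Service
      path: /hub/metrics       # JupyterHub's metrics endpoint
      interval: 30s
```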
While I understand that you don't want to support all sorts of individual configurations, using these Custom Resources to configure Prometheus is extremely common. Just look across the Kubernetes ecosystem: the Prometheus-Community repo ships charts for various exporters that provide them, as do many other applications meant to run on Kubernetes.
The Prometheus Operator has also been adopted by lots of managed solutions, be it the hyperscalers (Amazon EKS, Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), ...) or on-premise platforms (VMware Tanzu Kubernetes Grid (TKG), OpenShift, Rancher).
Alternative options
The current set of annotations, the likes of
prometheus.io/scrape: true
could remain in place, and a new option in values.yaml could simply switch from those to dedicated ServiceMonitor and PodMonitor custom resources.
Who would use this feature?
Everybody using the Prometheus Operator to run their monitoring stack.
(Optional): Suggest a solution
See comment #2029 (comment) by @dmpe, which includes a code snippet.
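As one possible shape (a sketch with assumed key names, not a concrete proposal), the chart could gate the new resources behind a values.yaml flag:

```yaml
# values.yaml — hypothetical key names for illustration only
hub:
  serviceMonitor:
    enabled: true    # render a ServiceMonitor instead of relying on annotations
    interval: 30s    # scrape interval passed through to the endpoint spec
    labels: {}       # extra labels so a Prometheus CR can select the resource
```

The corresponding template would then simply wrap the ServiceMonitor manifest in a `{{ if .Values.hub.serviceMonitor.enabled }} ... {{ end }}` conditional, so users without the Prometheus Operator CRDs installed are unaffected by default.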
I'd gladly push a PR if you broadly agree that (optionally) providing ServiceMonitors and PodMonitors (or even active probes ;-) ) is a good idea.