Originally from the User Slack
@Tom_Pester: For the Scylla monitoring stack, when using the CRD to enable it (more details in thread), is there a way to
- reuse an existing Prometheus installation on the k8s cluster (the CRD spins up a new one)
- have the option to disable the Grafana that it also spins up. We have a central Grafana running on another k8s cluster that will host the Scylla dashboards.
What did I already do?
Thanks for any help. BTW love the CRD that was created for this
apiVersion: scylla.scylladb.com/v1alpha1
kind: ScyllaDBMonitoring
metadata:
  name: scylla-sed
  namespace: scylla-sed
spec:
  type: Platform
  endpointsSelector:
    matchLabels:
      app.kubernetes.io/name: scylla
      scylla-operator.scylladb.com/scylla-service-type: member
      scylla/cluster: scylla-sed
  components:
    prometheus:
      storage:
        volumeClaimTemplate:
          spec:
            resources:
              requests:
                storage: 1Gi
    # grafana:
    #   exposeOptions:
    #     webInterface:
    #       ingress:
    #         ingressClassName: nginx
    #         dnsDomains:
    #         - scylla-sed.slgnt.io
    #         annotations:
    #           nginx.ingress.kubernetes.io/backend-protocol: HTTP
    #           nginx.ingress.kubernetes.io/proxy-body-size: '0'
@Maciej_Zimnoch: so far there’s no support for an external Prometheus. We plan to add it sooner rather than later though.
If you want to disable both Prometheus and Grafana, why is this CRD useful for you? For the ServiceMonitors?
@Tom_Pester: Indeed, for the
• ServiceMonitor
• and the alert rules, which are also very valuable
Come to think of it, is there a way to let scylla monitoring add and maintain its rules to a pre-existing prometheus?
If there is an update of Scylla that fixes some bug in an alert rule, can it even apply this to the pre-existing Prometheus? If not, then spinning up a new Prometheus, even if not resource-friendly, is a cleaner solution.
@Maciej_Zimnoch: I see.
is there a way to let scylla monitoring add and maintain its rules to a pre-existing prometheus?
we create PrometheusRule resources which contain those alerts; whether they are installed in an external Prometheus depends on the operator managing it.
But currently we create those only when we manage Prometheus as well. So in theory you could spawn a little instance of Prometheus using ScyllaDBMonitoring to get those PrometheusRules and ServiceMonitors, and configure your Prometheus to pick them up.
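For reference, “configure your Prometheus to pick them up” could look roughly like the sketch below for a Prometheus managed by the Prometheus Operator. The names, the namespace, and the empty selectors are assumptions; check the labels on the generated objects (e.g. kubectl get servicemonitors,prometheusrules -n scylla-sed --show-labels) before relying on this.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: central-prometheus   # placeholder name
  namespace: monitoring      # placeholder namespace
spec:
  # Watch the namespace where ScyllaDBMonitoring creates its objects.
  serviceMonitorNamespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: scylla-sed
  ruleNamespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: scylla-sed
  # Empty selectors match every ServiceMonitor/PrometheusRule in the
  # selected namespaces; narrow them down with labels if needed.
  serviceMonitorSelector: {}
  ruleSelector: {}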
@Tom_Pester: This is what we ended up doing, if I understand you correctly @Maciej_Zimnoch:
https://github.com/scylladb/scylla-operator/issues/2490#issuecomment-2725600233
^^ Is this also what you had in mind and can you improve on it?
This exercise was a bit daunting for me, as I had to learn about Prometheus-managed alert rules, external Alertmanagers, and k8s operators.
At some point I saw 5 possible solutions and could zoom in on the best one.
So the only thing left for us to do is disable the Grafana and Prometheus that the ScyllaDBMonitoring CR spins up.
Would it be possible to add this little bit of functionality so we can disable them from the CR, rather than needing a patch or hack that deletes them afterwards? This would be of great help!
I believe it wasn’t possible to configure the Prometheus created by ScyllaDBMonitoring to use an external Alertmanager, because the ScyllaDBMonitoring CRD doesn’t expose it. Or is there a way to reach into the underlying objects? In the end we pointed to our pre-existing Prometheus, which we have full control over.
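For a Prometheus you manage yourself with the Prometheus Operator, an external Alertmanager is wired in through spec.alerting on the Prometheus resource; the namespace, service name, and port name below are placeholders:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: central-prometheus   # placeholder
  namespace: monitoring      # placeholder
spec:
  alerting:
    alertmanagers:
    # Point at the Service fronting your Alertmanager (placeholder values).
    - namespace: monitoring
      name: alertmanager-main
      port: web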
@amnon We have disabled the Prometheus and Grafana with a bit of a hack for the moment. Can we influence the ScyllaDBMonitoring CRD in another way so that the pods don’t exist in the first place?
Thanks for this CRD and the existing monitoring solution. It already set us on the right path.
@Maciej_Zimnoch: No, that’s the only option atm
@Tom_Pester: Thanks for confirming, Maciej. Would it be possible to add an enable: true|false key for Prometheus and Grafana?
@Maciej_Zimnoch: it’s in our backlog
@Tom_Pester: Thank you Maciej 