Originally from the User Slack
@serg: Hi everyone!
I have a monitoring stack running in Docker.
But after adding nodes, up to 5 for 2 clusters, I see some slowness and even missing objects in Grafana - for example, only 3 nodes of a cluster show up in the dashboard, and after a refresh I see 3 different ones, or 4, or 2. This happens for many objects in Grafana, and it looks like a lack of resources.
-
On the server running the containers I see about 15% memory usage (out of 32 GB) and about 3-5% CPU load, so the server itself has plenty of resources.
The Prometheus container consumes 2+ GB, so I doubt there are memory limits on the containers. Are there any, and where would they be set?
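One quick way to check whether any limits are actually configured - a minimal sketch, with the container name left as a placeholder to adjust for your stack:

    # Live memory/CPU usage for all running containers
    docker stats --no-stream

    # Explicit limits on a specific container (0 means no limit configured)
    docker inspect --format 'Memory={{.HostConfig.Memory}} NanoCPUs={{.HostConfig.NanoCpus}}' <container-name>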
-
For the ScyllaDB instance used for monitoring I see these startup options (from ps aux):
/usr/bin/scylla --memory 250M --log-to-syslog 0 --log-to-stdout 1 --default-log-level info --network-stack posix --developer-mode=1 --smp 1
But I cannot find where these parameters are written. I’d like to change them - maybe that would help increase performance and rendering speed in Grafana.
Could you help me figure out how to improve monitoring performance?
@avi: The scylla instance serves scylla-manager, not monitoring
@serg: Yes, Avi, you’re right. I have Manager on the same host. Sorry for misleading.
Would you recommend any settings for monitoring many servers/clusters? Maybe the default Docker parameters are not enough?
@avi: I’m not an expert in monitoring; you can try giving it more memory and CPU.
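On the Docker side that could look roughly like the sketch below - the image name and values are illustrative only, and the monitoring stack’s own start script may pass its own flags:

    # Start a container with an explicit memory/CPU allowance;
    # by default Docker does not limit a container's memory or CPU.
    docker run -d --name prometheus --memory=4g --cpus=2 prom/prometheus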
@serg: Thank you. I’ll try.
By the way, it’s still interesting where the scylla parameters are written - in the scylla* service files under /lib/systemd/system/ I haven’t found the 250M memory limit.
Thanks for the help.
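As a pointer on where those flags usually come from on a packaged Scylla install - a hedged sketch, since the exact paths depend on the distro and should be verified:

    # Show the unit file plus any systemd drop-in overrides
    systemctl cat scylla-server

    # The command-line flags (e.g. --memory 250M --smp 1) are typically kept in SCYLLA_ARGS
    grep SCYLLA_ARGS /etc/default/scylla-server 2>/dev/null     # Debian/Ubuntu
    grep SCYLLA_ARGS /etc/sysconfig/scylla-server 2>/dev/null   # RHEL/CentOS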