I have a 3-node Scylla cluster with 8 cores and 32 GB RAM per node.
I am trying to generate data for some testing. While running my data generation scripts, memory usage shot up to 95%, and even after stopping the scripts it stays at 95% with the cluster idle.
I assume the data is being cached somewhere by Scylla.
How can I debug this further, and how can I bring the memory usage down?
If it is indeed being cached, what is the impact of such high memory usage?
ScyllaDB is designed to use as much of each node’s resources, such as disk, memory, and network, as possible.
For RAM, it will allocate most of it up front and split the space internally between the cache, memtables, and other components.
Data held in the cache lets reads be served faster and reduces storage access.
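If you want to see where the memory actually went, one place to look is the node's Prometheus metrics endpoint (by default Scylla exposes it on port 9180). Here is a minimal sketch, assuming the default host/port; exact metric names vary between versions, so it just filters for cache/memtable/memory related lines:

```python
# Rough sketch: dump Scylla's memory- and cache-related metrics from the
# Prometheus endpoint (default port 9180; adjust host/port if you changed it).
import urllib.request

NODE = "127.0.0.1"  # assumption: run against one Scylla node
URL = f"http://{NODE}:9180/metrics"

with urllib.request.urlopen(URL, timeout=5) as resp:
    for raw in resp:
        line = raw.decode().strip()
        if line.startswith("#"):
            continue  # skip HELP/TYPE comment lines
        # Metric names differ between versions, so grep broadly.
        if any(key in line for key in ("cache", "memtable", "memory")):
            print(line)
```

Comparing the cache and memtable figures against total RAM usually shows that the "missing" memory is simply sitting in the cache, which is expected behavior.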
Yes, ScyllaDB has a different architecture that maximizes the node's resources. This is one of the reasons for ScyllaDB's performance.
Why do you want to limit the memory in the first place?
ScyllaDB assumes it’s the only application running on the node; if that’s the case, there is no reason to limit its memory.
If you do choose to limit the memory, see here: Scylla Memory Usage | ScyllaDB Docs
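The usual knob described there is the `--memory` option passed to the scylla process; where to set it (for example via the service's environment file) depends on your install and is covered in the linked page. A toy sketch of the arithmetic, assuming you want to leave some head-room for other processes on the node:

```python
# Toy helper (not from the docs): pick a --memory value that leaves
# head-room for other processes. Check the linked docs for the exact
# place to put the flag on your distribution.

def scylla_memory_arg(total_gib: int, reserve_gib: int = 8) -> str:
    """Return a --memory argument leaving `reserve_gib` GiB for everything else."""
    if reserve_gib >= total_gib:
        raise ValueError("reserve must be smaller than total RAM")
    return f"--memory {total_gib - reserve_gib}G"

print(scylla_memory_arg(32))  # '--memory 24G' for the 32 GB nodes above
```

Again, only do this if you actually run other memory-hungry software on the same nodes; otherwise letting Scylla keep the RAM for its cache is the intended behavior.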