Smallest sensible memory footprint for a small database?

I have a small system with a small number of users that currently runs on Cassandra; I use it mainly for the safety of replication (two instances on two cheap VPSes). There is no more than 300 MB of data in the entire thing, maybe 1 million records at most, and an average of around 60 queries per minute only at peak times. It's not really going to grow much over time (I regularly clean out unnecessary data).

Would it be OK, or inadvisable, to try to switch to ScyllaDB with a 512 MB memory limit (on a 1 GB VPS)? I've been using Cassandra for a while now, and it's in need of a second upgrade, which I never enjoy, so it's time to think about ScyllaDB again.

I see no reason why your workload wouldn't work with ScyllaDB. There is a good chance that all or most of your data would fit into the cache and thus be served quickly.

Question: why would you want to limit memory to 512 MB on a 1 GB VPS? Why not let ScyllaDB use all the memory?

Thanks, that's great! The number was just an estimate. For now I just wanted to check that this basic hypothetical setup wasn't crazy before I started investigating and testing our setup with ScyllaDB in more detail. Perhaps we can allocate more. (We have a simple Rust web app on each server, plus SMTP and IMAP. I just guesstimated what might work.)


Ah, I see. Note that ScyllaDB does not like neighbors much; by default it acts as if it owns the machine it runs on. There are options to tune this. For starters, I recommend partitioning the CPU cores between ScyllaDB and the other applications, making sure ScyllaDB does not have to share CPU cores with other long-running server apps. You can use the --cpuset command-line option for ScyllaDB to restrict it to certain CPU cores. Check the documentation of the other services you wish to run on how to achieve the same, or use systemd (its CPUAffinity= setting) or taskset if they don't have native support for this. A sketch of what this could look like follows below.
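
For illustration, here is what that partitioning might look like on a hypothetical 2-core VPS, with ScyllaDB pinned to core 0 and the other services on core 1 (the core numbers and the `my-rust-web-app` binary name are just placeholders for your own setup):

```
# Pin ScyllaDB to core 0 only
scylla --cpuset 0

# Pin another long-running service to core 1 using taskset
taskset -c 1 my-rust-web-app

# Or, for a systemd-managed service, set the affinity in its unit file:
# [Service]
# CPUAffinity=1
```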

Thank you, that's helpful. I'll make sure to look into those settings.

Another note: as far as I know, the minimum amount of memory we test ScyllaDB with is 256 MB per CPU core. Below this amount of memory, you might hit unexpected problems. A more comfortable amount is 512 MB per CPU core. Just something to keep in mind when planning resource distribution.
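
For example, a single-core setup that stays within those numbers could be started along these lines (a sketch only, not a tuned recommendation; verify the flags against the ScyllaDB docs for your version):

```
# One CPU core and 512 MB of memory for ScyllaDB
scylla --smp 1 --memory 512M
```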

Don't forget to configure swap on your machines; otherwise the kernel will kill ScyllaDB as the largest memory consumer if other services accidentally consume all remaining memory.
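
If a machine has no swap yet, a small swap file can be created along these lines (the 1 GB size is just an assumption; size it for your VPS):

```
# Create and enable a 1 GB swap file
fallocate -l 1G /swapfile
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Make it persistent across reboots
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```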
