Originally from the User Slack
@Nguyen_Huu_Trung_Ha: Dear experts, how much data can a ScyllaDB node handle?
@dor: There is no hard limit; it depends on the rest of the node's resources. On a machine like an i3en.metal, we can utilize all of its 60 TB.
It's best not to exceed a 1:100 RAM:disk ratio.
@Nguyen_Huu_Trung_Ha: Thank you for your reply @dor.
I have another question: what is the recommended data size a node can handle well, so that operations such as decommissioning, adding nodes, compaction, and cleanup complete in acceptable time?
@dor: It’s primarily a function of your compaction strategy. We recommend keeping about 30% of the disk free for all of those operations.
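To make that 30% headroom concrete, here is a minimal back-of-the-envelope sketch; it is not an official sizing tool, and the 20 TB disk figure is just an assumed example taken from the question below:

```python
# Rough sizing sketch: how much data a node can comfortably hold if
# ~30% of the disk is kept free for compaction, cleanup, streaming, etc.
# The 20 TB figure is only an example, not a recommendation.

disk_tb = 20.0           # raw disk capacity per node (example value)
free_space_ratio = 0.30  # headroom recommended for compaction and node operations

usable_data_tb = disk_tb * (1 - free_space_ratio)
print(f"Target data per node: ~{usable_data_tb:.1f} TB of a {disk_tb:.0f} TB disk")
# Target data per node: ~14.0 TB of a 20 TB disk
```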
@Nguyen_Huu_Trung_Ha: For archiving purposes with rare access, can we use machines with less CPU and RAM but large storage, for example 12 CPU cores, 128 GB RAM, and 20-60 TB of storage?
@dor: The CPUs are less of a problem; it’s the RAM:disk ratio that matters, and in the end it depends on the access pattern. Right now the recommendation is 1:100, and we are working to raise it this year to even more than you need.
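As a purely illustrative check, the proposed archival spec can be compared against the 1:100 RAM:disk rule of thumb like this; the figures come from the question above, not from a ScyllaDB sizing guide:

```python
# Check a proposed node spec against the 1:100 RAM:disk rule of thumb.
# Specs taken from the question above; adjust to your own hardware.

ram_gb = 128       # RAM per node
disk_tb = 60.0     # proposed storage per node (upper end of the 20-60 TB range)
max_ratio = 100    # recommended ceiling: 1 GB of RAM per 100 GB of disk

max_disk_tb = ram_gb * max_ratio / 1000     # 12.8 TB
actual_ratio = (disk_tb * 1000) / ram_gb    # GB of disk per GB of RAM

print(f"Max disk at 1:{max_ratio}: {max_disk_tb:.1f} TB")
print(f"Proposed ratio: 1:{actual_ratio:.0f} -> "
      f"{'within' if actual_ratio <= max_ratio else 'exceeds'} the recommendation")
# Max disk at 1:100: 12.8 TB
# Proposed ratio: 1:469 -> exceeds the recommendation
```

In other words, 128 GB of RAM at a 1:100 ratio covers roughly 12.8 TB of disk, so 60 TB per node would be well beyond the current recommendation unless that ratio is raised as mentioned above.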