I think I might be seeing this bug in my test bed. I am running the 4.6.5 release, and the number of open file descriptors for scylla reads/writes keeps increasing. I am using STCS and I see a lot of small sstables being created. Over the period of a week the open file descriptor count keeps growing, so much so that I get a "Too many open files" error, and eventually scylla just stops accepting any client requests. Is anyone else encountering this on the 4.6 release? Or do you recommend upgrading to the 5.0 release? I don't see a point in upgrading since the attached bug is still open. Any suggestions?
Hello Ken, re #8170, how is resharding involved in your case? Did you configure STCS with non-default values? What is min_threshold set to?
Also, how many files are we talking about?
It might be that you just need to raise the limit to accommodate scylla's needs.
The reason is that, for performance, scylla keeps two files open per SSTable: one for the data component and one for the index component.
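A rough way to check how many file descriptors a process is actually holding is to count the entries under `/proc/<pid>/fd`. This is a sketch, assuming a Linux host; it defaults to the current shell's PID so it runs anywhere — in practice you would substitute scylla's PID (e.g. from `pgrep -x scylla`):

```shell
# Count the file descriptors a process currently holds open, via /proc.
# PID defaults to the current shell; pass scylla's PID as the first argument.
pid="${1:-$$}"
fd_count=$(ls "/proc/$pid/fd" | wc -l)
echo "open fds for pid $pid: $fd_count"
```

If this number tracks roughly twice your SSTable count, the fds are going where expected; if it keeps growing while the SSTable count does not, something is leaking.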
I am using default values with STCS. I didn't change the min_threshold value or set it in scylla.yaml.
I increased the ulimit to 1.6 million, but after a few days I see scylla's open file count reach that limit. I'm on 4.6.11.
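Note that the limit that matters is the one in effect for the running scylla process, which may differ from your shell's `ulimit` (for example, a systemd unit's `LimitNOFILE` overrides it). A sketch for checking the effective limit via `/proc`, again defaulting to the current shell's PID for illustration:

```shell
# Show the effective open-file limit of a running process.
# Defaults to the current shell; pass scylla's PID as the first argument.
pid="${1:-$$}"
limit_line=$(grep 'Max open files' "/proc/$pid/limits")
echo "$limit_line"
```

Comparing this line against the fd count over time tells you whether you are genuinely approaching the limit the process sees.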
I don’t understand this behavior
For the last couple of days I have stopped running repairs, and the situation seems to be under control: I no longer see scylla's number of open file descriptors growing. So it looks like running nodetool repair manually makes the situation worse.
Do we need to run nodetool repair? Or does the system take care of keeping the data consistent across replicas?