Originally from the User Slack
@Mahdi_Kamali: Do we have Zstd compression in ScyllaDB now?
The only page I found in the docs that covers compression options is this one, but Zstd isn't mentioned:
Data Definition | ScyllaDB Docs
@avi: It's supported, I'll fix the documentation
@Mahdi_Kamali: Thanks @avi.
Do you recommend changing a table's compression from LZ4 to Zstd (it holds a lot of blob content, byte arrays)? My goal is to decrease disk space usage.
Is there anything I should consider?
After changing this property, should I run a compaction, upgrade SSTables, etc.?
@avi: It's worth trying. Also consider increasing chunk_size_in_kb (it will make reads slower)
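For reference, a minimal sketch of that change in CQL, assuming a hypothetical ks.blobs table; note that the CQL option avi refers to is spelled chunk_length_in_kb, and ZstdCompressor also accepts a compression_level option in recent versions (check your version's docs):

```cql
-- Sketch only: ks.blobs is a placeholder table name, values are illustrative.
-- Switch SSTable compression to Zstd with a larger chunk size.
ALTER TABLE ks.blobs
WITH compression = {'sstable_compression': 'ZstdCompressor',
                    'chunk_length_in_kb': 64};
```

As the question above hints, ALTER TABLE only changes how newly written SSTables are compressed; existing SSTables are re-compressed as they are compacted, so to reclaim space sooner you would typically rewrite them (e.g., with nodetool upgradesstables, if your version supports it).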
@Yequ_Sun: I did some tests on a sample dataset. With the default options, disk space usage is around 104 MB. Switching to lz4 reduces it to 75 MB. With lz4 and a 128 kB chunk size it is 16.61 MB.
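For comparison, the 128 kB LZ4 variant from that test would look roughly like this (same placeholder table as above):

```cql
-- Sketch of the larger-chunk LZ4 variant measured above (placeholder table).
ALTER TABLE ks.blobs
WITH compression = {'sstable_compression': 'LZ4Compressor',
                    'chunk_length_in_kb': 128};
```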
@avi: A large chunk size will severely impact reads that miss the cache, since each such read has to fetch and decompress a whole chunk.