High Payload Size issue

Installation details
#ScyllaDB version: 6.1.2-0.20240915.b60f9ef4c223
#Cluster size: 3 nodes
#OS (RHEL/CentOS/Ubuntu/AWS AMI): Ubuntu

Our payload size is around 3-5 KB.

Is there anything we could do to lower the payload size? What potential issues could it cause? Could it cause memory problems? We saw our older cluster repeatedly hit this error:

Mar 19 21:57:47 scylla-node-01 scylla[6060]:  [shard 12:main] table - failed to write sstable /var/lib/scylla/data/<keyspace>/logs-44fb69b0ca2611efb3b9235c7d5cb03f/me-3gop_1p0b_0rv342wofndsn54rc1-big-Data.db: logalloc::bad_alloc (failed to refill emergency reserve of 30 (have 26 free segments))
Mar 19 21:57:47 scylla-node-01 scylla[6060]:  [shard 12:main] table - Memtable flush failed due to: logalloc::bad_alloc (failed to refill emergency reserve of 30 (have 26 free segments)). Will retry in 10000ms
Mar 19 21:57:47 scylla-node-01 scylla[6060]:  [shard 12:main] table - failed to write sstable /var/lib/scylla/data/<keyspace>/log_details-44619ab0ca2611efb3b9235c7d5cb03f/me-3gop_1p0b_5jbxc2wofndsn54rc1-big-Data.db: logalloc::bad_alloc (failed to refill emergency reserve of 30 (have 26 free segments))
Mar 19 21:57:47 scylla-node-01 scylla[6060]:  [shard 12:main] table - Memtable flush failed due to: logalloc::bad_alloc (failed to refill emergency reserve of 30 (have 26 free segments)). Will retry in 10000ms
Mar 19 21:57:49 scylla-node-01 scylla[6060]:  [shard 12:main] table - failed to w

The memory allocation failures are unrelated to the payload size. Rows, or even individual cells, of 3-5 KiB are a perfectly reasonable size.

The memory errors are caused by something else; I cannot tell what just from these logs.
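For reference, one way to sanity-check that rows really are in the few-KiB range is a rough client-side estimate of the serialized bytes per row. This is a minimal sketch: the schema and column names are hypothetical, and it ignores CQL protocol overhead such as per-value length prefixes, so treat the numbers as approximate.

```python
# Rough client-side estimate of a row's payload size. Assumed sizes for
# fixed-width CQL types; text/blob columns contribute their encoded length.
FIXED_SIZES = {"int": 4, "bigint": 8, "double": 8, "timestamp": 8, "uuid": 16}

def estimate_row_bytes(row: dict, schema: dict) -> int:
    """Approximate serialized size of one row in bytes."""
    total = 0
    for col, ctype in schema.items():
        value = row.get(col)
        if value is None:
            continue
        if ctype == "text":
            total += len(str(value).encode("utf-8"))
        elif ctype == "blob":
            total += len(value)
        else:
            total += FIXED_SIZES.get(ctype, 8)
    return total

# Hypothetical log row with a ~3 KB message body.
schema = {"id": "uuid", "ts": "timestamp", "level": "text", "message": "text"}
row = {"id": None, "ts": 0, "level": "INFO", "message": "x" * 3000}
row["id"] = "00000000-0000-0000-0000-000000000000"
size = estimate_row_bytes(row, schema)
assert 3000 < size < 4096  # well within the "reasonable" few-KiB range
```

A row landing in this range is normal; if the estimate were in the hundreds of KiB or MiB per cell, that would be worth redesigning, but 3-5 KiB is not.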
