cqlsh keeps reporting timeout errors when querying empty tables

I ran into a very strange phenomenon during testing: even a query against an empty table takes a long time and eventually times out. I have already run a compaction on this table.

]# du -sh /var/lib/scylla/data/alternator_test_5
0       /var/lib/scylla/data/alternator_test_5

]# nodetool flush alternator_test_5
]# nodetool compact alternator_test_5
]# ll
total 0
drwxr-xr-x 2 root root 10 Jul  5 02:32 staging
drwxr-xr-x 2 root root 10 Jul  5 02:32 upload

cqlsh> SELECT * FROM alternator_test_5.test_5 LIMIT 1;
ReadTimeout: Error from server: code=1200 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out for alternator_test_5.test_5 - received only 0 responses from 1 CL=ONE." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'ONE'}

There are other tables in the cluster with a large number of records, but that should not affect queries against an empty table.

Please provide as many details as possible so people can help you. This includes things like the ScyllaDB version, hardware, OS, and the data model you’re using.

This is a known issue in ScyllaDB: scans of sparse (nearly empty) tables often time out.
The reason lies in the vnode architecture. With the default num_tokens of 256, a cluster has number_of_nodes * 256 vnodes (token ranges).

When scanning a table, each one of these vnode ranges has to be read individually, in ring order, to cover the entire content of the table.
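As a quick sanity check (assuming the keyspace name from your report), nodetool can list every one of these ranges for a keyspace:

]# nodetool describering alternator_test_5

Each TokenRange entry in its output corresponds to one vnode range the scan has to walk, so on a cluster with the default 256 tokens per node you should see roughly number_of_nodes * 256 of them.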

With a dense table that has a lot of data, the query quickly fills up a page and returns to the client; the page may be filled after reading only 1-2 vnodes.

With a sparse table, it may take hundreds or thousands of vnodes to fill a page. In the meantime, ScyllaDB does not return anything to the client, because it tries to fill the page first, so the query ends up timing out.

This can be worked around by increasing range_request_timeout_in_ms in the ScyllaDB configuration (scylla.yaml), and increasing the timeout for these queries on the client side as well.
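A minimal sketch of the workaround, assuming the default config path /etc/scylla/scylla.yaml and a systemd installation; the timeout values below are illustrative, not recommendations:

# /etc/scylla/scylla.yaml -- timeout for range (scan) reads, in milliseconds
range_request_timeout_in_ms: 60000

]# systemctl restart scylla-server        # the config change takes effect after a restart

]# cqlsh --request-timeout=120            # cqlsh client-side timeout, in seconds
cqlsh> SELECT * FROM alternator_test_5.test_5 LIMIT 1;

If you query through a driver instead of cqlsh, raise that driver's per-request timeout for these scans in the same way.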