Memory resources: timed out, dumping permit diagnostics

Hi, I am seeing a lot of these timeouts in my cluster. We have allocated Scylla 350 GB of memory out of 376 GB total. Scylla version is 5.2.16.

[shard 28] reader_concurrency_semaphore - (rate limiting dropped 3 similar messages) Semaphore _read_concurrency_sem with 95/100 count and 2548270/104354283 memory resources: timed out, dumping permit diagnostics:
    permits  count  memory  table/description/state
    94       94     1M      system.local/shard-reader/inactive
    1        1      597K    abc.xyz/data-query/active/used
    133      0      388K    system.local/multishard-mutation-query/active/unused
    10       0      0B      efg.hij/data-query/waiting
    2        0      0B      klm.nop/data-query/waiting
    9        0      0B      qrs.tuv/data-query/waiting
    6        0      0B      wxy.z12/data-query/waiting
    550      0      0B      345.abc/data-query/waiting
    483      0      0B      system.local/shard-reader/waiting
    101      0      0B      abc.123/data-query/waiting
    1        0      0B      system.peers/shard-reader/waiting
    1091     0      0B      system.local/shard-reader/evicted

    2481     95     2489K   total

    Total: 2481 permits with 95 count and 2489K memory resources

Looks like something is scanning system.local many times. What driver are you using? Are you connecting a lot of new clients?
This table is read by drivers upon connecting to the Scylla node, and we have patched them to use a single-partition query instead of a full scan, because the full scan is much more expensive. If you are using an out-of-date driver, updating it might solve this issue.
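
To illustrate the difference, here is a rough sketch (assuming the shaded artifact still exposes the standard com.datastax.oss.driver API packages; the class name is just for the example):

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;

public class SystemLocalProbe {
    public static void main(String[] args) {
        // Connects to 127.0.0.1:9042 with the driver defaults.
        try (CqlSession session = CqlSession.builder().build()) {
            // Single-partition read: system.local holds one row, keyed by key = 'local'.
            // This is the cheap form that up-to-date drivers use to fetch node metadata.
            Row row = session
                    .execute("SELECT cluster_name, release_version FROM system.local WHERE key = 'local'")
                    .one();
            System.out.printf("cluster=%s, version=%s%n",
                    row.getString("cluster_name"), row.getString("release_version"));

            // Older drivers issue the unrestricted form instead, which becomes a
            // full scan and shows up as system.local/shard-reader permits in the dump:
            //   SELECT * FROM system.local
        }
    }
}
```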

We are using the com.scylladb:java-driver-core-shaded:4.17.0.0 driver.

We have 250 clients, but they connect only once.
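
Each client creates a single session at startup and reuses it everywhere; simplified, it looks roughly like this (class name shortened for the example):

```java
import com.datastax.oss.driver.api.core.CqlSession;

public final class SessionHolder {
    // One CqlSession per client process, created once at startup and reused.
    // We never build sessions per request, since every new session re-reads
    // system.local and system.peers as part of connecting.
    private static final CqlSession SESSION = CqlSession.builder().build();

    private SessionHolder() {}

    public static CqlSession get() {
        return SESSION;
    }
}
```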

This is the toppartitions output from one of the nodes.

READS Sampler:
  Cardinality: ~256 (256 capacity)
  Top 10 partitions:
    Partition                              Count    +/-
    (system_auth:roles) user1              18161    115
    (key_space1:table1) user2              15141     63
    (system_auth:role_permissions) user1    6056    115
    (system_auth:roles) user3               5493      8
    (system_auth:roles) user4               5097     38
    (system_auth:roles) user5               4810     77
    (system_auth:role_permissions) user3    1844      8
    (system_auth:role_permissions) user5    1770     77
    (system_auth:role_permissions) user4    1703     38
    (system_auth:roles) user6                851    117

Could you check in monitoring? In the "Scylla CQL" dashboard we have the "Client CQL new connections by Instance" and "Client CQL connections by Instance" panels. Maybe your driver drops and reconnects for some reason?
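
If you want to cross-check reconnects from the client side as well, something along these lines should work with the 4.x driver (a sketch, again assuming the shaded jar keeps the standard com.datastax.oss.driver API packages):

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.metadata.Node;
import com.datastax.oss.driver.api.core.metadata.NodeStateListener;

public class ReconnectLogger implements NodeStateListener {
    @Override public void onAdd(Node node)    { System.out.println("node added:   " + node); }
    @Override public void onUp(Node node)     { System.out.println("node up:      " + node); }
    @Override public void onDown(Node node)   { System.out.println("node down:    " + node); }
    @Override public void onRemove(Node node) { System.out.println("node removed: " + node); }
    @Override public void close()             { /* nothing to release */ }

    public static CqlSession connect() {
        // Register the listener when building the session; every node state
        // change the driver observes is printed, so reconnect storms can be
        // correlated with the dashboard panels above.
        return CqlSession.builder()
                .withNodeStateListener(new ReconnectLogger())
                .build();
    }
}
```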