Originally from the User Slack
@Varun_Nagrare: Hi all, I’ve recently been facing a consistency error while reading data with Spark. I repaired my 3-node cluster with replication factor 3, but it’s still giving the same error. How can I avoid it? Please help!
@dor: What’s the error msg?
@Varun_Nagrare: @dor Below is the error I’m getting:
com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded)
Why would this occur even after repairing the cluster?
@dor: It’s not a consistency error but a timeout. If the repair already finished, it could be that it brought in more SSTables and the nodes now need to consolidate them. It depends on what the machine is doing right now.
You can run slow query tracing and see how many SSTables are read for this query
@Varun_Nagrare: Can you tell me how to do slow query tracing?
@dor: it’s a cql feature, described in the docs
@Varun_Nagrare: Also when checking the syslog I found out the below:
scylla: [shard 0] mutation_partition - Memory usage of unpaged query exceeds soft limit of 1048576 (configured via max_memory_for_unlimited_query_soft_limit)
@dor: That can certainly help explain it; unpaged queries aren’t good
There is an Advisor screen in Grafana; check how many of your queries are unpaged
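(As an aside: if reads go through the Python driver directly, paging is controlled by the statement’s fetch_size. A minimal sketch, again assuming a hypothetical ks.t table, that keeps a large scan paged:

from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# fetch_size caps the rows returned per page, so the server never has
# to materialize the whole result set in one unpaged response
stmt = SimpleStatement("SELECT * FROM ks.t", fetch_size=1000)

for row in session.execute(stmt):  # further pages are fetched transparently
    print(row)
)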
@Varun_Nagrare: I’m reading the data using PySpark. How do I find out if a query is unpaged? And since I don’t have Grafana installed for monitoring, is there any alternative?
@dor: It’s time to start reading the docs and run Grafana…
@Varun_Nagrare: Yes I know
. Just wanted to solve this asap. Anyways thanks for the help. I’ll go through the docs.
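(For readers hitting the same thing from Spark: the Spark Cassandra Connector pages its reads itself, and the page size is tunable via spark.cassandra.input.fetch.size_in_rows. A minimal PySpark sketch, assuming the connector is on the classpath; the host, keyspace, table, and fetch size are placeholders:

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("scylla-read")
         .config("spark.cassandra.connection.host", "127.0.0.1")
         .config("spark.cassandra.input.fetch.size_in_rows", "500")
         .getOrCreate())

df = (spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(keyspace="ks", table="t")
      .load())

df.show()

Smaller pages trade more round trips for lower per-request memory on the server, which is one lever to try when reads time out under load.)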