How to verify that a SELECT is using SERIAL consistency

If I have a gocql query like:

```go
err = tx.session.Query(
	fmt.Sprintf(`select col, ts, val from "%s" where key = ? and ts > ? and col = 'w' order by ts asc limit 1`, tx.table),
	key, tx.readTime.UnixNano(),
).SerialConsistency(tx.serialConsistency).Scan(&col, &ts, &val)
```

but the docs claim:

```go
// SerialConsistency sets the consistency level for the
// serial phase of conditional updates. That consistency can only be
// either SERIAL or LOCAL_SERIAL and if not present, it defaults to
// SERIAL. This option will be ignored for anything else that a
// conditional update/insert.
```

That sounds to me like it won't apply that consistency level to a SELECT (it will be ignored). Is there any way to debug this so I can verify that the SELECT used SERIAL consistency rather than a non-Paxos consistency level?

A follow-up question: does .SerialConsistency apply to SELECT statements in gocql at all?

It does not seem like this is simple to verify.

You need to set Consistency (not SerialConsistency) to SERIAL to run a SELECT with SERIAL consistency. You can verify it with the LWT metrics in the dashboards and in the REST API: all cas_* metrics will bump once you execute a SELECT with SERIAL consistency, because it runs a full-blown Paxos round with an EMPTY mutation. Essentially, a SELECT in SERIAL mode is a write.
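
A minimal sketch of what that looks like with gocql. Upstream gocql defines Serial and LocalSerial on the separate SerialConsistency type rather than as Consistency constants, so this sketch casts the raw protocol value onto Consistency; the keyspace, table, and bind values are placeholders:

```go
package main

import (
	"log"

	"github.com/gocql/gocql"
)

func main() {
	cluster := gocql.NewCluster("127.0.0.1") // placeholder contact point
	cluster.Keyspace = "my_keyspace"         // placeholder keyspace
	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// gocql.Serial has type gocql.SerialConsistency, but its wire value
	// (0x08) is the one the native protocol uses for the SERIAL consistency
	// level, so casting it onto gocql.Consistency makes SERIAL the read
	// consistency of the statement itself rather than the serial phase of
	// a conditional update.
	serialCL := gocql.Consistency(gocql.Serial)

	var col string
	var ts int64
	var val string
	err = session.Query(
		// "my_table", "some-key", and the timestamp are placeholders.
		`select col, ts, val from my_table where key = ? and ts > ? and col = 'w' order by ts asc limit 1`,
		"some-key", int64(0),
	).Consistency(serialCL).Scan(&col, &ts, &val)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(col, ts, val)
}
```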
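
To confirm the Paxos round actually happened, one option beyond the Grafana dashboards is to scrape the node's metrics before and after the SELECT and watch the cas_* counters move. A hedged sketch, assuming a ScyllaDB node exposing Prometheus metrics on the default port 9180; the host address and the substring filter are assumptions:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"net/http"
	"strings"
)

// dumpCASMetrics is a hypothetical helper: it scrapes a node's Prometheus
// metrics endpoint and prints every cas_* sample line. Run it before and
// after the SELECT; the counters should move if the read went through Paxos.
func dumpCASMetrics(node string) {
	resp, err := http.Get(fmt.Sprintf("http://%s:9180/metrics", node))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		// Skip Prometheus HELP/TYPE comment lines, keep cas_* samples.
		if !strings.HasPrefix(line, "#") && strings.Contains(line, "cas_") {
			fmt.Println(line)
		}
	}
	if err := scanner.Err(); err != nil {
		log.Fatal(err)
	}
}

func main() {
	dumpCASMetrics("127.0.0.1") // placeholder node address
}
```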
