How many records are in system_distributed.cdc_streams_descriptions_v2?

I use a one-DC, three-node cluster. num_tokens is 256 and smp is 2.
This cluster has 768 vnodes; I checked it with nodetool ring.

cqlsh> select * from system_distributed.cdc_generation_timestamps ;

 key        | time                            | expired
------------+---------------------------------+---------
 timestamps | 2024-03-13 12:16:31.242000+0000 |    null
 timestamps | 2024-03-13 12:16:31.170000+0000 |    null
 timestamps | 2024-03-13 12:14:49.147000+0000 |    null

(3 rows)
cqlsh> select count(1) from system_distributed.cdc_streams_descriptions_v2 where time='2024-03-13 12:16:31.242';

 count
-------
   512

(1 rows)

Why 512?
Another thing:

root@a87e47faf412:/# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address          Load       Tokens       Owns    Host ID                               Rack
UN  444 KB     256          ?       0f5e0d2c-e374-41a1-8fb7-a50ebaf4c1bd  rack1
UN  852 KB     256          ?       1ce8ee31-bb42-42e2-b4e6-bfa86c4d5d92  rack1
UN  780 KB     256          ?       1f8e4851-9a13-4086-ad31-6e72492f9abb  rack1

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
root@a87e47faf412:/# nodetool ring|grep Normal|wc -l
768
root@a87e47faf412:/# scylla --version

The last two timestamps, which are less than one second apart, indicate that you did not follow the correct node bootstrap procedure. You bootstrapped the second and third nodes concurrently, which is something that Scylla does not correctly support at the moment (it will only be supported in 6.0, with "Raft-based topology").
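This would also explain the observed count. A back-of-the-envelope check, under two assumptions not confirmed by the output above (each row in cdc_streams_descriptions_v2 corresponds to one token range, and the generation at 12:16:31.242 was computed while only two of the three nodes were in the ring):

```python
# Assumptions: one row per token range, and the generation was computed
# while only 2 of the 3 nodes (256 vnodes each) had joined the ring.
num_tokens = 256                # num_tokens from the cluster config
nodes_at_generation = 2         # assumed: third node not yet in the ring
rows_in_generation = nodes_at_generation * num_tokens
print(rows_in_generation)       # 512 -- matches the observed count

expected_after_fix = 3 * num_tokens
print(expected_after_fix)       # 768 -- one row per vnode with all 3 nodes
```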

The documented bootstrap procedure instructs you to boot nodes sequentially, i.e. only once a node becomes UN ("Up Normal") is it safe to start booting the next node.

Right now, to fix the situation, you can use `nodetool checkAndRepairCdcStreams` to prompt Scylla to create a new CDC generation with the correct number of streams.
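A sketch of that fix against a live cluster (the queries just repeat the checks from your post; substitute the newest timestamp that appears after the repair):

```shell
# Trigger creation of a new CDC generation on the current topology.
nodetool checkAndRepairCdcStreams

# List generations and re-count the rows for the newest one
# (replace <newest timestamp> with the value from the first query).
cqlsh -e "SELECT * FROM system_distributed.cdc_generation_timestamps;"
cqlsh -e "SELECT count(1) FROM system_distributed.cdc_streams_descriptions_v2 WHERE time = '<newest timestamp>';"
```

With all three nodes in the ring, the new generation's count should reflect the full 768 vnodes.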
