Bootstrap repair of us-west nodes takes a long time in multi-DC cluster

Hi,

We have a multi-DC ScyllaDB cluster, with 10 nodes in AWS us-east-1 and 10 nodes in us-west-2. We use scylla-ansible-role to bring up new clusters. We have observed that bootstrapping nodes in us-west-2 takes much longer than in us-east-1, and it looks like the table repair during bootstrap is what takes so long. Any pointers on how to debug and fix this?

Thanks,
Swaroop

Please provide the relevant logs that show what is taking time.

Do the nodes in the different DCs have differing shard counts? Are you using RBNO-based bootstrap?
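If you are not sure about RBNO, you can check whether it is explicitly set in scylla.yaml. A minimal sketch (the config path is an assumption for a default package install; adjust it for your deployment):

```python
import yaml  # requires PyYAML

# Assumed default config location for a package install.
CONF_PATH = "/etc/scylla/scylla.yaml"

with open(CONF_PATH) as f:
    conf = yaml.safe_load(f) or {}

# enable_repair_based_node_ops toggles repair-based node operations (RBNO)
# for bootstrap/replace; if it is unset, the version's default applies.
value = conf.get("enable_repair_based_node_ops")
print("enable_repair_based_node_ops:",
      "not set (version default applies)" if value is None else value)
```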

Here are a few lines from the logs. It looks like the repair during bootstrap takes a lot longer in us-west-2: in us-east-1 it took 13 seconds, whereas in us-west-2 it took about 27 minutes.

In us-east-1:
Feb 22 13:06:07 x.y.z.ec2.internal scylla[2966]: [shard 0:stre] repair - bootstrap_with_repair: started with keyspaces={system_traces, system_distributed_everywhere, system_distributed, system_auth}, nr_ranges_total=9179
Feb 22 13:06:20 x.y.z.ec2.internal scylla[2966]: [shard 0:stre] repair - bootstrap_with_repair: finished with keyspaces={system_traces, system_distributed_everywhere, system_distributed, system_auth}

In us-west-2:
Feb 22 13:07:34 a.b.c.ec2.internal scylla[3055]: [shard 0:stre] repair - bootstrap_with_repair: started with keyspaces={system_traces, system_distributed_everywhere, system_distributed, system_auth}, nr_ranges_total=9421
Feb 22 13:34:28 a.b.c.ec2.internal scylla[3055]: [shard 0:stre] repair - bootstrap_with_repair: finished with keyspaces={system_traces, system_distributed_everywhere, system_distributed, system_auth}
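For reference, those durations are just the difference between the started/finished journal timestamps above. A minimal sketch of the arithmetic, with the log lines abbreviated to their timestamp prefixes:

```python
from datetime import datetime

def repair_duration(start_line: str, end_line: str) -> float:
    """Elapsed seconds between two syslog-style journal lines ("Feb 22 13:06:07 ...")."""
    fmt = "%b %d %H:%M:%S"
    start = datetime.strptime(" ".join(start_line.split()[:3]), fmt)
    end = datetime.strptime(" ".join(end_line.split()[:3]), fmt)
    return (end - start).total_seconds()

# Timestamp prefixes copied from the log excerpts above.
print(repair_duration("Feb 22 13:06:07", "Feb 22 13:06:20"))  # us-east-1: 13.0 s
print(repair_duration("Feb 22 13:07:34", "Feb 22 13:34:28"))  # us-west-2: 1614.0 s (~27 min)
```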