Originally from the User Slack
@scyllero: Hi, new user here. Are ScyllaDB multi-table batch updates atomic and isolated within the same node?
@Botond_Dénes: No, multi-table batches are never atomic and are not isolated.
Only batches that affect the same partition of the same table are atomic.
@scyllero: “… dml statements to achieve atomicity and isolation when targeting a single partition or only atomicity when targeting multiple partitions”
I got that from Cassandra docs for batch command.
So I thought atomicity is always guaranteed (through logged batches), but atomicity AND isolation only if the batch comprises a single partition of a single table.
Can you confirm your second answer, please? That changes a lot for me.
@Botond_Dénes: Multi-partition batches (from the same table) are only atomic if they happen to all be owned by the same replica. And this is very hard to guarantee and thus it is best to consider it non-atomic.
@scyllero: Ok, so basically there is no way that I can create, let's say, an order along with its items, ATOMICALLY.
I mean, this wouldn’t work atomically:
BEGIN BATCH
  INSERT INTO orders … (order_id = 1)
  INSERT INTO order_items … (order_id = 1)
APPLY BATCH
@Botond_Dénes: It depends: if all the content of the batch refers to a single partition, then it is atomic. Otherwise, you may observe a state where some of the statements have already been applied but others haven't yet.
You can denormalize your data model and have everything you want to update in a single table and partition. Then you can make the updates atomic.
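A minimal sketch of the denormalized data model described above (table and column names are illustrative, not from the thread): the order header and its items live in one table under the same partition key, so a batch that touches only that partition is applied atomically.

```sql
-- Hypothetical denormalized table: one partition per order.
-- A static column holds order-level (header) data once per partition;
-- each clustering row is one order item.
CREATE TABLE orders_with_items (
    order_id  int,
    item_id   int,
    customer  text STATIC,
    quantity  int,
    PRIMARY KEY (order_id, item_id)
);

-- Every statement targets partition order_id = 1,
-- so this batch is atomic (single partition, single table).
BEGIN BATCH
    INSERT INTO orders_with_items (order_id, customer) VALUES (1, 'alice');
    INSERT INTO orders_with_items (order_id, item_id, quantity) VALUES (1, 101, 2);
    INSERT INTO orders_with_items (order_id, item_id, quantity) VALUES (1, 102, 1);
APPLY BATCH;
```

The trade-off is the usual one for denormalization: reads of a whole order become a single-partition query, but item data can no longer be queried independently of its order without a secondary index or another table.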
@scyllero: I'm not getting why it depends. In that example we are talking about two partitions from two different tables:
partition order_id=1 for orders AND
partition order_id=1 for order_items
@Botond_Dénes: Yes, I noticed that after I typed my answer.
In that case indeed there is no way to make it atomic.
@scyllero: Done, thx