Hi, are there any plans for TRIGGERS of any kind on the ScyllaDB Roadmap?
Also how about functions that go beyond UDF/UDA, to e.g. read/write into other tables?
References
Hi
No, there is no plan for triggers in the short to mid term.
Note that UDF/UDA are close, but not yet production-ready.
Similar to the DynamoDB reference you included, one can use the ScyllaDB CDC feature to look for an event and run application-level logic.
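For reference, turning on CDC for a table is a one-line schema change (the table and keyspace names below are hypothetical):

```sql
-- Enable CDC on the table; subsequent changes become queryable from the
-- auto-created shop.outbox_scylla_cdc_log table, which an application
-- (or a CDC connector) can tail to drive its own event-handling logic.
ALTER TABLE shop.outbox WITH cdc = {'enabled': true};
```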
Tzach
Thanks @tzach for your reply. One use case I've had in mind is building a sort of outbox pattern. Since it's not recommended (or practical) to enable CDC on a high number of tables, it would be interesting to create a single "outbox" table with CDC enabled, and push events into it upon writing to many other tables (via TRIGGERS).
But maybe that's actually "putting the cart before the horse". I guess what I'm asking for, or planning to work around, is having a DB-wide WAL? Would that be something worth exploring?
A single outbox table is definitely the way to go, but it is not clear exactly why you would need a trigger. In fact, there are concerns with that approach: the tables can have different partitioning (so your updates end up on different sets of replicas), or even live in different DCs (should a table belong to a different keyspace), all of which could introduce availability concerns and cause your trigger to fail (or worse, failures accumulate and over time degrade the performance of the entire system).
What is feasible is to write to both your primary datastore and to an outbox table, and then consume the tail of the outbox table (with CDC or not) over time.
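A minimal sketch of that "primary table + outbox table" write path. Everything here is hypothetical (the `shop.orders` and `shop.outbox` tables, the `build_outbox_statements` helper, and the column layout); with a live session you would execute both statements via the Python driver, either in one batch or concurrently:

```python
import json
import time
import uuid

def build_outbox_statements(order_id, payload):
    """Return (cql, params) pairs for the primary-table write and the
    matching outbox-table write that a downstream consumer will tail."""
    primary = (
        "INSERT INTO shop.orders (order_id, payload) VALUES (%s, %s)",
        (order_id, json.dumps(payload)),
    )
    # The outbox row carries a time-based ordering key; a small fixed
    # bucket value keeps partitions bounded (sharding scheme assumed).
    outbox = (
        "INSERT INTO shop.outbox (bucket, event_time, event_id, payload) "
        "VALUES (%s, %s, %s, %s)",
        (0, int(time.time() * 1000), uuid.uuid4(), json.dumps(payload)),
    )
    return primary, outbox
```

With the Scylla/DataStax Python driver, the two statements could then be added to a single `BatchStatement` (the batching case discussed below) or issued as two concurrent `session.execute_async` calls (the dual-write case).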
However, I see where you are coming from given "Are BATCH insert/update into different tables logged + safe?" (which is ultimately tied to scylladb/scylladb#13390), but you will ultimately need to decide whether to batch both entries (to different tables) together, or write them concurrently.
For batching, this incurs an entry in the batchlog, and the updates are applied asynchronously. So it may happen that you consume an event from the outbox table before the record gets persisted in the main service table (though that should be rare), but that doesn't mean the main table record is lost: there's a built-in retry mechanism ensuring that all entries are eventually applied.
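The batched variant in CQL would look roughly like this (table names and columns are the same hypothetical ones as above):

```sql
-- A logged batch: both inserts are first recorded in the batchlog, so
-- they are applied all-or-nothing, though not necessarily at the same
-- instant on every replica.
BEGIN BATCH
  INSERT INTO shop.orders (order_id, payload)
    VALUES (uuid(), '{"x": 1}');
  INSERT INTO shop.outbox (bucket, event_time, event_id, payload)
    VALUES (0, toTimestamp(now()), uuid(), '{"x": 1}');
APPLY BATCH;
```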
For parallel writes, you would need to work around the old-fashioned "dual write" problem, so indeed not fun.