Scylla DB use case to replace very slow RDBMS Common Table Expression

I currently employ a relational database CTE that iterates over parentage information to find all ancestors or all descendants of a given entity. Even on a small sample (150,000), it can take over an hour to return a result. I want to query 10 to 20 million entities and return a result in less than a second. I cannot get anywhere close by iterating in response to a user query. I think Scylla may come to the rescue by storing the mass of information produced by iterating as entities are added, so user queries don’t have to wait for the iteration. Is this possible with Scylla?

I am considering the following approach:
Each group of inter-related entities will have its own table (probably needing a Compound Primary Key?):

CREATE TABLE relations.group (
 entity_id INT,
 ancestor_id INT,
 measure DECIMAL,
 PRIMARY KEY (entity_id, ancestor_id)
) WITH CLUSTERING ORDER BY (ancestor_id DESC);

If I understand it right, this table will in some cases have as many as 8 million rows. Each row will potentially have thousands of columns, each representing an ancestor associated with a high-precision measure.

If it didn’t hurt performance or functionality, I would possibly add a few columns of entity metadata, but the functionality really just depends on this core.

A table will need to service the following types of query:

Ancestor query:

SELECT *
FROM relations.group
WHERE entity_id = 101010
ORDER BY measure DESC;

I’ll want to paginate the result set, which I hope will show every ancestor of the entity in descending order of the measure.

Descendant query:

SELECT measure
FROM relations.group
WHERE ancestor_id = 101010
ALLOW FILTERING;

If I added other columns about the entity, I would order the result set by one of these additional columns. Otherwise I would post-process both of these SELECT results with additional metadata from the relational database.

Finally, I would ideally input new entities with something like a relational database SELECT INTO statement, but it seems that can’t be done in CQL. In that case, I was wondering whether something along the lines of the following could be done on the database side before running the INSERT:

SELECT ancestor_id, sum(measure)
FROM relations.group
WHERE entity_id IN (111111, 100000);

The idea is that, for each ancestor the chosen entities have in common, the measure values would be added together.
That aside, one way or another, the immediate ancestors would be selected and the measures of their common ancestors aggregated to put together the ancestors and associated measures for the new entity.
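
If it really can’t be done on the database side, I picture doing the merge on the client with the Python driver. Something like this untested sketch is what I have in mind (the contact point, parent IDs and helper names are just placeholders of mine):

# Untested sketch: merge the ancestor/measure rows of the chosen parent
# entities on the client, summing the measure of any shared ancestor.
from collections import defaultdict
from decimal import Decimal

from cassandra.cluster import Cluster   # ScyllaDB/Cassandra Python driver

cluster = Cluster(['127.0.0.1'])        # contact point is a placeholder
session = cluster.connect()

select_ancestors = session.prepare(
    "SELECT ancestor_id, measure FROM relations.group WHERE entity_id = ?")

def merged_ancestors(parent_ids):
    """Return {ancestor_id: summed measure} across all the given parents."""
    totals = defaultdict(Decimal)       # Decimal() == 0, so += just works
    for pid in parent_ids:
        for row in session.execute(select_ancestors, (pid,)):
            totals[row.ancestor_id] += row.measure
    return dict(totals)

# e.g. merged = merged_ancestors([111111, 100000])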

New entity creation:

INSERT INTO relations.group (entity_id, ancestor_id, measure)
VALUES (124234, 124344, 0.002543251651324851654654654654684351643435464684684656351)
IF NOT EXISTS;

With potentially thousands of ancestors per entity, there is doubtless a way to upload new entities more efficiently; I just haven’t been able to track it down yet.
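
From what I can see of the Python driver, preparing the INSERT once and then executing the rows concurrently may be the more efficient route. Again an untested sketch, building on the merge above (the helper names and concurrency level are assumptions of mine):

# Untested sketch: write all ancestor rows of a new entity with one prepared
# statement, executed concurrently, instead of row-by-row INSERTs.
from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

cluster = Cluster(['127.0.0.1'])        # contact point is a placeholder
session = cluster.connect()

# IF NOT EXISTS dropped here on the assumption that a brand-new entity_id is
# unique anyway; conditional (LWT) writes would slow a bulk load down.
insert_row = session.prepare(
    "INSERT INTO relations.group (entity_id, ancestor_id, measure) "
    "VALUES (?, ?, ?)")

def add_entity(new_id, ancestor_measures):
    """ancestor_measures: {ancestor_id: Decimal}, e.g. from the merge above."""
    args = [(new_id, aid, m) for aid, m in ancestor_measures.items()]
    execute_concurrent_with_args(session, insert_row, args, concurrency=100)

# e.g. add_entity(124234, merged_ancestors([111111, 100000]))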

Overall

I anticipate users being able to query among tens of millions of entities, each with thousands of columns, and return associated measure info for either:

  • one row with all columns (ancestor query), or
  • one column with all associated rows (descendant query).

They also need to be able to add new entities with data for thousands of columns, built by aggregating a small number (at least two) of existing entities.

As long as results can be sorted and paged, no query should be onerous. However, the sort may occasionally have to take into account up to 8 million rows in a descendants query.

My initial test case will be smaller, with about 200,000 entities each having hundreds to thousands of columns, running on a single virtual private server.

Obviously, iterating first generates a lot of data. I’m hoping ScyllaDB can handle it without keeping my users waiting long.

Relational DBs definitely cannot handle the iterate-per-request approach in any kind of reasonable time at the necessary scale.

I think ScyllaDB should be able to handle this amount of data just fine, although thousands of columns can potentially cause problems if the count is towards the higher end (10K or more).

But, to be able to query your data fast, you either have to reshape your data or rethink your queries.

Your ancestor query is ordering by a regular column, which is not supported by CQL. Queries can only be ordered by clustering columns. And this only works for single partition queries.
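
To illustrate with your schema, here is a rough, untested sketch using the Python driver; the contact point is a placeholder and the client-side sort is just one way to deal with ordering by measure:

# A rough sketch (Python driver) of what ordering CQL does support for this
# schema, and the kind of client-side sort needed for "measure".
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])        # contact point is an assumption
session = cluster.connect()

# Supported: single-partition query, ordered by the clustering column.
rows = list(session.execute(
    "SELECT ancestor_id, measure FROM relations.group "
    "WHERE entity_id = %s ORDER BY ancestor_id DESC", (101010,)))

# Not supported server-side: ORDER BY measure (a regular column).
# With at most a few thousand ancestors per entity, one option is to
# sort that single partition in the application instead:
by_measure = sorted(rows, key=lambda r: r.measure, reverse=True)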

Your descendant query is a filtering query, which means it will do a full scan on the entire table, which will not be fast.

So challenging!

The descendant query is the basis on which I would consider switching to a wide-column store. Iterating with relational or graph databases is too slow; will it turn out that iterating first and storing the results in a wide-column store is too slow as well?

What if the number of rows in a table was constrained and the total number of tables increased, so that instead of having one table as large as 10 million rows, it was broken into tables no larger than about 200,000 rows? Would doing a descendants search of a smaller table be likely to return a result in 1.5 seconds or less? If not, how small (in number of rows) would a table need to be to return a result in that timeframe (assuming up to 10,000 columns, with no row having more than about 2,000 columns)?

Worst case

Then, in the worst case, I would have some tables containing a couple of hundred thousand rows in which the queried column may return a ‘measure’ value for every row. It sounds like I would need to find some other means of storing the result (after waiting out the ‘will not be fast’ part) so that I could sort by the ‘measure’ value (‘measure’ not being a clustering column).

How slow?

I’m not sure how long ‘will not be fast’ might take in this worst-case scenario.

Then, in addition, it would take time and resources to end up with a list sorted by ‘measure’ outside of ScyllaDB, since measure isn’t a clustering column.

Number of columns

For the ancestor query, I don’t expect any one row to have much over one to two thousand columns (mostly under 500), but if the total number of unique columns for a table is basically restricted to less than 10,000, that is likely to become an issue as well.

Too hard?

I feel like I’m running out of options if wide-column stores can’t handle this use case. Structurally, it seems like the ideal database construct for this type of problem, but then so did graph databases…

Note that ScyllaDB is not really a wide-column DB anymore. You need to declare up front, in a schema, all the columns that a row will have. Then you are allowed to read/write only columns that appear in said schema, and only according to the type declared in the schema.

Having many smaller tables instead of a few larger ones would probably be counter-productive. Instead, you can split the entire range and query token ranges in parallel. See Efficient full table scans with ScyllaDB 1.6 - ScyllaDB for how that works. Also look into How ScyllaDB Distributed Aggregates Reduce Query Execution Time up to 20X - ScyllaDB; it might help.
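
Very roughly, the parallel approach can look like the untested sketch below with the Python driver; the split count, contact point and the ALLOW FILTERING scan are simplifications on my part, and the posts above describe the real mechanics:

# Untested sketch: scan the table in parallel by splitting the Murmur3 token
# range, filtering each sub-range on the ancestor of interest.
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])        # contact point is a placeholder
session = cluster.connect()

scan = session.prepare(
    "SELECT entity_id, measure FROM relations.group "
    "WHERE token(entity_id) >= ? AND token(entity_id) <= ? "
    "AND ancestor_id = ? ALLOW FILTERING")

def descendants(ancestor_id, splits=64):
    lo, hi = -2**63, 2**63 - 1          # full Murmur3 token range
    step = (hi - lo) // splits
    ranges = [(lo + i * step,
               hi if i == splits - 1 else lo + (i + 1) * step - 1)
              for i in range(splits)]
    futures = [session.execute_async(scan, (start, end, ancestor_id))
               for start, end in ranges]
    rows = []
    for f in futures:
        rows.extend(f.result())
    return rows                          # sort/post-process in the application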

Overall, the only way to find out is to set up a test cluster and start experimenting.

Thank you Botond, you have been very insightful.

In my use case, the potential columns will continuously evolve and cannot be declared up front, though their type can be. I misunderstood the compound primary key as being able to provide for dynamic columns.

The compound primary key allows for many rows within a partition. The column count is fixed in the schema. This can be somewhat relaxed by using collection columns, but note that it is not recommended to use large collections (more than a few dozen elements), for performance reasons.

Good to know. This whole exercise (of trying to apply wide-column DB concepts to my situation) has been quite useful, I think. Pivoting my many columns out into many rows per entity, using a compound primary key and clustering index, may well allow for an RDBMS solution after all!