INSERT query execution time suddenly slows down

INSERT query execution time suddenly slows down when the number of columns is approximately 235 or more.
I would like to know whether this is a problem that can be solved by data modeling or other techniques, or whether it is a performance limitation of ScyllaDB.
If it can be solved, I would appreciate it if you could let me know how.

Setup

  • Node: one (for now)
  • Data types:
    • Primary key: text
    • 6 columns common to all tables: text
    • Remaining columns: double

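For reference, here is a minimal sketch of the kind of schema described above (the keyspace, table, and column names are hypothetical; the real table has ~229 double columns, of which only a few are shown):

```
CREATE TABLE ks.table_b (
    id       text PRIMARY KEY,  -- primary key: text
    common_1 text,              -- 6 text columns common to all tables
    common_2 text,
    common_3 text,
    common_4 text,
    common_5 text,
    common_6 text,
    m_001    double,            -- remaining columns: double
    m_002    double,
    m_003    double
);
```
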
INSERT query execution times

  • Table A (232 columns total): approx. 0.0007 sec per query.
  • Table B (235 columns total): approx. 0.0095 sec per query on average [*1].
  • Table C (238 columns total): approx. 0.040 sec per query.
  • Table D (3477 columns total): approx. 0.045 sec per query.

[*1] Most of the time it takes only 0.0007 sec, but sometimes it takes 0.04 sec.

ScyllaDB is not designed for 3000 columns per table
(see Limits | ScyllaDB Docs).
I suggest revisiting the data modeling.
For wide table design vs. narrow table design, see Data model | ScyllaDB Docs.
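
For illustration, a narrow-table sketch of the same data (a sketch only; names are hypothetical, and it assumes the double columns are homogeneous metrics): instead of one column per metric, store one clustering row per metric, so the column count stays fixed no matter how many metrics are added.

```
CREATE TABLE ks.table_b_narrow (
    id          text,
    metric_name text,
    common_1    text static,  -- per-partition value shared by every row
    -- common_2 .. common_6 could likewise be static columns
    value       double,
    PRIMARY KEY (id, metric_name)
);

-- One INSERT per metric, all landing in the same partition:
INSERT INTO ks.table_b_narrow (id, metric_name, value)
VALUES ('row-42', 'm_001', 3.14);
```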

Thanks a lot for the info.
I see that there are CQL limits.

  • I created a table with more than 3000 columns and it seems to work, apart from query execution speed. Do the “CQL limits” mean that the behavior is not guaranteed beyond them?
  • Also, since a ~230-column table is within the limit, is it simply normal for INSERT queries to take longer to execute as the column count grows?

Column count has an effect on how expensive it is to merge writes, which translates to the time it takes to commit the write to the memtable as well as to the commitlog. You can use UDTs to improve your latencies; see this blog post for more details: If You Care About Performance Use UDT's
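
For example, a sketch of the UDT approach (type, table, and field names are hypothetical): grouping the doubles into a frozen UDT means the table itself has only a handful of columns, and each write updates a single cell rather than hundreds.

```
CREATE TYPE ks.measurements (
    m_001 double,
    m_002 double,
    m_003 double
    -- ... add as many fields as needed
);

CREATE TABLE ks.table_b_udt (
    id       text PRIMARY KEY,
    common_1 text,            -- common_2 .. common_6 as before
    data     frozen<measurements>
);

INSERT INTO ks.table_b_udt (id, common_1, data)
VALUES ('row-42', 'foo', {m_001: 1.0, m_002: 2.0, m_003: 3.0});
```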