The ScyllaDB team is pleased to announce the release of ScyllaDB 2026.1, a production-ready ScyllaDB Long Term Support (LTS) Major Release.
More information on ScyllaDB’s Long Term Support (LTS) policy is available here.
Highlights of the 2026.1 release include:
- ScyllaDB Vector Search now supports filtering and quantization capabilities.
- Tablets now support counters, fully bridging the feature gap between Tablets and vNodes.
Upgrade Paths
We’ve updated our upgrade policy to allow upgrades to the next major LTS version from any minor version within the previous major release. You can now upgrade to 2026.1 from any earlier 2025.x release.
See the Upgrade Guide from ScyllaDB 2025.x to ScyllaDB 2026.1.
Relevant Links
New Features
Vector Search: Filtering and Quantization Support
ScyllaDB now includes enhanced filtering and quantization support, enabling more efficient query execution and optimized storage of large vector datasets with a reduced RAM footprint.
The new filtering capabilities allow users to refine result sets with greater precision by querying vector data alongside other metadata attributes, while quantization reduces the memory footprint and improves processing speed for vector-based operations. Together, these enhancements deliver faster responses, lower resource consumption, and a smoother experience for data-intensive applications.
Vector Search is currently available in ScyllaDB Cloud only. Filtering and quantization will be available in ScyllaDB Cloud in upcoming service updates.
For more information on Vector Search and its latest enhancements, see the ScyllaDB Cloud documentation:
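As a sketch of what a filtered vector query can look like in CQL (schema and names are hypothetical, and the ANN syntax shown is Cassandra-style; verify the exact syntax and index options against the ScyllaDB Cloud documentation):

```cql
-- Hypothetical table: an embedding stored next to filterable metadata
CREATE TABLE ks.docs (
    id uuid PRIMARY KEY,
    category text,
    embedding vector<float, 3>
);

-- Filtered similarity search: narrow candidates by metadata, then rank
-- the remainder by vector distance
SELECT id FROM ks.docs
WHERE category = 'news'
ORDER BY embedding ANN OF [0.1, 0.2, 0.3]
LIMIT 10;
```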
Tablets: Counters Support
Tablets now support counters, completing their functional alignment with vNodes. With this addition, tablets provide the same capabilities as vNodes, while delivering improved performance, lower total cost of ownership (TCO), and faster cluster resizing for both vertical and horizontal scaling.
Counter support on tablets also brings this functionality to X Cloud clusters in ScyllaDB Cloud.
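For illustration, counters use the standard CQL counter type, and on a tablets-based keyspace the same schema now works unchanged (keyspace and table names here are hypothetical):

```cql
-- Hypothetical example: a counter table on a tablets-enabled keyspace
CREATE TABLE ks.page_views (
    page text PRIMARY KEY,
    views counter
);

-- Counters are modified with increments, never written directly
UPDATE ks.page_views SET views = views + 1 WHERE page = '/home';
```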
Incremental Repair
Incremental repair introduces a more efficient approach to the repair process, a critical maintenance operation responsible for detecting mismatches between replicas on different nodes and resolving them.
With incremental repair, only the data changes introduced since the last repair are processed. This significantly reduces repair time and resource consumption. As a result, the impact on user workloads is minimized, leading to improved data consistency and faster purging of expired tombstones, which in turn helps reduce latency in delete-heavy workloads.
With the introduction of incremental repair, three repair modes are supported:
- full – Performs a full repair across all SSTables but records repair metadata for future incremental runs. Serves as a baseline for enabling incremental repair.
- incremental – Repairs only unrepaired SSTables, using metadata recorded from earlier runs to skip already repaired data. This mode reduces data movement and shortens repair duration.
- disabled – Completely disables the incremental repair logic for this repair operation. The repair runs as a classic, non-incremental repair and does not read or update any incremental repair status markers. As a result, any subsequent incremental repair will also run as a full repair, since no previous repair history is preserved.
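The bookkeeping implied by the three modes can be sketched as follows (a conceptual model only, not ScyllaDB's implementation; "repaired" stands in for the per-SSTable repair metadata the modes read and record):

```python
# Conceptual model of the three repair modes described above.
def run_repair(sstables, repaired, mode):
    """Return (SSTables processed, updated repair metadata)."""
    if mode == "full":
        # Repair everything and record metadata for future incremental runs.
        return set(sstables), set(sstables)
    if mode == "incremental":
        # Skip SSTables already marked repaired; record the newly repaired ones.
        todo = set(sstables) - set(repaired)
        return todo, set(repaired) | todo
    if mode == "disabled":
        # Classic repair: process everything, neither read nor update metadata.
        return set(sstables), set(repaired)
    raise ValueError(mode)

# A full baseline, then an incremental run only touches the new SSTable:
processed, meta = run_repair({"a", "b", "c"}, set(), "full")
processed, meta = run_repair({"a", "b", "c", "d"}, meta, "incremental")
print(sorted(processed))  # ['d']
```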
Incremental repair provides dramatic speedups over full repair: ~100× faster when 1% of data changed, and ~20× faster when 5% of data changed:
See Incremental Repair in the ScyllaDB documentation for details.
Incremental repair is available only on clusters using tablets-based data distribution (the default on X Cloud clusters in ScyllaDB Cloud).
Additional Improvements
Alternator
- The Time-to-Live (TTL) expiration logic has been enhanced to ensure consistently correct data expiration when running with tablets, even during partial node failures. This was achieved by refining the TTL logic and optimizing the locator function.
scylladb#28803
Compaction
- The overall stability of compaction has been improved by resolving a potential hang in the compaction manager. The logic ensures that maybe_wait_for_sstable_count_reduction() correctly resolves, preventing prolonged stalls.
scylladb#28801
Cloud
- Added a limit to the S3 client for multipart upload concurrency to improve stability during backup and restore. scylladb#28668
- The AWS error logic was updated to correctly handle all restartable nested exception types, increasing the resilience of cloud-based operations against transient errors. scylladb#28345
- Google Cloud Platform (GCP) operations now correctly handle a 429 (Too Many Requests) error code by incorporating an exponential backoff mechanism. This prevents the cluster from overwhelming GCP services and ensures long-running cloud operations complete successfully. scylladb#28724
- Refinements were implemented to ensure reliable and correct backup, restore, and snapshot operations with Google Cloud Storage (GCS). This includes correcting how the storage interface URL encodes object names and ensuring the object storage pager accurately detects the end of a page stream. scylladb#28399
- Reliability and correctness for large S3 file uploads have been ensured by optimizing the multipart upload logic. The calc_part_size function was corrected to guarantee that multipart uploads always adhere to the 10,000-part maximum limit enforced by AWS, preventing upload failures for large objects. scylladb#28697
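The constraint behind the calc_part_size fix can be illustrated with a short sketch (illustrative only; the function body and the 5 MiB minimum are assumptions, not ScyllaDB's code; the 10,000-part maximum is AWS's documented limit):

```python
import math

MAX_PARTS = 10_000               # AWS S3 limit on parts per multipart upload
MIN_PART_SIZE = 5 * 1024 * 1024  # S3 minimum part size (5 MiB), assumed here

def calc_part_size(object_size: int) -> int:
    """Smallest part size that keeps the upload within MAX_PARTS parts."""
    return max(MIN_PART_SIZE, math.ceil(object_size / MAX_PARTS))

# A 1 TiB object forces parts larger than the 5 MiB minimum,
# so the part count never exceeds the AWS limit:
size = 1024**4
parts = math.ceil(size / calc_part_size(size))
print(parts <= MAX_PARTS)  # True
```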
CQL
- When using DESCRIBE TABLES/KEYSPACE/SCHEMA, internal Paxos tables were incorrectly included in the output. This is fixed by explicitly hiding internal Paxos tables, which provides a cleaner schema view for users.
scylladb#28183
Monitoring
- Excessive INFO level logging related to hints during topology operations was adjusted, resulting in more manageable log files and easier log analysis. scylladb#28301
- Improved logging during the restore process. scylladb#28683
- The load statistics refresh logic has been enhanced to gracefully handle dropped tables, ensuring accurate and continuous reporting of load statistics. scylladb#28471
- A race condition that occurred when calculating the sum of tablet sizes in load statistics was fixed. scylladb#28729
Networking
- The client connection factory underwent a major refactoring to improve resilience, incorporating DNS resolution improvements, the introduction of Time-to-Live (TTL) checks, and retry logic. This comprehensive update ensures that clients can reliably establish connections and handles transient DNS or network issues more robustly. scylladb#28405
- Enhanced server responsiveness and connection management were achieved by optimizing the transport layer semaphore logic. This refinement ensures that the code consumes only the initially taken semaphore units during interleaved read/write operations, preventing semaphore depletion and connection starvation. scylladb#28716
Raft
- The cluster’s stability during control plane actions has been significantly increased by refining the Raft control plane logic. This enhancement prevents the chaining of multiple consecutive group 0 leader crashes, eliminates potential null pointer access during topology operations, and makes certain Raft topology assertions non-crashing. scylladb#27987
- Stability during Raft-based topology changes and administrative commands is improved by eliminating a potential source of crashes associated with a use-after-free condition in the Raft topology command handler. scylladb#28772
- The Raft topology was updated to generate notifications about released nodes only once, reducing redundancy. scylladb#28612
- Cluster metadata stability during keyspace drops is ensured by resolving a race condition that could lead to concurrent modification of group 0 (the cluster’s control plane). This refinement maintains proper cleanup and cluster metadata integrity during topology changes. scylladb#25938
Reliability
- Operators can now safely and reliably engage maintenance mode. This enhancement ensures the storage service correctly sets up topology and skips read replica validation when in maintenance mode, preventing a potential crash. scylladb#27988
- A timeout issue when writing to the system.batchlog_v2 table in mixed-version clusters was fixed. The fix re-adds batchlog version 1 support specifically for mixed clusters, ensuring reliable batch write operations during the cluster upgrade process. scylladb#27886
- A shutdown deadlock in the commitlog was fixed, where a waiter could get stuck unsignalled for segments if the replenish loop exited with an empty queue while the shutdown flag was set. scylladb#28693
Repair
- Repair operation stability has been enhanced by correcting an issue with the read-write lock (rwlock) in compaction_state and its lock holder lifecycle, which resolves a potential segmentation fault during the operation. scylladb#27365
- The auto-repair scheduler was updated to skip auto repair for tables using a Replication Factor (RF) of one, reducing unnecessary system load and improving efficiency. scylladb#28714
Security
- The Replicated Key Provider is deprecated and will be removed in a future ScyllaDB release.
scylladb#27270
Key Changes Since 2025.1
This section highlights the most important changes introduced between 2025.1 and 2026.1 for users upgrading directly from 2025.1. It provides a concise, cumulative view of key updates and required actions across these releases.
Alternator
- Per-table metrics and TTL support for tablets – Alternator now supports per-table metrics, allowing better observability of workload performance. Tables using tablets can also define time-to-live (TTL) attributes, enabling automatic expiration of data.
- Improved DynamoDB compatibility and performance – Alternator now aligns more closely with DynamoDB’s GetRecords behavior, ensuring consistent results. Performance is improved through caching of parsed expressions, yielding observed single-node throughput gains of 7–15% depending on query complexity.
Backup
- Native backup production-ready – Introduced experimentally in 2025.2, native backup now delivers up to 15× faster S3 uploads compared to the previous rclone-based approach. Backup scheduling and metadata management remain via ScyllaDB Manager, while data uploads happen directly from the ScyllaDB server. This improves backup performance without impacting user queries.
CQL
- Default Replication Strategy – The CREATE KEYSPACE statement now works without specifying a replication strategy. By default, NetworkTopologyStrategy is used if no strategy is provided.
- Default Replication Factor (RF) – When using NetworkTopologyStrategy without specifying a Replication Factor, the system automatically sets it equal to the number of racks with at least one non-arbiter node.
The above updates allow you to create keyspaces without spelling out replication options. This makes the following syntax valid: CREATE KEYSPACE ks WITH REPLICATION = { };
See the documentation for CREATE KEYSPACE for more details and examples.
Deployment & Platform Support
- Broader AWS instance support (i7i/i7ie and new ARM-based families) and GCP Z3 instances.
- Added support for RHEL 10.
- Removed support for Ubuntu 20.04.
Guardrails & Reliability
- Tablets-only keyspace guardrail – Administrators can now enforce tablets-only keyspaces, preventing unsafe configurations.
- Topology guardrail – Prevents unsafe DC or rack changes on bootstrapped nodes, ensuring consistent tablet operation and vNode safety.
- Out-of-space guardrail – Nodes reaching 98% storage utilization will reject user writes while allowing streaming and tablets migrations, helping prevent full-disk failures and enabling faster recovery through scaling or node replacement.
Raft
- Majority-loss recovery procedure – Provides a safe method for recovering from Raft group 0 majority loss in tablet clusters.
- Automatic voter management – Limits the number of Raft voters (maximum of 5 per cluster) and dynamically promotes/demotes nodes to maintain quorum with minimal overhead.
Security
- Improved audit logs – Syslog output is now machine-parseable, allowing easier automated monitoring and compliance checks.
- Azure Key Vault support – Adds a new provider for encryption-at-rest keys, complementing existing AWS KMS and GCP KMS support.
Storage
- Storage ZSTD + dictionary compression – New compressor implementations use dictionaries to improve the compression ratio. The dictionaries are shared across SSTables and across all nodes in the cluster. The system automatically generates new dictionaries when it sees a gain in compression ratio.
The new compression is NUMA aware: it distributes the dictionaries across all shards in a node, with one copy per NUMA node, minimizing the performance loss from cross-NUMA-node memory accesses. You can CREATE or ALTER a table to use the new sstable_compression option:
ALTER TABLE keyspace.table WITH compression = {'sstable_compression': 'ZstdWithDictsCompressor'};
Below is a comparison of the storage used for a dataset from Tutorials and Example Datasets | ClickHouse Docs:
- Optional trie-based SSTable index and LZ4 + dictionary compression – Introduces a new trie-based SSTable index format (ms) for faster reads and more compact indexing. Existing SSTables remain compatible, and LZ4 + dictionary compression improves storage reduction for new installations.
The trie-based index is not enabled by default. It can be enabled by setting the sstable_format parameter in the scylla.yaml file to ms.
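For example, opting in to the trie-based index is a one-line configuration change (fragment only; the surrounding scylla.yaml settings are unchanged):

```yaml
# scylla.yaml – opt in to the trie-based ("ms") SSTable index format
sstable_format: ms
```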
Tablets
- Expanded tablets functionality – Tablet tables now support:
  - Materialized Views
  - Secondary Indexes
  - Lightweight Transactions (LWT)
  - Change Data Capture (CDC). CDC can be enabled on a table at creation or afterwards.
At creation:
CREATE TABLE my_table ( pk int, ck int, value text, PRIMARY KEY (pk, ck) ) WITH cdc = {'enabled': true};
After a table is created:
ALTER TABLE my_table WITH cdc = {'enabled': true};
- Capacity-aware load balancing – Tablet load balancing now accounts for node capacity, preventing nodes from reaching 100% utilization while others have free space.
- Cluster-level repair – nodetool cluster repair enables cluster-wide data synchronization for tablets, using a new admin REST API. ScyllaDB Manager automatically invokes this for cluster operations.
Vector Search
- Vector type support – Adds fixed-size vectors to CQL, enabling storage of AI-related vector data.
- Vector Search (Cloud-only) – Introduced in 2025.4, ScyllaDB Cloud added native vector indexing and similarity search for AI workloads. In 2026.1, Vector Search was enhanced with filtering and quantization support.
See the 2025.x release notes for details:

