Originally from the User Slack
@Ahmed: Can anyone guide me through enabling tablets for my existing keyspace, upgraded from 5 to 6.2? I tried sstableloader, but it gives "unsupported SSTable format or version".
@avi: It’s not possible to help with such a vague description of the problem
@Ahmed: Hi Avi,
Thanks for your response. Let me clarify:
• I've migrated my cluster from version 5.x to 6.2.
• I have a keyspace with tablets disabled, and its size is approximately 3 TB.
• My goal is to enable tablets for this keyspace. However, as far as I understand, I can't simply use an ALTER command to achieve this.
• I attempted to use sstableloader for this migration, but it reports "unsupported SSTable format or version."
I've searched for documentation on enabling tablets for existing keyspaces post-migration but haven't found anything relevant. Could you guide me on the proper procedure for enabling tablets in this scenario?
Thanks in advance!
@avi: First, you need to create an empty keyspace with tablets enabled.
To migrate the data, it's best to use nodetool refresh --load-and-stream; see https://opensource.docs.scylladb.com/stable/operating-scylla/nodetool-commands/refresh.html
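The flow above can be sketched as a small helper, assuming the default data layout under /var/lib/scylla/data; the table directory names include a generated UUID, so the paths here are placeholders to verify on your own nodes:

```shell
# stage_and_refresh: copy the old table's sstable files into the new
# table's upload directory, then trigger load-and-stream.
# Paths are parameters because table directories include a generated UUID.
stage_and_refresh() {
  src="$1"      # e.g. /var/lib/scylla/data/main/table1-<uuid>
  upload="$2"   # e.g. /var/lib/scylla/data/main_new/table1-<uuid>/upload
  ks="$3"; tbl="$4"
  cp "$src"/* "$upload"/ &&
    nodetool refresh --load-and-stream "$ks" "$tbl"
}
```

Run it on each node against that node's own data directory; refresh consumes the files it loads from the upload directory.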
@Ahmed: version 6.2
enable_tablets: true in scylla.yaml
CREATE KEYSPACE IF NOT EXISTS test_new WITH replication = {'class': 'org.apache.cassandra.locator.NetworkTopologyStrategy', 'replication_factor': '2'} AND durable_writes = true AND tablets = {'enabled': true};
error
ConfigurationException: Tablet replication is not enabled
@avi: Set enable_tablets in scylla.yaml and perform a rolling restart
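A minimal sketch of that change, assuming a systemd-managed install (service name and file paths may differ on your nodes):

```yaml
# /etc/scylla/scylla.yaml — add on every node before restarting it
enable_tablets: true
```

Then restart the nodes one at a time (e.g. sudo systemctl restart scylla-server), waiting for each node to come back as UN in nodetool status before moving to the next.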
@Ahmed: Issue with the nodetool refresh command
I am encountering an issue when running the following command:
nodetool refresh --load-and-stream main_new table1
The error message is as follows:
Error executing POST request to http://localhost:10000/storage_service/sstables/main_new with parameters {"load_and_stream": "true", "cf": "table1"}: remote replied with status code 500 Internal Server Error:
Failed to load new sstables: seastar::rpc::stream_closed (rpc stream was closed by peer)
Observations:
- Incomplete Data Insertion:
◦ The error occurs after inserting approximately half of the dataset.
- Dataset Size:
◦ The dataset is large, and the issue appears to happen inconsistently at random rows.
- Previous Testing:
◦ A similar setup with just 100k rows was tested in a staging environment with no issues.
Request for Help:
I suspect the issue might be related to invalid or corrupt data in the dataset, but the error message provides no details on the problematic row(s).
• How can I pinpoint the source of the issue?
• Is there a way to configure the system to skip invalid rows instead of quitting entirely?
I’d appreciate any guidance or tools that can help debug and resolve this issue, especially for large datasets.
Some additional info: we copied 56,164 files from the existing table to the new table's upload directory; only 15,434 remain in the upload directory now.
@avi: Check the logs for the actual failure. It’s better to load batches of a smaller number of sstables at a time.
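Loading in smaller batches can be scripted. A rough sketch, where directory paths are placeholders and each sstable's component files are assumed to share the generation prefix of its -Data.db file, so a whole sstable is always moved in together:

```shell
# refresh_in_batches: move sstables into the upload dir a few at a time,
# running refresh --load-and-stream after each batch instead of loading
# tens of thousands of files in one go.
refresh_in_batches() {
  src="$1"; upload="$2"; ks="$3"; tbl="$4"; batch="$5"
  i=0
  for data in "$src"/*-Data.db; do
    prefix="${data%-Data.db}"
    mv "$prefix"-* "$upload"/   # move all component files of this sstable
    i=$((i+1))
    if [ "$i" -ge "$batch" ]; then
      nodetool refresh --load-and-stream "$ks" "$tbl"
      i=0
    fi
  done
  if [ "$i" -gt 0 ]; then       # flush the final partial batch
    nodetool refresh --load-and-stream "$ks" "$tbl"
  fi
}
```

For example, refresh_in_batches /path/to/staged /path/to/upload main_new table1 500 would load 500 sstables per refresh call.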
@Ahmed: I tried importing in multiple chunks and it worked, thanks!
I have been encountering a persistent issue where the logs frequently show errors like the following:
[shard 0:main] storage_proxy - exception during mutation write to 192.168.100.1: std::runtime_error (Key size too large: 70754 > 65535)
@avi: Keys are limited to 64k. This would usually be caught at the CQL layer. But maybe the sstables you imported contained illegal keys.
You can use the scylla sstable dump-data command to search for these large keys
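scylla sstable dump-data emits JSON, so one way to hunt for oversized keys is a jq filter over its output. A rough sketch: the exact JSON shape varies by version, so adjust the path expression to what your dump actually produces; raw keys are hex-encoded, two characters per byte, so the 65,535-byte limit corresponds to 131,070 characters:

```shell
# find_large_keys: read dump-data JSON on stdin and print (truncated)
# raw partition keys longer than 64 KiB. 65535 bytes = 131070 hex chars.
find_large_keys() {
  jq -r '
    .sstables[][]
    | select(.key.raw | length > 131070)
    | .key.raw[0:64]'
}
# usage (paths are placeholders):
# scylla sstable dump-data /var/lib/scylla/data/main_new/table1-*/me-*-Data.db | find_large_keys
```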