Originally from the User Slack
@Chanaka_Liyanarachchi**:** I upgraded Scylla from version 4.1 to 5.2, and then to 5.4. Although consistent schema management is enabled by default in 5.4, it was not active in my cluster. I then manually added consistent_cluster_management: true to the scylla.yaml file and performed a rolling restart of all nodes. However, it still appears that consistent schema management is not enabled.
Logs:
[shard 0:stre] raft_group0 - finish_setup_after_join: SUPPORTS_RAFT feature not yet enabled, scheduling upgrade to start when it is.
@avi**:** Maybe you decommissioned a node recently? The cluster might remember the old node.
Look at system.peers.
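A quick way to do that check, assuming cqlsh access to any node, is to query system.peers directly and compare the result against the nodes you expect:

```shell
# List the peers this node remembers (run against the node's own address).
# A decommissioned node still showing up here would explain stale cluster state.
cqlsh 127.0.0.1 -e "SELECT peer, host_id FROM system.peers;"
```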
You might not be using prepared statements correctly.
@Chanaka_Liyanarachchi**:** @avi
I was upgrading a 3-node QA Scylla cluster that was originally running version 4.1 on Ubuntu 18.04. I performed a sequential upgrade of Scylla from 4.1 → 4.2 → 4.3 → 4.4 → 4.5 → 4.6 → 5.0 → 5.1.
After reaching 5.1, I migrated the OS to Ubuntu 22.04 by replacing the nodes. I then continued upgrading Scylla from 5.1 → 5.2 → 5.4.
It was after completing the upgrade to 5.4 that this error occurred.
I checked the system.peers table and could only see the currently active nodes. In the end, I was able to resolve the issue by performing a full cluster restart. Since this is a QA environment, we were able to afford that approach. However, this would not be an ideal solution for a production environment if the same issue occurs.
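For reference, the progress of the internal Raft (group0) upgrade can also be inspected per node without restarting anything. A sketch assuming cqlsh access; the key names below follow the Scylla 5.x documentation, so verify them against your version:

```shell
# Check whether the internal Raft upgrade procedure finished on this node.
# After a successful upgrade the value should be 'use_post_raft_procedures'.
cqlsh 127.0.0.1 -e "SELECT value FROM system.scylla_local WHERE key = 'group0_upgrade_state';"

# Inspect the cluster features this node has enabled (SUPPORTS_RAFT must be
# negotiated by every node before the upgrade can be scheduled).
cqlsh 127.0.0.1 -e "SELECT value FROM system.scylla_local WHERE key = 'enabled_features';"
```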
@avi**:** It’s possible that a second rolling restart was needed. The first rolling restart gets the cluster to notice consistent_cluster_management: true, and the second breaks the connections and re-establishes them so the features are re-negotiated. I’m not 100% sure, too much time has passed.
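For completeness, the standard per-node sequence for such a rolling restart looks roughly like this (service name per a package install; adjust to your setup, and do one node at a time):

```shell
# Flush and stop accepting traffic cleanly, then restart the service.
nodetool drain
sudo systemctl restart scylla-server

# Wait until this node is back Up/Normal (UN) before moving to the next node.
nodetool status
```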
@Chanaka_Liyanarachchi**:** Thanks, I’ll try and let you know
@Guy: Hey @Chanaka_Liyanarachchi did this solve the issue?
@Chanaka_Liyanarachchi**:** Not really, but I had to stop all the nodes and then start them one by one (a full cluster restart), and then it worked. I can’t do that on prod though.
@avi**:** Suggest to reproduce and try a second rolling restart (perhaps via docker images, you don’t need a “real” cluster)
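A throwaway cluster for such a reproduction can be spun up with the official Docker image. A minimal sketch, with image tags and flags per ScyllaDB’s Docker documentation (pick the version you want to test):

```shell
# Start a seed node, then join two more nodes to it via --seeds.
docker run --name scylla-1 -d scylladb/scylla:5.2.0 --smp 1
SEED=$(docker inspect -f '{{ .NetworkSettings.IPAddress }}' scylla-1)
docker run --name scylla-2 -d scylladb/scylla:5.2.0 --smp 1 --seeds "$SEED"
docker run --name scylla-3 -d scylladb/scylla:5.2.0 --smp 1 --seeds "$SEED"

# Once all three are Up/Normal, try the config change plus a second rolling restart.
docker exec -it scylla-1 nodetool status
```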