Reshape during node restart using too much disk space?

While upgrading my nodes from 5.1.0 to 5.1.2, I noticed that one node decided to do a reshape operation (which took a couple of hours).

During this I happened to be df'ing and noticed it was using quite a bit of disk space, roughly double the normal amount:

Pre-reshape:
Filesystem     1K-blocks       Used  Available Use% Mounted on
/dev/nvme1n1   3661060799 1318457771 2342603028  37% /var/lib/scylla

During the reshape, near the end of it:
Filesystem     1K-blocks       Used  Available Use% Mounted on
/dev/nvme1n1   3661060799 2582195011 1078865788  71% /var/lib/scylla
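To quantify that "roughly double", here is a quick sanity check of the df figures above (a minimal sketch, just plugging in the 1K-block numbers):

import math

KIB = 1024

pre_used_kib = 1_318_457_771      # used before the reshape
peak_used_kib = 2_582_195_011     # used near the end of the reshape
total_kib = 3_661_060_799         # filesystem size

ratio = peak_used_kib / pre_used_kib
extra_tib = (peak_used_kib - pre_used_kib) / 1024**3

print(f"peak/pre usage ratio: {ratio:.2f}x")                 # ~1.96x
print(f"temporary extra space: ~{extra_tib:.2f} TiB")        # ~1.18 TiB
print(f"peak utilization: {peak_used_kib / total_kib:.0%}")  # ~71%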

My concern is: what would have happened if my node had started out at over 50% disk usage? Would the reshape have been able to complete?

I’m using only the LCS compaction strategy, so I thought it would be “safe” to go over 50% disk usage, but is that an incorrect assumption?
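In case it helps anyone planning an upgrade, here is a minimal sketch of a pre-restart headroom check. The 2x factor is my own assumption based on the numbers above, not a documented requirement, and the path is just the default Scylla data directory:

import shutil

# Hypothetical pre-restart check: warn if the Scylla data mount cannot absorb
# a reshape that temporarily doubles on-disk usage. The 2x factor is an
# assumption drawn from the df output in this thread, not a guarantee.
DATA_DIR = "/var/lib/scylla"
ASSUMED_RESHAPE_FACTOR = 2.0

usage = shutil.disk_usage(DATA_DIR)
projected_peak = usage.used * ASSUMED_RESHAPE_FACTOR

print(f"used: {usage.used / 1024**4:.2f} TiB of {usage.total / 1024**4:.2f} TiB")
if projected_peak > usage.total:
    print("WARNING: a reshape that temporarily doubles usage would not fit")
else:
    print("projected peak usage fits within the filesystem")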

That’s a problem we need to address. Brian, could you please open a GitHub issue with all this info?

Link to the GitHub issue: scylladb/scylladb#12495 — “Reshape during node restart using too much disk space?”