Scylla is killed due to space issue

Installation details
- ScyllaDB version: 6.2
- Cluster size: 3 nodes
- OS (RHEL/CentOS/Ubuntu/AWS AMI): Amazon Linux 2 on EC2

Hi folks, we have the Scylla Operator deployed to Kubernetes using Helm, but the nodes keep getting a disk-pressure taint and the pods are evicted with the following error:
Warning Evicted 7m9s kubelet The node was low on resource: ephemeral-storage. Threshold quantity: 857733132, available: 4252Ki. Container scylla was using 152416Ki, request is 0, has larger consumption of ephemeral-storage. Container scylla-manager-agent was using 88Ki, request is 0, has larger consumption of ephemeral-storage. Container scylladb-api-status-probe was using 8Ki, request is 0, has larger consumption of ephemeral-storage. Container scylladb-ignition was using 384Ki, request is 0, has larger consumption of ephemeral-storage.
Normal Killing 7m9s kubelet Stopping container scylla
I have given the pods 256 MB of ephemeral storage. What am I missing? Thank you for your help!

Your ScyllaDB pods are being evicted because they exceed the ephemeral-storage allocation you've set (256 MB).

The logs clearly state:

The node was low on resource: ephemeral-storage.

256 MB of ephemeral storage is extremely limited for running ScyllaDB, even accounting only for logs and basic container overhead. Note also that the eviction is node-level: the threshold in the event (857733132 bytes, roughly 818 MiB) is the kubelet's eviction threshold for the node's ephemeral storage, and only about 4 MiB was available. Because the containers set no ephemeral-storage request ("request is 0" in the event), they are among the first candidates for eviction when the node comes under disk pressure.

You have two practical solutions here:

  1. Increase ephemeral storage requests and limits
     In your Helm values file (`values.yaml`), increase the ephemeral-storage requests and limits, for example:

```yaml
resources:
  requests:
    ephemeral-storage: "2Gi"
  limits:
    ephemeral-storage: "4Gi"
```

Adjust based on your environment, number of nodes, and expected log verbosity.

  2. Use persistent storage for logs
     Mount a persistent volume for the directories ScyllaDB writes to (logs, scratch data), so those writes do not count against the node's ephemeral storage.
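If you go the persistent-storage route, one way to sketch it in the pod spec (the field names are from the core Kubernetes pod spec, but the claim name and mount path below are assumptions for illustration, not something the Scylla chart provides):

```yaml
# Sketch: back a log directory with a PVC so writes there
# do not count against the node's ephemeral storage.
volumes:
  - name: scylla-logs
    persistentVolumeClaim:
      claimName: scylla-logs-pvc    # hypothetical claim name
containers:
  - name: scylla
    volumeMounts:
      - name: scylla-logs
        mountPath: /var/log/scylla  # assumed log directory
```

Writes into a persistentVolumeClaim-backed mount are charged to the volume, not to the container's writable layer, which is what the kubelet counts as ephemeral storage.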