Cleaning up memory leaks

Hello, I am running Scylla 5.1.0 on CentOS 7 on a 3-node cluster (128 GB RAM, 32 cores per node).
My application performs around 500 million writes and reads per day against one table, with hourly peaks of 20k ops/s. Keys are 32 bytes, and values are blobs ranging from 1 KB to 200 KB.
All reads and writes go through multi-goroutine Go programs.
My questions are:
1. Is there a limit on the number of Go driver connections? The ScyllaDB FAQ (ScyllaDB Docs) states a maximum of 32 connections.
2. How can I clean up the memory leaks? Every 5-6 days the machine's memory is exhausted and I need to restart ScyllaDB node by node. Is there a better way?

What do you mean by “memory of the machine is exhausted”? What are the symptoms you observe?

Also, 5.1.0 has reached its end of life. Please upgrade to the latest version and see if you are still having issues.