Java-based API memory consumption out of control

Full disclosure: I’m new to releasing Java-based software into production. I come from a PHP and Python background, so please forgive any ignorance on my part.

I wrote a “microservice” (not the true definition, just a small API) using Spark with Kotlin. It has only a few endpoints and is very small. It connects to a MySQL database and returns moderate-length JSON payloads. It does nothing more complicated than take in some parameters, craft an SQL query, and spit the results back out pretty much verbatim.
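
For a rough picture, the whole service is shaped more or less like this. This is a heavily simplified sketch: the endpoint, table, column, and credential names are invented, and I’ve left out connection pooling and error handling to keep it short.

```kotlin
import com.google.gson.Gson
import java.sql.DriverManager
import spark.Spark.get
import spark.Spark.port

// All endpoint, table, column, and credential names here are invented stand-ins.
fun main() {
    port(8080)
    val gson = Gson()
    val dbUrl = "jdbc:mysql://db-host:3306/appdb"

    // e.g. GET /orders?customerId=42 -> JSON array of matching rows
    get("/orders") { req, res ->
        res.type("application/json")
        val customerId = req.queryParams("customerId")?.toLongOrNull()
        if (customerId == null) {
            res.status(400)
            gson.toJson(mapOf("error" to "customerId is required"))
        } else {
            // Sketch opens a connection per request purely for brevity.
            DriverManager.getConnection(dbUrl, "user", "pass").use { conn ->
                conn.prepareStatement(
                    "SELECT id, status, total FROM orders WHERE customer_id = ?"
                ).use { stmt ->
                    stmt.setLong(1, customerId)
                    stmt.executeQuery().use { rs ->
                        val rows = mutableListOf<Map<String, Any?>>()
                        while (rs.next()) {
                            rows += mapOf(
                                "id" to rs.getLong("id"),
                                "status" to rs.getString("status"),
                                "total" to rs.getBigDecimal("total")
                            )
                        }
                        gson.toJson(rows)
                    }
                }
            }
        }
    }
}
```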

As for deployment: it’s launched by compiling a fat JAR, mounting it into the official Java 8 Docker container, and executing the JAR as the entrypoint. It’s deployed via Kubernetes and allocated 2 GB of memory.

It’s been up for about six months now, taking the brunt of tens of thousands of requests per day from day one, and for the most part it runs smoothly.

There’d be the occasional fall-over that Kubernetes would auto-restart, and I’ve taken great care to handle problems in the code gracefully (uncaught exceptions, potential fault points, that kind of thing).

Lately, however, it’s gotten bonkers: hundreds of fall-overs in a couple of weeks, plus some outright downtime and service disruptions. Our traffic hasn’t changed much at all; it’s very consistent.

It seems the problem is memory. This has led my boss to believe there may be a memory leak, but I’m not so sure. I was under the impression I could enforce an upper memory limit with the JVM switches, not realizing they only cap the heap size. Having set them anyway, it doesn’t look like it’s helping all that much. I’ve also adjusted the max thread pool size and timeouts as per the Spark docs.
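
For concreteness, this is the kind of thing I mean by the JVM switches and the Spark thread-pool settings. The numbers below are placeholders rather than my exact production values.

```kotlin
import spark.Spark.get
import spark.Spark.threadPool

// Placeholder numbers, not my real production values.
// The container entrypoint is roughly:
//   java -Xmx1536m -XX:MaxMetaspaceSize=256m -jar service.jar
// (-Xmx only caps the heap; metaspace, thread stacks, and direct buffers
//  are counted on top of it against the container's 2 GB limit)
fun main() {
    // Spark's embedded Jetty pool: maxThreads, minThreads, idle timeout in ms.
    // This has to be configured before the first route is declared.
    threadPool(50, 5, 30_000)

    get("/health") { _, _ -> "ok" }
}
```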

So, if you were deploying a microservice with a 2 GB memory target, what kinds of settings would you use to keep memory usage within that limit? What can I do?

Thank you!