- What is your experience? Can you confirm my experimental findings?
- Can I generally use total RAM - 600 MB, or 0.4 * total RAM? Or is it always trial and error, hoping that the value is low enough?
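(For the t3.small mentioned below, those two heuristics would give roughly 2048 - 600 ≈ 1450 MB versus 0.4 × 2048 ≈ 820 MB, which is quite a spread.)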
Context: I'm trying to set up Jenkins on a T3 instance, experimenting with Ubuntu Server 16.04 and 18.04.
I started with a t3.micro instance (1 GB RAM), but found the OOM killer killing my Java process as soon as I set more than about -Xmx400m, which seems rather low. I was expecting to be able to use something closer to -Xmx750m.
Does this mean Ubuntu Server requires about 600 MB to work?
The problem is that the Java process starts even if I set both -Xms and -Xmx to a value that turns out to be too high, like 700m; it is only killed later, when I make the first request to the website.
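To make sure it really is the OOM killer, and to see how far above -Xmx the process actually grows, this is roughly what I check (the jenkins.war process name is just how my process shows up; your setup may differ):

```
# Confirm the OOM killer did it: look for its traces in the kernel log
sudo dmesg -T | grep -i -E "out of memory|oom-kill|killed process"
# or, via the journal on Ubuntu 16.04/18.04
sudo journalctl -k | grep -i oom

# While the process is still alive, watch its resident set size (RSS, in KiB).
# RSS is what counts against physical memory, and it is typically well above -Xmx
# because of metaspace, thread stacks, the code cache and other native allocations.
ps -o pid,rss,vsz,cmd -p "$(pgrep -f jenkins.war)"
```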
I'm now experimenting with a t3.small instance (2 GB of RAM), but I'm again unsure what to configure.
On Windows it is kind of deterministic: I set both -Xms and -Xmx to the same value. If the service fails to start, the value was too high. If the service starts successfully, the value is fine, and the memory is reserved for my process.
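For reference, on Linux I launch it roughly like this (the path is just an example, and I'm not sure -XX:+AlwaysPreTouch is the right answer, but it makes the JVM write to the whole heap at startup, so a too-large heap should blow up immediately instead of at the first request):

```
# Hypothetical start command for this experiment (path is illustrative).
# -Xms equal to -Xmx fixes the heap size up front, and
# -XX:+AlwaysPreTouch makes the JVM touch every heap page during startup,
# which is the closest I can get to the "reserved" behaviour I see on Windows.
java -Xms700m -Xmx700m -XX:+AlwaysPreTouch -jar /usr/share/jenkins/jenkins.war
```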
Some background:
- https://plumbr.io/blog/memory-leaks/why-does-my-java-process-consume-more-memory-than-xmx
- https://support.cloudbees.com/hc/en-us/articles/115002718531-Memory-Problem-Process-killed-by-OOM-Killer
- Avoid linux out-of-memory application teardown
- Effects of configuring vm.overcommit_memory
- PostgreSQL seems to have a similar problem: https://dba.stackexchange.com/questions/170596/permanently-disable-oom-killer-for-postgres (and https://www.postgresql.org/docs/current/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT)
From the comments:

When a program allocates memory (with malloc() or something like that), the Linux kernel will decide whether or not it can allocate (reserve) that amount of memory for the given program. With the default settings the kernel usually decides that almost every allocation is ok and will allocate (reserve) way more memory than is present in the system. This behaviour is desirable, because most memory that is allocated is never used (or at least it is never written to), so the over-allocation is required to be able to use the whole system memory. – Andreas Rogge May 09 '19 at 21:52

The options are:
- add more memory to the system
- disable memory over-commit so things will fail really early on allocation (see the sysctl sketch below)
- make your applications use less memory (they can still allocate as much as before)
– Andreas Rogge May 09 '19 at 21:56
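If I understand the second option correctly, disabling over-commit would look roughly like this (the values and the file name are only an illustration, not a recommendation):

```
# Show the current policy: 0 = heuristic overcommit (default), 1 = always, 2 = strict
sysctl vm.overcommit_memory vm.overcommit_ratio

# Strict accounting: the commit limit becomes swap + overcommit_ratio% of RAM,
# so a too-large allocation fails immediately instead of being OOM-killed later.
sudo sysctl -w vm.overcommit_memory=2
sudo sysctl -w vm.overcommit_ratio=80

# Persist across reboots (the file name is my choice; any *.conf under sysctl.d works)
printf "vm.overcommit_memory=2\nvm.overcommit_ratio=80\n" | sudo tee /etc/sysctl.d/90-overcommit.conf
```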