How to configure Java Memory in a Docker Container
When running a Java application in a Docker container, it is important to configure the JVM memory settings properly, so that the application has enough memory to run correctly while staying within the container's limits.
If you don't specify a value for the --memory flag when starting a Docker container, the container is given a default memory limit. What that default is depends on how you are running Docker.
If you are using Docker on a system whose Linux kernel supports Control Groups (cgroups), a container has no explicit memory limit by default: it can use as much of the host's memory as is available. This means that if you are running Docker on a system with 2GB of available memory, and you are not running any other containers, a container will effectively see something like 1.5GB to 1.8GB of usable memory, depending on the specific version of Docker and the configuration of the host system.
If you are using Docker on a system that does not run a Linux kernel natively, such as Windows or macOS, containers run inside a virtual machine, and the memory available to them is generally much lower, typically around 2GB by default.
In general, it is a good idea to specify a value for the --memory flag when starting a Docker container, both to give the container a consistent amount of memory and to prevent it from consuming too much memory on the host.
To properly set the JVM heap size in a Docker container, you should use the -Xmx option and also set the --memory flag when starting the container to define the container's memory limit. This ensures that the JVM has enough memory available to allocate the desired amount of heap memory.
Java Memory Configuration examples
After the theory, let's see a practical example. We will start a Java application on OpenJDK 17 that prints the maximum heap size (maxMemory) available to the JVM.
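The source of the Example class is not shown in the article; below is a minimal sketch of what it might look like (the class name and output format are assumptions, the only requirement being that it prints Runtime.getRuntime().maxMemory()). Compile it with javac Example.java so that Example.class can be copied into the image.

// Example.java - minimal sketch (assumed implementation, not the original source)
public class Example {

    public static void main(String[] args) {
        // maxMemory() returns the maximum amount of memory the JVM will attempt to use, in bytes
        long maxHeapBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max Heap Size = maxMemory() = " + maxHeapBytes);
    }
}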
Here’s the Dockerfile:
FROM openjdk:17
COPY ./Example.class /tmp
WORKDIR /tmp
ENTRYPOINT exec java Example
Now build the image with the tag javatest and run it without any memory constraint:
docker run -it javatest
Max Heap Size = maxMemory() = 16777216000
Without any setting, the JVM Max Heap size is about 16.7 GB, derived from the memory of the host machine.
Next, let's set a memory constraint on the container with the --memory option:
docker run -it --memory 2g javatest
Max Heap Size = maxMemory() = 536870912
As you can see, the JVM now caps its Max Heap at 512 MB (536870912 bytes), one quarter of the 2 GB container limit.
Next, we will try setting the -Xmx Java option in combination with the Docker --memory flag:
docker run -it -e JAVA_OPTS="-Xmx1512m" --memory 2g javatest
Max Heap Size = maxMemory() = 536870912
What happened? As you can see, JAVA_OPTS had no effect on the Java Max Memory. The reason is that JAVA_OPTS is not an environment variable the JVM reads on its own: it is only a convention, and the entrypoint has to expand it explicitly. To use the JAVA_OPTS environment variable, we need to modify the Dockerfile as follows:
FROM openjdk:17
COPY ./Example.class /tmp
WORKDIR /tmp
ENTRYPOINT exec java $JAVA_OPTS Example
Then, rebuild the image and verify that the -Xmx setting is effective:
docker build -t javatest .
docker run -it -e JAVA_OPTS="-Xmx1512m" --memory 2g javatest
Max Heap Size = maxMemory() = 1585446912
Finally, if you want to inject JVM settings without changing the Dockerfile, you can use the JAVA_TOOL_OPTIONS environment variable. JAVA_TOOL_OPTIONS is picked up by the JVM itself, so it works in any Java container image. For example:
docker run -it -e JAVA_TOOL_OPTIONS="-Xmx1512m" --memory 2g javatest
Picked up JAVA_TOOL_OPTIONS: -Xmx1512m
Max Heap Size = maxMemory() = 1585446912
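If you want to confirm, from inside the container, which options the JVM actually picked up, one way (a sketch with an assumed class name, not part of the original example) is to print the JVM input arguments through the management API together with the resulting max heap size:

import java.lang.management.ManagementFactory;

// JvmArgsCheck.java - sketch: prints the options the JVM was started with
// (including those injected via JAVA_TOOL_OPTIONS) and the resulting max heap size.
public class JvmArgsCheck {

    public static void main(String[] args) {
        System.out.println("JVM input arguments: "
                + ManagementFactory.getRuntimeMXBean().getInputArguments());
        System.out.println("Max Heap Size = maxMemory() = " + Runtime.getRuntime().maxMemory());
    }
}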
This article discussed the basics of setting Java memory in a Docker container. Please note that Java container images, such as the WildFly container image, may provide specific environment variables to set the Java memory. For example: Configuring JVM settings on OpenShift
Improved Docker Container Integration with Java 10
Many applications that run in a Java Virtual Machine (JVM), including data services such as Apache Spark and Kafka as well as traditional enterprise applications, are run in containers. Until recently, running the JVM in a container caused problems with memory and CPU sizing and usage that led to performance loss, because Java did not recognize that it was running in a container. With the release of Java 10, the JVM now recognizes constraints set by container control groups (cgroups), and both memory and CPU constraints can be used to manage Java applications directly in containers. These improvements include (see the short sketch after this list for a way to observe them):
- adhering to memory limits set in the container
- setting available CPUs in the container
- setting CPU constraints in the container
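A simple way to observe these improvements (a sketch, with an assumed class name) is to print the values the JVM derives from the container limits, and to run the same class under Java 8 and Java 10 in an identically constrained container:

// ContainerLimits.java - sketch to compare container awareness across JVM versions
public class ContainerLimits {

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // On Java 10+ these values reflect the cgroup limits of the container;
        // on Java 8/9 they reflect the resources of the host machine.
        System.out.println("Available processors: " + rt.availableProcessors());
        System.out.println("Max heap (bytes): " + rt.maxMemory());
    }
}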
Java 10 improvements are available in both Docker for Mac and Windows and Docker Enterprise Edition environments.
Container Memory Limits
Up to and including Java 9, the JVM did not recognize memory or CPU limits set on the container. In Java 10, memory limits are automatically recognized and enforced.
Java defines a server-class machine as having 2 CPUs and 2GB of memory, and the default heap size is ¼ of the physical memory. For example, consider a Docker Enterprise Edition installation with 2GB of memory and 4 CPUs, and compare the difference between containers running Java 8 and Java 10. First, Java 8:
docker container run -it -m512M --entrypoint bash openjdk:latest
$ docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
uintx MaxHeapSize := 524288000
openjdk version "1.8.0_162"
The max heap size is about 500M, roughly ¼ of the 2GB available to the Docker EE installation, rather than being derived from the 512M limit set on the container. In comparison, running the same commands on Java 10 shows that the memory limit set on the container is respected and the max heap size is close to the expected 128M (¼ of 512M):
docker container run -it -m512M --entrypoint bash openjdk:10-jdk
$ docker-java-home/bin/java -XX:+PrintFlagsFinal -version | grep MaxHeapSize
size_t MaxHeapSize = 134217728
openjdk version "10" 2018-03-20
Setting Available CPUs
By default, each container's access to the host machine's CPU cycles is unlimited, but various constraints can be set to limit that access. Java 10 recognizes these limits:
docker container run -it --cpus 2 openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 2
By default, all containers get the same proportion of CPU cycles. This proportion can be modified by changing the container's CPU share weighting relative to the weighting of all other running containers. The proportion only applies when CPU-intensive processes are running: when tasks in one container are idle, other containers can use the leftover CPU time, so the actual amount of CPU time varies with the number of containers running on the system. Java 10 recognizes CPU shares as well:
docker container run -it --cpu-shares 2048 openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 2
The cpuset constraint restricts which CPUs a container is allowed to execute on, and Java 10 recognizes it as well:
docker run -it --cpuset-cpus="1,2,3" openjdk:10-jdk
jshell> Runtime.getRuntime().availableProcessors()
$1 ==> 3
Allocating memory and CPU
With Java 10, container settings can be used to estimate the memory and CPUs that need to be allocated to deploy an application. Let's assume that the memory heap and CPU requirements for each process running in a container have already been determined and JAVA_OPTS has been set accordingly. For example, suppose an application is distributed across 10 nodes: five nodes require 512MB of memory with 1024 CPU-shares each, and the other five nodes require 256MB with 512 CPU-shares each. Note that one CPU's share is represented by a weighting of 1024.
For memory, the heaps alone add up to 5 × 512MB + 5 × 256MB = 3840MB (about 3.75GB), so with JVM overhead on top of the heap the application would need roughly 5GB allocated at minimum.
For CPU, the shares add up to 5 × 1024 + 5 × 512 = 7680, or 7.5 CPUs, so the application would require about 8 CPUs to run efficiently.
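As a quick sanity check of that arithmetic, here is a throwaway sketch (not part of the original article):

// SizingEstimate.java - reproduces the memory and CPU-share sums used above
public class SizingEstimate {

    public static void main(String[] args) {
        int heapMb = 5 * 512 + 5 * 256;        // 3840 MB of heap across the 10 nodes
        int cpuShares = 5 * 1024 + 5 * 512;    // 7680 CPU shares in total
        System.out.println("Total heap: " + heapMb + " MB (roughly 5 GB with JVM overhead)");
        System.out.println("Total CPU shares: " + cpuShares
                + " (= " + (cpuShares / 1024.0) + " CPUs, rounded up to 8)");
    }
}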
Best practice suggests profiling the application to determine the memory and CPU allocations for each process running in the JVM. However, Java 10 removes the guesswork when sizing containers, preventing out-of-memory errors in Java applications as well as allocating sufficient CPU to process workloads.