JVM memory allocation

Original author: Andy Wilkinson

The JVM can be a complex beast. Fortunately, most of that complexity is hidden under the hood, and we, as application developers and those responsible for deployment, often do not need to worry much about it. However, with the growing popularity of deploying applications in containers, it is worth paying attention to how the JVM allocates memory.

Two kinds of memory

The JVM divides memory into two main categories: heap and non-heap. The heap is the part of JVM memory that developers are most familiar with: objects created by the application are stored here, and they remain there until they are reclaimed by the garbage collector. Typically, the heap size an application uses varies with the current load.

Non-heap memory is divided into several areas. In HotSpot, you can use the Native Memory Tracking (NMT) mechanism to explore these areas. Note that although NMT does not track all native memory usage (for example, native memory allocated by third-party code is not monitored), its capabilities are sufficient for most typical Spring applications. To use NMT, start the application with the option -XX:NativeMemoryTracking=summary and then run jcmd <pid> VM.native_memory summary to see a report on the memory used.
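As a concrete sketch of the steps just described (the jar name and the PID are placeholders, not values from the original article):

```shell
# Start the JVM with native memory tracking enabled at summary granularity.
# NMT itself adds a small runtime and memory overhead.
java -XX:NativeMemoryTracking=summary -jar petclinic.jar

# In another terminal, find the JVM's process id, then request the report.
# Replace <pid> with the actual process id (e.g. from `jcmd` with no arguments).
jcmd <pid> VM.native_memory summary
```

The report breaks native memory down into the areas discussed below (Class, Thread, Code, GC, Symbol, Internal, and so on), showing reserved and committed sizes for each.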

Let's look at NMT in action using our old friend Petclinic as an example. The diagram below shows JVM memory usage according to NMT data (minus NMT's own overhead) when Petclinic is started with a maximum heap size of 48 MB (-Xmx48M):

As you can see, non-heap memory accounts for most of the JVM's memory usage, with the heap making up only one sixth of the total. In this case the heap is approximately 44 MB (of which 33 MB was in use immediately after garbage collection). Non-heap memory usage totaled 223 MB.

Areas of native memory

Compressed class space (compressed class pointer area): used to store information about loaded classes. Limited by the MaxMetaspaceSize parameter. A function of the number of classes that have been loaded.

Translator's Note

For some reason, the author writes about “Compressed class space” rather than the entire “Class” area. The Compressed class space is part of the Class area, and the MaxMetaspaceSize parameter limits the size of the entire Class area, not just the Compressed class space. To limit the Compressed class space, the CompressedClassSpaceSize parameter is used.

Hence:
If UseCompressedOops is turned on and UseCompressedClassesPointers is used, then two logically different areas of native memory are used for class metadata ...
A region is allocated for these compressed class pointers (the 32-bit offsets). The size of the region can be set with CompressedClassSpaceSize and is 1 gigabyte (GB) by default ...
The MaxMetaspaceSize applies to the sum of the committed compressed class space and the space for the other class metadata.


  • Thread: the memory used by threads in the JVM. A function of the number of running threads.
  • Code cache: the memory used by the JIT compiler for the code it generates. A function of the number of classes that have been loaded. Limited by the ReservedCodeCacheSize parameter. You can reduce JIT memory usage, for example, by disabling tiered compilation.
  • GC (garbage collector): stores data used by the garbage collector. Depends on which garbage collector is used.
  • Symbol: stores symbols such as field names, method signatures, and interned strings. Excessive symbol memory usage may indicate that too many strings are being interned.
  • Internal: stores other internal data that does not fall into any of the other areas.
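Alongside NMT, the JVM's standard management API offers a coarser heap vs. non-heap split that needs no extra flags. A minimal sketch (class name is my own, not from the article):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Reads heap and non-heap usage from inside the running JVM.
// Note: the non-heap figure here covers Metaspace, code cache, and
// compressed class space, but not everything NMT reports (e.g. thread
// stacks and GC structures are not included).
public class MemoryAreas {
    public static void main(String[] args) {
        MemoryMXBean bean = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = bean.getHeapMemoryUsage();
        MemoryUsage nonHeap = bean.getNonHeapMemoryUsage();
        System.out.printf("Heap used:     %d MB%n", heap.getUsed() / (1024 * 1024));
        System.out.printf("Non-heap used: %d MB%n", nonHeap.getUsed() / (1024 * 1024));
    }
}
```

Running this in a freshly started application already shows non-trivial non-heap usage, which matches the Petclinic observation above.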


Compared to the heap, non-heap memory changes less under load. Once the application has loaded all the classes it will use and the JIT is fully warmed up, things settle into a steady state. To see a decrease in Compressed class space usage, the class loader that loaded the classes must itself be collected by the garbage collector. This was common in the past, when applications were deployed to servlet containers or application servers (the application's class loader was garbage-collected when the application was undeployed), but it rarely happens with modern approaches to application deployment.

Configuring the JVM

Configuring the JVM to use the available RAM efficiently is not easy. If you start the JVM with -Xmx16M and expect no more than 16 MB of memory to be used, you are in for an unpleasant surprise.

An interesting area of JVM memory is the JIT code cache. By default, the HotSpot JVM will use up to 240 MB for it. If the code cache is too small, the JIT may run out of space for its output, and performance will suffer. If the cache is too large, memory may be wasted. When sizing the code cache, it is important to consider its effect on both memory usage and performance.
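As a sketch of the trade-off described above (the jar name is a placeholder; verify flag behavior against your JVM version):

```shell
# Cap the JIT code cache at 64 MB and stop tiered compilation at the
# C1 compiler, trading some peak throughput for a smaller footprint.
java -XX:ReservedCodeCacheSize=64M -XX:TieredStopAtLevel=1 -jar app.jar
```

Stopping at tier 1 means methods are compiled quickly by C1 but never recompiled by the more aggressive C2 compiler, which both shrinks the code cache requirement and speeds up startup at the cost of long-run performance.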

When running in a Docker container, recent versions of Java are aware of container memory limits and try to size JVM memory accordingly. Unfortunately, the result is often that a lot of memory is allocated outside the heap and not enough inside it. Suppose you have an application running in a container with 2 processors and 512 MB of available memory. You want to handle a larger workload, so you increase the processors to 4 and the memory to 1 GB. As discussed above, heap size usually varies with load, while non-heap memory changes much less. Therefore, we would expect most of the additional 512 MB to go to the heap to handle the increased load. Unfortunately, by default the JVM will not do this and will split the additional memory more or less evenly between heap and non-heap memory.
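One way to steer this yourself is to raise the heap's share of the detected container memory explicitly. A sketch (flag available since JDK 10, backported to 8u191; the jar name and percentage are illustrative):

```shell
# By default the heap gets roughly 25% of detected container memory.
# Give it 50% instead, so most of any added container memory goes to the heap.
java -XX:MaxRAMPercentage=50.0 -jar app.jar
```

Using a percentage rather than a fixed -Xmx value lets the same image scale its heap when the container's memory limit changes.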

Fortunately, the CloudFoundry team has deep knowledge of JVM memory allocation. If you deploy applications to CloudFoundry, the buildpack will automatically apply this knowledge for you. If you are not using CloudFoundry, or would like to understand more about how to configure the JVM, it is recommended to read the description of version three of the Java buildpack's memory calculator.

What does this mean for Spring

The Spring team spends a lot of time thinking about performance and memory usage, considering memory use both on and off the heap. One way to limit off-heap memory usage is to keep parts of the framework as generic as possible. An example of this is using reflection to create and inject dependencies into your application's beans. Thanks to reflection, the amount of framework code remains constant regardless of how many beans your application has. To optimize startup time, we use an on-heap cache, clearing it once startup is complete. Heap memory can easily be reclaimed by the garbage collector, giving your application more available memory.
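To make the point concrete, here is a deliberately tiny, hypothetical sketch of the idea (not Spring's actual implementation): one generic piece of reflective code can instantiate and wire any bean class, so framework code size does not grow with the number of beans.

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;

// Hypothetical mini-injector: the same two methods handle every bean,
// however many bean classes the application defines.
public class TinyInjector {

    static class Repository { }

    static class Service {
        Repository repository; // wired reflectively below
    }

    // Creates an instance of any class via its no-arg constructor.
    static Object instantiate(Class<?> type) throws ReflectiveOperationException {
        Constructor<?> ctor = type.getDeclaredConstructor();
        ctor.setAccessible(true);
        return ctor.newInstance();
    }

    // Sets a named field on the target, regardless of the target's type.
    static void inject(Object target, String fieldName, Object value)
            throws ReflectiveOperationException {
        Field field = target.getClass().getDeclaredField(fieldName);
        field.setAccessible(true);
        field.set(target, value);
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        Service service = (Service) instantiate(Service.class);
        inject(service, "repository", instantiate(Repository.class));
        System.out.println(service.repository != null); // prints "true"
    }
}
```

Adding a third or thirtieth bean class requires no new framework code, only more (heap-resident, collectible) metadata, which is exactly the trade-off the paragraph above describes.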
