Node.js: managing memory available to applications running in containers

Original author: Ravali Yatham
When a Node.js application runs in a Docker container, traditional memory settings do not always work as expected. This article, a translation of which we publish today, looks into why that is so. It also offers practical recommendations for managing the memory available to Node.js applications running in containers.

Review of recommendations

Suppose a Node.js application runs in a container with a memory limit. With Docker, that limit can be set using the --memory option; container orchestration systems offer similar settings. In this case, it is recommended to start the Node.js application with the --max-old-space-size option. This tells the platform how much memory is available to it; the value should be less than the limit set at the container level.

When a Node.js application runs inside a container, set the container's memory limit according to the peak amount of memory the application actively uses. This applies when the container memory limits can be configured.
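To find that peak, you have to measure it. Here is a minimal sketch of tracking the peak active memory (RSS) of a running process via process.memoryUsage(); the 100 ms sampling interval and the reporting format are illustrative choices, not part of any standard tooling:

```javascript
// Track the highest RSS value observed while the process runs.
let peakRssBytes = 0;

function samplePeakRss() {
  const { rss } = process.memoryUsage();
  if (rss > peakRssBytes) peakRssBytes = rss;
  return peakRssBytes;
}

// In a real application this would run alongside the workload.
const sampler = setInterval(samplePeakRss, 100);
sampler.unref(); // don't keep the process alive just for sampling

// Report the peak, in megabytes, when the process exits.
process.on('exit', () => {
  console.log(`peak RSS: ${Math.round(peakRssBytes / (1024 * 1024))} MB`);
});

samplePeakRss(); // take one sample immediately
```

The measured peak is then a starting point for choosing the container's --memory value.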

Now let's talk about the problem of using memory in containers in more detail.

Docker Memory Limit

By default, containers have no resource limits and can use as much memory as the operating system allows. The docker run command has command-line options that let you limit the memory and processor resources available to a container.

The container launch command might look like this:

docker run --memory <x><y> --interactive --tty <image> bash

Please note the following:

  • <x> is the limit on the amount of memory available to the container, expressed in the unit of measure <y>.
  • <y> can take the value b (bytes), k (kilobytes), m (megabytes), or g (gigabytes).

Here is an example of a container launch command:

docker run --memory 1000000b --interactive --tty <image> bash

Here, the memory limit is set to 1,000,000 bytes.

To check the memory limit set at the container level, you can run the following command inside the container:

cat /sys/fs/cgroup/memory/memory.limit_in_bytes

Now let's look at how the system behaves when a Node.js application is started with the --max-old-space-size key and the specified memory limit matches the limit set at the container level.

The "old space" in the key name is one of the fragments of the heap managed by V8 (the place where "old" JavaScript objects end up). This key, if we skip the details covered below, controls the maximum heap size. Details on Node.js command-line options can be found in the official documentation.

In general, when an application tries to use more memory than is available in the container, its operation is terminated.

In the following example (the application file is called test-fatal-error.js), MyRecord objects are pushed into an array every 10 milliseconds. This leads to uncontrolled heap growth, simulating a memory leak.

'use strict';
const list = [];
// Push a large object into the array every 10 ms, simulating a leak.
setInterval(() => {
  const record = new MyRecord();
  list.push(record);
}, 10);
function MyRecord() {
  var x = 'hii';
  this.name = x.repeat(10000000);
  this.id = x.repeat(10000000);
  this.account = x.repeat(10000000);
}
// Log memory usage every 100 ms.
setInterval(() => {
  console.log(process.memoryUsage());
}, 100);

Please note that all the examples of programs that we will be discussing here are placed in the Docker image, which can be downloaded from the Docker Hub:

docker pull ravali1906/dockermemory

You can use this image for independent experiments.

In addition, you can pack the application in a Docker container, collect the image and run it with the memory limit:

docker run --memory 512m --interactive --tty ravali1906/dockermemory bash

Here, ravali1906/dockermemory is the name of the image.

Now you can start the application by specifying a memory limit for it that exceeds the container limit:

$ node --max_old_space_size=1024 test-fatal-error.js
{ rss: 550498304,
heapTotal: 1090719744,
heapUsed: 1030627104,
external: 8272 }

Here, the --max_old_space_size key specifies the memory limit in megabytes. The process.memoryUsage() method provides information about memory usage; its values are expressed in bytes.
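Byte values like the ones above are hard to read at a glance. A small helper (the function name is my own, not part of any API) that reports the same process.memoryUsage() fields in megabytes:

```javascript
// Convert the main process.memoryUsage() fields from bytes to megabytes.
function memoryUsageMb() {
  const usage = process.memoryUsage();
  const toMb = (bytes) => Math.round(bytes / (1024 * 1024));
  return {
    rss: toMb(usage.rss),
    heapTotal: toMb(usage.heapTotal),
    heapUsed: toMb(usage.heapUsed),
    external: toMb(usage.external),
  };
}

console.log(memoryUsageMb());
```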

At some point the application is forcibly terminated. This happens when the amount of memory it uses crosses a certain boundary. What is that boundary? What memory limits are we talking about?

Expected behavior of an application started with the --max-old-space-size key

By default, the maximum heap size in Node.js (up to version 11.x) is 700 MB on 32-bit platforms and 1400 MB on 64-bit ones. You can read about how these values are set in the Node.js source code.

In theory, if you use --max-old-space-size to set a memory limit that exceeds the container's memory limit, you might expect the application to be terminated by the Linux kernel's OOM Killer safety mechanism.

In reality, this may not happen.

Actual behavior of an application started with the --max-old-space-size key

Immediately after start, the application does not request all of the memory allowed by the --max-old-space-size limit. The JavaScript heap grows with the needs of the application. How much memory the application actually uses can be judged from the heapUsed field of the object returned by process.memoryUsage(); this is the memory allocated in the heap for objects.

From this we might conclude that the application will be forcibly terminated once its heap grows past the limit set with the --memory key when the container was started.

But in reality this may not happen either.

When profiling resource-intensive Node.js applications that run in containers with a given memory limit, the following patterns can be observed:

  1. The OOM Killer is triggered much later than the moment when heapTotal and heapUsed significantly exceed the memory limits.
  2. The OOM Killer does not respond to the limits being exceeded at all.

An explanation of the behavior of Node.js applications in containers

A container keeps track of one important metric of the applications running in it: RSS (resident set size). This metric represents a portion of the application's virtual memory: the part of the memory allocated to the application that is actually active.

Not all memory allocated to an application is necessarily active. "Allocated memory" is not necessarily physically backed until the process actually starts using it. Also, in response to memory requests from other processes, the operating system can swap out inactive parts of the application's memory to the page file and hand the freed space to other processes. When the application needs those pieces of memory again, they are read back from the page file into physical memory.

The RSS metric indicates the amount of the application's memory that is active and resident in its address space. It is this metric that drives the decision to forcibly terminate the application.


▍ Example No. 1. An application that allocates memory for a buffer

The following example, buffer_example.js, shows a program that allocates memory for a buffer:

const buf = Buffer.alloc(+process.argv[2] * 1024 * 1024)
console.log(Math.round(buf.length / (1024 * 1024)))
console.log(Math.round(process.memoryUsage().rss / (1024 * 1024)))

In order for the amount of memory allocated by the program to exceed the limit set when the container was launched, first run the container with the following command:

docker run --memory 1024m --interactive --tty ravali1906/dockermemory bash

After that, run the program:

$ node buffer_example.js 2000

As you can see, the system did not terminate the program, even though the memory allocated by it exceeds the container limit. This is because the program does not actually touch all of the allocated memory. RSS stays very small and does not exceed the container's memory limit.
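The effect is easy to reproduce on its own. A sketch of the same idea: reserving a large buffer grows the process's virtual size, but pages that are never written to mostly stay out of RSS. (Unlike the example above, this sketch uses Buffer.allocUnsafe, which skips zero-filling, so the pages are guaranteed to remain untouched.)

```javascript
const sizeMb = 256;
const rssBefore = process.memoryUsage().rss;
const buf = Buffer.allocUnsafe(sizeMb * 1024 * 1024); // reserve, don't touch
const rssAfter = process.memoryUsage().rss;
const grownMb = Math.round((rssAfter - rssBefore) / (1024 * 1024));
console.log(`allocated ${sizeMb} MB, RSS grew by about ${grownMb} MB`);
```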

▍ Example No. 2. Application populating the buffer with data

In the following example, buffer_example_fill.js, memory is not just allocated but also filled with data:

const buf = Buffer.alloc(+process.argv[2] * 1024 * 1024,'x')
console.log(Math.round(buf.length / (1024 * 1024)))
console.log(Math.round(process.memoryUsage().rss / (1024 * 1024)))

Run the container:

docker run --memory 1024m --interactive --tty ravali1906/dockermemory bash

After that, run the application:

$ node buffer_example_fill.js 2000

It turns out that even now the application does not terminate! Why? When the amount of active memory reaches the limit set at container start, and there is room in the page file, some of the oldest pages of process memory are moved to the page file, and the freed memory is made available to the same process. By default, Docker allocates page-file space equal to the memory limit set with the --memory flag. With that in mind, the process effectively has 2 GB of memory: 1 GB of active memory and 1 GB in the page file. Because part of the application's memory can be temporarily moved to the page file, the RSS metric stays within the container limit, and the application keeps running.

▍ Example No. 3. An application that fills a buffer with data, running in a container that does not use a page file

Here is the code we will experiment with here (this is the same file buffer_example_fill.js):

const buf = Buffer.alloc(+process.argv[2] * 1024 * 1024,'x')
console.log(Math.round(buf.length / (1024 * 1024)))
console.log(Math.round(process.memoryUsage().rss / (1024 * 1024)))

This time, run the container with the swap behavior configured explicitly:

docker run --memory 1024m --memory-swap=1024m --memory-swappiness=0 --interactive --tty ravali1906/dockermemory bash

Launch the application:

$ node buffer_example_fill.js 2000

This time you see the message Killed. When the --memory-swap key equals the --memory key, the container is told not to use the page file. In addition, by default, the kernel of the host operating system can swap out a certain amount of anonymous memory pages used by the container. Setting the --memory-swappiness flag to 0 disables this as well. As a result, no swap is used inside the container, and the process is terminated as soon as its RSS exceeds the container's memory limit.

General recommendations

When a Node.js application is launched with a --max-old-space-size value that exceeds the memory limit set at container start, it may seem that Node.js "ignores" the container limit. But, as the previous examples show, the obvious reason for this behavior is that the application simply does not use the entire heap volume allowed by the --max-old-space-size flag.

Remember that an application will not always behave the same way when it uses more memory than is available in the container. Why? Because the process's active memory (RSS) depends on many external factors beyond the application's control: the load on the system and the characteristics of the environment, such as the behavior of the application itself, the degree of parallelism in the system, the operating system scheduler, the garbage collector, and so on. These factors can also change from run to run.

Recommendations for setting the Node.js heap size when you can control it but not the container-level memory limit

  • Run a minimal Node.js application in the container and measure its static RSS (in my case, for Node.js 10.x, this is about 20 MB).
  • The Node.js heap contains not only old_space but other areas as well (such as new_space, code_space, and so on). With the platform's default configuration, you should therefore expect the program to need about 20 MB more memory. If the default settings have been changed, those changes must also be taken into account.
  • Now subtract the obtained value (say 40 MB) from the amount of memory available in the container. What remains is a value that can safely be passed as --max-old-space-size without fear of the program being terminated for running out of memory.
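The steps above boil down to one subtraction. A worked sketch of the calculation, where the 40 MB overhead figure is the illustrative estimate from the text (static RSS plus non-old-space heap areas), to be measured for your own configuration:

```javascript
// Derive a safe --max-old-space-size value (in MB) from a container limit,
// leaving headroom for the process's non-heap memory.
function maxOldSpaceSizeMb(containerLimitMb, overheadMb = 40) {
  const heapMb = containerLimitMb - overheadMb;
  if (heapMb <= 0) throw new Error('container limit is too small');
  return heapMb;
}

// For a container started with --memory 512m:
console.log(maxOldSpaceSizeMb(512)); // 472 -> node --max-old-space-size=472
```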

Recommendations for setting the container memory limit when you can control it but not the Node.js application's parameters

  • Run the application in conditions that allow you to find out its peak memory consumption.
  • Analyze the RSS metric. Along with the process.memoryUsage() method, the Linux top command can come in handy here.
  • Provided nothing other than the application will run in the container, the obtained value can be used as the container memory limit. To be safe, it is recommended to increase it by at least 10%.
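A sketch of this sizing rule as a one-line calculation (the 10% margin is the minimum suggested above; choose a larger factor for spiky workloads):

```javascript
// Take the measured peak RSS and add a safety margin to get the
// container memory limit, rounded to whole megabytes.
function containerLimitMb(peakRssMb, safetyFactor = 1.1) {
  return Math.round(peakRssMb * safetyFactor);
}

console.log(containerLimitMb(300)); // peak of 300 MB -> limit of 330 MB
```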


In Node.js 12.x, some of the problems discussed here are solved by adaptive heap sizing, which is performed according to the amount of available RAM. This mechanism also works when Node.js applications run in containers. But the configuration may differ from the defaults, for example when the application is launched with the --max_old_space_size key. For such cases, everything above remains relevant. All this suggests that anyone running Node.js applications in containers should treat memory settings carefully and responsibly. In addition, knowing the default memory limits, which are rather conservative, makes it possible to improve application performance by changing them deliberately.

Dear readers! Have you run into memory problems when running Node.js applications in Docker containers?
