Running Out Of Memory-Mapped Files with Apache Kafka

This was a fun one to track down. Long story short: because of the large number of partitions (and topics) we had, the number of Kafka segment files exceeded the number of memory-mapped files available to the JVM, which then crashed. However, it crashed only after checksumming and mapping terabytes of data, which takes more than an hour. It's always fun to try to fix things that take over an hour to test.

The error the JVM returns when this happens is a general one: “There is insufficient memory for the Java Runtime Environment to continue.” Of course I tried increasing the available heap space as the error suggests (multiple times), but that didn’t fix the problem. In the end, a coworker was looking at the mmap man page, and it clicked that we’re probably running out of available mapped files (thanks, Tye).
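In hindsight, one way to confirm this diagnosis without waiting for a crash: every memory mapping a process holds appears as a line in /proc/&lt;pid&gt;/maps, so counting those lines and comparing against vm.max_map_count shows how close the broker is to the ceiling. A rough sketch (the `kafka.Kafka` pgrep pattern is an assumption about how your broker process is launched):

```shell
# Find the broker's PID; kafka.Kafka is the usual main class for the
# broker process (an assumption -- adjust the pattern for your setup).
pid=$(pgrep -f kafka.Kafka)

# Count the broker's current memory mappings (one per line in maps).
wc -l "/proc/${pid}/maps"

# Compare against the kernel's per-process mapping ceiling.
sysctl vm.max_map_count
```

If the first number is anywhere near the second, you have found your problem.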

So, this is here with the hope that someone in need will find it, and they won’t have to spend hours tracking it down.

The Error
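The original crash log isn't reproduced here, but the hs_err file the JVM writes on this failure opens with the generic abort message quoted above, along the lines of (byte counts and details will vary):

```
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map ... bytes for committing reserved memory.
```

The misleading part is that "insufficient memory" suggests heap exhaustion, when the failing call is actually mmap hitting the kernel's per-process mapping limit.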

The Fix

Allow the JVM to use more mmap pointers by increasing the value of vm.max_map_count:
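A value like 262144 is a common starting point, but the right number depends on how many segment files your brokers hold; treat it as a sketch, not a Kafka-blessed default:

```shell
# Raise the kernel's per-process memory-map limit. Takes effect
# immediately, but does not survive a reboot on its own.
sudo sysctl -w vm.max_map_count=262144
```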

Add the vm.max_map_count setting to /etc/sysctl.conf to set the value permanently. You can verify the setting is correct after a reboot with:
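Something along these lines (again assuming 262144 as the target value):

```shell
# Persist the setting across reboots by appending it to /etc/sysctl.conf.
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf

# Confirm the value the kernel is actually using.
sysctl vm.max_map_count
```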

Did this fix your problem?

Please leave a comment about your experience. Confirmation about problems like this can go a long way toward helping other people in the future.


Featured image photo credit

Bartosz Kwitkowski
