Kafka 8 and memory: there is insufficient memory for the Java Runtime Environment to continue

2022-09-01 08:12:48

I'm running a DigitalOcean instance with 512 MB of memory, and I get the following error from Kafka. I'm not a Java-savvy developer. How can I tune Kafka to get by on a small amount of RAM? This is a development server, and I don't want to pay more per hour for a bigger machine.

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# //hs_err_pid6500.log
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000bad30000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
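For context, the failed allocation in the log is about 940 MB, which is larger than the whole 512 MB instance. A quick diagnostic sketch, assuming a Linux shell (the hs_err path is the one named in the log above):

```shell
# Total vs. available memory in megabytes: the failed ~940 MB
# allocation cannot fit on a 512 MB instance.
free -m

# Inspect the crash report named in the log above, if still present
if [ -f //hs_err_pid6500.log ]; then
    head -n 20 //hs_err_pid6500.log
fi
```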

Answer 1

You can adjust the JVM heap size by editing kafka-server-start.sh and zookeeper-server-start.sh, for example:

export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"

The -Xms parameter specifies the minimum heap size. To at least get the server to start, try changing it to use less memory. Since you only have 512M, you should also change the maximum heap size (-Xmx):

export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"
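A minimal sketch of applying this without editing the scripts, assuming a standard Kafka installation layout: kafka-server-start.sh applies its 1G default only when KAFKA_HEAP_OPTS is unset, so exporting the variable before launching is enough.

```shell
# Shrink the broker heap; kafka-server-start.sh only sets its
# own 1G default when KAFKA_HEAP_OPTS is not already set.
export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"

# Then start the broker as usual (path illustrative):
#   bin/kafka-server-start.sh config/server.properties
echo "$KAFKA_HEAP_OPTS"
```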

I'm not sure what Kafka's minimum memory requirement is with the default configuration; you may also need to reduce the message size in Kafka to get it running.


Answer 2

Area: hotspot/gc

Synopsis

Crashes due to failure to allocate large pages.

On Linux, failures when allocating large pages can lead to crashes. When running JDK 7u51 or later versions, the issue can be recognized in two ways:

    Before the crash happens, one or more lines similar to the following example will have been printed to the log:

    os::commit_memory(0x00000006b1600000, 352321536, 2097152, 0) failed;
    error='Cannot allocate memory' (errno=12); Cannot allocate large pages, 
    falling back to regular pages

    If a file named hs_err is generated, it will contain a line similar to the following example:

    Large page allocation failures have occurred 3 times

The problem can be avoided by running with large page support turned off, for example, by passing the "-XX:-UseLargePages" option to the java binary.
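For a Kafka broker, one way to pass this flag is via KAFKA_JVM_PERFORMANCE_OPTS, which kafka-run-class.sh forwards to the java binary. A hedged sketch; note that setting the variable replaces Kafka's built-in defaults (such as -XX:+UseG1GC), so add back any options you still want:

```shell
# Disable large-page support for the broker JVM. This variable
# overrides Kafka's default performance options entirely.
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:-UseLargePages"

# Then start the broker as usual (path illustrative):
#   bin/kafka-server-start.sh config/server.properties
echo "$KAFKA_JVM_PERFORMANCE_OPTS"
```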
