Kafka consumer exceptions and offset commits

I have been doing some POC work with Spring Kafka. Specifically, I want to explore best practices for handling errors while consuming messages from Kafka.

I was wondering if anyone could help with the following:

  1. Sharing best practices around what a Kafka consumer should do when a failure occurs
  2. Helping me understand how AckMode RECORD works, and how to prevent the offset from being committed to Kafka when the listener method throws an exception.

A code sample for question 2 is given below.

Given that AckMode is set to RECORD, which according to the documentation:

> Commit the offset when the listener returns after processing the record.

I would have thought the offset would not be incremented if the listener method threw an exception. However, that was not the case when I tested it with the code/config/command combination below. The offset still gets updated, and the next message continues to be processed.

My configuration:

    private Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092");
        props.put(ProducerConfig.RETRIES_CONFIG, 0);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return props;
    }

    @Bean
    ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
        factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.RECORD);
        return factory;
    }
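The consumerConfigs() method referenced above is not shown in the question. A minimal hypothetical sketch (the broker address, group id, and deserializers here are assumptions, mirroring the producer config) might look like:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.IntegerDeserializer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerConfigSketch {
    private Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.0.1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
        // Container-managed commits (e.g. AckMode.RECORD) require
        // Kafka's own auto-commit to be disabled
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, IntegerDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return props;
    }
}
```

Note that enable.auto.commit=false matters here: with auto-commit on, the Kafka client itself commits offsets periodically, regardless of the container's AckMode.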

My code:

    @Component
    public class KafkaMessageListener {
        @KafkaListener(topicPartitions = {@TopicPartition(topic = "my-replicated-topic", partitionOffsets = @PartitionOffset(partition = "0", initialOffset = "0", relativeToCurrent = "true"))})
        public void onReplicatedTopicMessage(ConsumerRecord<Integer, String> data) throws InterruptedException {
            throw new RuntimeException("Oops!");
        }
    }

The command used to verify the offsets:

    bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group test-group

I am using kafka_2.12-0.10.2.0 and org.springframework.kafka:spring-kafka:1.1.3.RELEASE.


Answer 1

The container (via ContainerProperties) has an ackOnError property, which is true by default...

    /**
     * Set whether or not the container should commit offsets (ack messages) where the
     * listener throws exceptions. This works in conjunction with {@link #ackMode} and is
     * effective only when the kafka property {@code enable.auto.commit} is {@code false};
     * it is not applicable to manual ack modes. When this property is set to {@code true}
     * (the default), all messages handled will have their offset committed. When set to
     * {@code false}, offsets will be committed only for successfully handled messages.
     * Manual acks will always be applied. Bear in mind that, if the next message is
     * successfully handled, its offset will be committed, effectively committing the
     * offset of the failed message anyway, so this option has limited applicability.
     * Perhaps useful for a component that starts throwing exceptions consistently;
     * allowing it to resume when restarted from the last successfully processed message.
     * @param ackOnError whether the container should acknowledge messages that throw
     * exceptions.
     */
    public void setAckOnError(boolean ackOnError) {
        this.ackOnError = ackOnError;
    }

However, keep in mind that if the next message succeeds, its offset will be committed anyway, which effectively commits the offset of the failed message too.
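Applied to the factory bean from the question, disabling the commit-on-error behavior is a one-line change on the container properties (subject to the caveat above about the next successful message):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.AbstractMessageListenerContainer;

public class ListenerFactoryConfigSketch {
    @Bean
    ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
        factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.RECORD);
        // Do not commit the offset of a record whose listener threw an exception
        factory.getContainerProperties().setAckOnError(false);
        return factory;
    }
}
```

(consumerConfigs() is the same consumer-property method the question's factory already uses.)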

EDIT

Starting with version 2.3, ackOnError is now false by default.
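On those newer versions, failed records can also be retried in place before being given up on, by configuring a SeekToCurrentErrorHandler. A sketch, assuming spring-kafka 2.3+ (where AckMode lives on ContainerProperties and the error handler accepts a BackOff):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.SeekToCurrentErrorHandler;
import org.springframework.util.backoff.FixedBackOff;

public class RetryingFactoryConfigSketch {
    @Bean
    ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.RECORD);
        // Re-seek the failed record and redeliver it up to 2 more times
        // (no delay between attempts) before moving on
        factory.setErrorHandler(new SeekToCurrentErrorHandler(new FixedBackOff(0L, 2L)));
        return factory;
    }
}
```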

