2. Spring Cloud Stream: A Production-Ready Kafka Integration Example

Introduction

  • Use Spring Cloud Stream simply, efficiently, and reliably.
  • Producer: how to ensure messages are not lost, how to dynamically send messages to multiple topics with automatic topic creation, and how to handle messages that fail to send.
  • Consumer: how to ensure messages are not lost, how to consume multiple topics, and how to consume in batches.

Versions

  • kafka server version: 2.5.x
  • kafka client version: 2.5.1
  • spring boot version: 2.3.12.RELEASE
  • spring cloud version: Hoxton.SR12
  • spring cloud stream version: 3.0.13.RELEASE
  • spring cloud stream binder kafka version: 3.0.13.RELEASE
  • java version: 1.8

For complete code examples for other versions, visit github.com/codebaorg/S...

If this article helped you, feel free to comment, like, and share.

Dependencies

Maven

```xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.3.12.RELEASE</version>
    <relativePath/>
</parent>

<properties>
    <spring-cloud.version>Hoxton.SR12</spring-cloud.version>
</properties>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-stream-binder-kafka</artifactId>
    </dependency>

</dependencies>
```

Project and Configuration

Full code: github.com/codebaorg/S...

application.yaml Configuration

An example of the Spring Cloud Stream Kafka configuration follows; replace `localhost:9092`, `test-prod.*-topic,foo`, `test-prod-group`, `min-partition-count`, and `replication-factor` with your own values:

```yaml
spring:
  cloud:
    stream:
      default:
        producer:
          error-channel-enabled: true # enable the default channel that collects producer send errors

      kafka:
        binder:
          brokers: localhost:9092
          auto-create-topics: true # enable automatic topic creation
          min-partition-count: 3 # number of partitions per topic
          replication-factor: 3 # number of replicas per topic, counting leader and follower replicas together
          configuration:
            acks: -1 # see the configuration notes below
            reconnect.backoff.max.ms: 120000 # see the configuration notes below

        bindings:
          my-prod-input:
            consumer:
              auto-commit-offset: false # disable automatic offset commits on the consumer
              destination-is-pattern: true # treat the destination as a regex pattern for topic matching

      bindings:
        my-prod-input:
          destination: test-prod.*-topic,foo # consume multiple topics, separated by commas
          group: test-prod-group
          consumer:
            batch-mode: true # enable batch consumption
```

The configuration notes below are based on Kafka 2.5.x; official documentation: kafka.apache.org/documentati...

| Configuration | Default | Description |
| --- | --- | --- |
| acks | 1 | Specifies how many replicas of a partition must have received a message before the producer considers the write successful; it controls the durability of sent records. |
| retries | 2147483647 | If set greater than zero, the client resends any record whose send fails with a potentially transient error. Note this retry is no different from the client resending the record after receiving the error. Allowing retries without setting max.in.flight.requests.per.connection to 1 can change record ordering: if two batches are sent to one partition, and the first fails and is retried while the second succeeds, records from the second batch may appear first. Also note that produce requests fail before the retry count is exhausted if the timeout configured by delivery.timeout.ms expires first. In general, prefer leaving this unset and using delivery.timeout.ms to control retry behavior. |
| retry.backoff.ms | 100 | The time to wait before retrying a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. |
| reconnect.backoff.max.ms | 1000 | The maximum time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the per-host backoff increases exponentially with each consecutive connection failure, up to this maximum. After the backoff increase is calculated, 20% random jitter is added to avoid connection storms. |
| request.timeout.ms | 30000 | Controls the maximum time the client waits for a response to a request. If no response arrives before the timeout, the client resends the request if necessary, or fails it once retries are exhausted. This should be larger than replica.lag.time.max.ms (a broker configuration) to reduce the chance of message duplication caused by unnecessary producer retries. |
| linger.ms | 0 | The producer groups records that arrive between request transmissions into a single batched request. Normally this only happens under load, when records arrive faster than they can be sent, but in some cases the client may want to reduce the number of requests even under moderate load. This setting does so by adding a small artificial delay: rather than sending a record immediately, the producer waits up to the given delay so other records can be batched together, analogous to Nagle's algorithm in TCP. It is an upper bound on the batching delay: once batch.size worth of records accumulate for a partition they are sent immediately regardless of this setting, but with fewer bytes accumulated the producer "lingers" for the specified time waiting for more records. The default is 0 (no delay). Setting linger.ms=5, for example, reduces the number of requests sent but adds up to 5 ms of latency to records sent in the absence of load. |
| delivery.timeout.ms | 120000 | An upper bound on the time to report success or failure after send() returns. It limits the total time a record may be delayed before sending, the time to await acknowledgement from the broker, and the time allowed for retriable send failures. The producer may report a send failure earlier than this if an unrecoverable error occurs, retries are exhausted, or the record is added to a batch that reached an earlier delivery expiration deadline. The value should be greater than or equal to the sum of request.timeout.ms and linger.ms. |

acks is one of the most important producer client parameters. It accepts the following settings:

  • acks=0. The producer does not wait for any acknowledgment from the server at all. The record is added to the socket buffer and immediately considered sent. There is no guarantee the server received the record, and the retries configuration has no effect (the client generally will not learn of any failure).
  • acks=1. This is the default. After the producer sends a message, it receives a success response from the server as soon as the partition's leader replica has written the message. If the message cannot be written to the leader, for example while the leader has crashed and a new one is being elected, the producer receives an error response and can resend the message to avoid losing it. However, if the leader acknowledges the write and then crashes before follower replicas have fetched the message, the message is still lost, because the newly elected leader does not have it. Setting acks to 1 is therefore a compromise between message durability and throughput.
  • acks=-1 or acks=all. After sending a message, the producer must wait until all in-sync replicas have written the message before it receives a success response from the server.
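
Everything under spring.cloud.stream.kafka.binder.configuration is passed through to the underlying kafka-clients producer. For reference, here is a minimal sketch, using raw kafka-clients, of the reliability-related settings discussed above; the concrete values (and the String serializers) are illustrative assumptions, not recommendations:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ReliableProducerConfigSketch {

    public static KafkaProducer<String, String> buildProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Durability first: wait for all in-sync replicas to acknowledge each write.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        // Let delivery.timeout.ms bound the total retry window instead of tuning retries;
        // it must be >= request.timeout.ms + linger.ms.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 120000);
        props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 0);

        // Cap the exponential reconnect backoff so a dead broker is not hammered.
        props.put(ProducerConfig.RECONNECT_BACKOFF_MAX_MS_CONFIG, 120000);

        return new KafkaProducer<>(props);
    }
}
```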

Producer: Sending Messages

Suppose the business message entity is the following MyMessage class:

```java
import java.util.Objects;

public class MyMessage {
    private String foo;
    private Integer bar;

    public MyMessage(String foo, Integer bar) {
        this.foo = foo;
        this.bar = bar;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        MyMessage myMessage = (MyMessage) o;
        return Objects.equals(foo, myMessage.foo) && Objects.equals(bar, myMessage.bar);
    }

    @Override
    public int hashCode() {
        return Objects.hash(foo, bar);
    }


    @Override
    public String toString() {
        return "MyMessage{" +
                "foo='" + foo + ''' +
                ", bar=" + bar +
                '}';
    }

    public String getFoo() {
        return foo;
    }

    public void setFoo(String foo) {
        this.foo = foo;
    }

    public Integer getBar() {
        return bar;
    }

    public void setBar(Integer bar) {
        this.bar = bar;
    }
}
```

  1. How do we ensure messages are not lost?

First, set acks to -1 on the producer. Any message that fails to send must be persisted and re-sent later (see the fallback handling and the re-send sketch below).

  2. How do we dynamically send messages to multiple topics and have topics created automatically?

Example sending code (this tests automatic topic creation, so automatic topic creation is disabled on the Kafka server side):

```java
@Autowired
private BinderAwareChannelResolver channelResolver;

public void prodTest() {
    // send message
    for (int i = 0; i < 100; i++) {
        String topic = "test-prod" + i + "-topic";
        final MyMessage myMessage = new MyMessage("hello world", 2024);
        final Message<MyMessage> message = MessageBuilder.withPayload(myMessage).build();
        channelResolver.resolveDestination(topic).send(message);
    }
}
```

  3. How do we handle messages that fail to send? Enable the spring.cloud.stream.default.producer.error-channel-enabled configuration shown above, then re-send the failed messages in whatever way fits your business (a sketch follows the handler below).

```java
import org.apache.kafka.clients.producer.ProducerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.integration.kafka.support.KafkaSendFailureException;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;


@Component
public class ErrorChannelHandler {

    private static final Logger LOGGER = LoggerFactory.getLogger(ErrorChannelHandler.class);
    
    // "errorChannel" is the default error channel name defined by the Spring Cloud Stream convention
    @StreamListener("errorChannel")
    public void errors(Message<?> error) {
        final Object payload = error.getPayload();
        LOGGER.info("errorChannel: {}", payload);
        
        if (payload instanceof KafkaSendFailureException) {
            KafkaSendFailureException failure = (KafkaSendFailureException) payload;
            final ProducerRecord<?, ?> record = failure.getRecord();
            final Object value = record.value();
            LOGGER.info("errorChannel value: {}", new String((byte[]) value));
        }
        // Typical handling: persist the failed message so it can be re-sent later
    }

}
```
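
To make the persist-and-re-send fallback concrete, here is a minimal sketch. FailedMessageStore is a hypothetical abstraction (back it with a database table, Redis, or similar); the handler above would call something like store.save(failure.getRecord().topic(), (byte[]) failure.getRecord().value()) in its KafkaSendFailureException branch, and a scheduled job replays what was stored:

```java
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.binding.BinderAwareChannelResolver;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical persistence abstraction for failed sends; implement it with a
// database table, Redis, or a local file, as your business requires.
interface FailedMessageStore {

    void save(String topic, byte[] payload);

    // Return up to `limit` persisted records, oldest first.
    List<FailedRecord> pollBatch(int limit);

    void delete(long id);

    class FailedRecord {
        long id;
        String topic;
        byte[] payload;
    }
}

@Component
class FailedMessageResender {

    @Autowired
    private FailedMessageStore store;

    @Autowired
    private BinderAwareChannelResolver channelResolver;

    // Replay persisted failures once a minute. A record is deleted only after
    // the channel accepts it again, so the worst case is a duplicate, not a loss.
    @Scheduled(fixedDelay = 60_000)
    public void resend() {
        for (FailedMessageStore.FailedRecord record : store.pollBatch(100)) {
            channelResolver.resolveDestination(record.topic)
                    .send(MessageBuilder.withPayload(record.payload).build());
            store.delete(record.id);
        }
    }
}
```

Note that @Scheduled requires @EnableScheduling on a configuration class, and that a crash between send() and delete() re-delivers the record, so downstream consumers should be idempotent.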

Testing the Producer Send-Failure Fallback

How to trigger the test: first call the test endpoint http://localhost:8080/send. While the loop is writing messages, stop the Kafka server, then wait for delivery.timeout.ms to elapse; the errorChannel keyword then shows up in the log, which means the timed-out messages have been delivered to the errorChannel.

```java
import org.codeba.scs.kafka.MyMessage;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.binding.BinderAwareChannelResolver;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.messaging.Message;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;


@RestController
public class TestController {

    @Autowired
    private BinderAwareChannelResolver channelResolver;

    @RequestMapping("/send")
    public void sendExceptionTest() throws InterruptedException {
        // first set wrong brokers address, then send message
        for (int i = 0; i < 100; i++) {
            String topic = "foo";
            final MyMessage myMessage = new MyMessage("bar", 2024);
            final Message<MyMessage> message = MessageBuilder.withPayload(myMessage).build();
            channelResolver.resolveDestination(topic).send(message);
            Thread.sleep(1000);
        }
    }

}
```

Log excerpt:

```text
2024-12-09 14:14:22.293  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:22.294  INFO --- : consumer message total:1
2024-12-09 14:14:23.223  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:23.224  INFO --- : consumer message total:2
2024-12-09 14:14:24.257  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:24.258  INFO --- : consumer message total:3
2024-12-09 14:14:25.255  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:25.256  INFO --- : consumer message total:4
2024-12-09 14:14:26.268  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:26.268  INFO --- : consumer message total:5
2024-12-09 14:14:27.263  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:27.263  INFO --- : consumer message total:6
2024-12-09 14:14:28.266  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:28.266  INFO --- : consumer message total:7
2024-12-09 14:14:29.274  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:29.275  INFO --- : consumer message total:8
2024-12-09 14:14:30.334  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:30.334  INFO --- : consumer message total:9
2024-12-09 14:14:31.294  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:31.295  INFO --- : consumer message total:10
2024-12-09 14:14:32.287  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:32.287  INFO --- : consumer message total:11
2024-12-09 14:14:33.313  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:33.314  INFO --- : consumer message total:12
2024-12-09 14:14:34.318  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:34.319  INFO --- : consumer message total:13
2024-12-09 14:14:35.316  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:35.316  INFO --- : consumer message total:14
2024-12-09 14:14:36.320  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:36.320  INFO --- : consumer message total:15
2024-12-09 14:14:37.341  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:37.341  INFO --- : consumer message total:16
2024-12-09 14:14:38.355  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:38.355  INFO --- : consumer message total:17
2024-12-09 14:14:39.348  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:39.348  INFO --- : consumer message total:18
2024-12-09 14:14:40.360  INFO --- : payload:{"foo":"bar","bar":2024} from topic:foo, partitionId:0, groupId:test-prod-group
2024-12-09 14:14:40.360  INFO --- : consumer message total:19
2024-12-09 14:14:40.752  INFO --- : [Consumer clientId=consumer-test-prod-group-3, groupId=test-prod-group] Group coordinator localhost:9092 (id: 2147483647 rack: null) is unavailable or invalid, will attempt rediscovery
2024-12-09 14:14:40.753  INFO --- : [Consumer clientId=consumer-test-prod-group-4, groupId=test-prod-group] Group coordinator localhost:9092 (id: 2147483647 rack: null) is unavailable or invalid, will attempt rediscovery
2024-12-09 14:14:40.753  INFO 57930 --- [ff-5eaa3fa91108] o.a.k.c.c.internals.AbstractCoordinator  : [Consumer clientId=consumer-anonymous.f444dbf9-a014-40d8-9fff-5eaa3fa91108-2, groupId=anonymous.f444dbf9-a014-40d8-9fff-5eaa3fa91108] Group coordinator localhost:9092 (id: 2147483647 rack: null) is unavailable or invalid, will attempt rediscovery
2024-12-09 14:14:40.756  INFO 57930 --- [container-0-C-1] o.a.kafka.clients.FetchSessionHandler    : [Consumer clientId=consumer-test-prod-group-4, groupId=test-prod-group] Error sending fetch request (sessionId=362465876, epoch=85) to node 0: {}.

org.apache.kafka.common.errors.DisconnectException: null

2024-12-09 14:14:40.756  INFO 57930 --- [container-0-C-1] o.a.kafka.clients.FetchSessionHandler    : [Consumer clientId=consumer-test-prod-group-3, groupId=test-prod-group] Error sending fetch request (sessionId=693320279, epoch=81) to node 0: {}.

org.apache.kafka.common.errors.DisconnectException: null

2024-12-09 14:14:40.756  INFO 57930 --- [container-0-C-1] o.a.kafka.clients.FetchSessionHandler    : [Consumer clientId=consumer-anonymous.f444dbf9-a014-40d8-9fff-5eaa3fa91108-2, groupId=anonymous.f444dbf9-a014-40d8-9fff-5eaa3fa91108] Error sending fetch request (sessionId=140878124, epoch=83) to node 0: {}.

org.apache.kafka.common.errors.DisconnectException: null

2024-12-09 14:14:40.845  WARN 57930 --- [container-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-anonymous.f444dbf9-a014-40d8-9fff-5eaa3fa91108-2, groupId=anonymous.f444dbf9-a014-40d8-9fff-5eaa3fa91108] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2024-12-09 14:14:40.846  WARN 57930 --- [container-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-test-prod-group-3, groupId=test-prod-group] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2024-12-09 14:14:40.846  WARN 57930 --- [ad | producer-2] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-2] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

2024-12-09 14:16:23.261  WARN 57930 --- [container-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-test-prod-group-4, groupId=test-prod-group] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2024-12-09 14:16:29.077  WARN 57930 --- [container-0-C-1] org.apache.kafka.clients.NetworkClient   : [Consumer clientId=consumer-anonymous.f444dbf9-a014-40d8-9fff-5eaa3fa91108-2, groupId=anonymous.f444dbf9-a014-40d8-9fff-5eaa3fa91108] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2024-12-09 14:16:29.621  WARN 57930 --- [ad | producer-2] org.apache.kafka.clients.NetworkClient   : [Producer clientId=producer-2] Connection to node 0 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
2024-12-09 14:16:41.383  INFO 57930 --- [ad | producer-2] o.c.scs.kafka.prod.ErrorChannelHandler   : errorChannel: {}

org.springframework.integration.kafka.support.KafkaSendFailureException: nested exception is org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 81 record(s) for foo-0:120001 ms has passed since batch creation
	at org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler$1.onFailure(KafkaProducerMessageHandler.java:614) ~[spring-integration-kafka-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at org.springframework.util.concurrent.ListenableFutureCallbackRegistry.notifyFailure(ListenableFutureCallbackRegistry.java:86) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
	at org.springframework.util.concurrent.ListenableFutureCallbackRegistry.failure(ListenableFutureCallbackRegistry.java:158) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
	at org.springframework.util.concurrent.ListenableFutureTask.done(ListenableFutureTask.java:100) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
	at org.springframework.util.concurrent.SettableListenableFuture$SettableTask.done(SettableListenableFuture.java:175) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
	at java.base/java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:381) ~[na:na]
	at java.base/java.util.concurrent.FutureTask.setException(FutureTask.java:250) ~[na:na]
	at org.springframework.util.concurrent.SettableListenableFuture$SettableTask.setExceptionResult(SettableListenableFuture.java:163) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
	at org.springframework.util.concurrent.SettableListenableFuture.setException(SettableListenableFuture.java:70) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
	at org.springframework.kafka.core.KafkaTemplate.lambda$buildCallback$4(KafkaTemplate.java:602) ~[spring-kafka-2.5.14.RELEASE.jar:2.5.14.RELEASE]
	at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer$1.onCompletion(DefaultKafkaProducerFactory.java:871) ~[spring-kafka-2.5.14.RELEASE.jar:2.5.14.RELEASE]
	at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1356) ~[kafka-clients-2.5.1.jar:na]
	at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:231) ~[kafka-clients-2.5.1.jar:na]
	at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:197) ~[kafka-clients-2.5.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:676) ~[kafka-clients-2.5.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:380) ~[kafka-clients-2.5.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:323) ~[kafka-clients-2.5.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239) ~[kafka-clients-2.5.1.jar:na]
	at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
Caused by: org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 81 record(s) for foo-0:120001 ms has passed since batch creation
	... 10 common frames omitted
Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 81 record(s) for foo-0:120001 ms has passed since batch creation

2024-12-09 14:16:41.384  INFO 57930 --- [ad | producer-2] o.c.scs.kafka.prod.ErrorChannelHandler   : errorChannel value: {"foo":"bar","bar":2024}

2024-12-09 14:16:41.387 ERROR 57930 --- [ad | producer-2] o.s.integration.handler.LoggingHandler   : org.springframework.integration.kafka.support.KafkaSendFailureException: nested exception is org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 81 record(s) for foo-0:120001 ms has passed since batch creation, failedMessage=GenericMessage [payload=byte[24], headers={contentType=application/json, id=0a266fcd-3e72-f14b-808e-bc34db984ee9, timestamp=1733724881355}] [record=ProducerRecord(topic=foo, partition=null, headers=RecordHeaders(headers = [RecordHeader(key = contentType, value = [34, 97, 112, 112, 108, 105, 99, 97, 116, 105, 111, 110, 47, 106, 115, 111, 110, 34]), RecordHeader(key = spring_json_header_types, value = [123, 34, 99, 111, 110, 116, 101, 110, 116, 84, 121, 112, 101, 34, 58, 34, 106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 83, 116, 114, 105, 110, 103, 34, 125])], isReadOnly = true), key=null, value=[B@48a75811, timestamp=null)]
	at org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler$1.onFailure(KafkaProducerMessageHandler.java:614)
	at org.springframework.util.concurrent.ListenableFutureCallbackRegistry.notifyFailure(ListenableFutureCallbackRegistry.java:86)
	at org.springframework.util.concurrent.ListenableFutureCallbackRegistry.failure(ListenableFutureCallbackRegistry.java:158)
	at org.springframework.util.concurrent.ListenableFutureTask.done(ListenableFutureTask.java:100)
	at org.springframework.util.concurrent.SettableListenableFuture$SettableTask.done(SettableListenableFuture.java:175)
	at java.base/java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:381)
	at java.base/java.util.concurrent.FutureTask.setException(FutureTask.java:250)
	at org.springframework.util.concurrent.SettableListenableFuture$SettableTask.setExceptionResult(SettableListenableFuture.java:163)
	at org.springframework.util.concurrent.SettableListenableFuture.setException(SettableListenableFuture.java:70)
	at org.springframework.kafka.core.KafkaTemplate.lambda$buildCallback$4(KafkaTemplate.java:602)
	at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer$1.onCompletion(DefaultKafkaProducerFactory.java:871)
	at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1356)
	at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:231)
	at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:197)
	at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:676)
	at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:380)
	at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:323)
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 81 record(s) for foo-0:120001 ms has passed since batch creation
	... 10 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 81 record(s) for foo-0:120001 ms has passed since batch creation

2024-12-09 14:16:41.389 ERROR 57930 --- [ad | producer-2] o.s.k.support.LoggingProducerListener    : Exception thrown when sending a message with key='null' and payload='{123, 34, 102, 111, 111, 34, 58, 34, 98, 97, 114, 34, 44, 34, 98, 97, 114, 34, 58, 50, 48, 50, 52, 1...' to topic foo:

org.apache.kafka.common.errors.TimeoutException: Expiring 81 record(s) for foo-0:120001 ms has passed since batch creation

2024-12-09 14:16:41.390  INFO 57930 --- [ad | producer-2] o.c.scs.kafka.prod.ErrorChannelHandler   : errorChannel: {}

org.springframework.integration.kafka.support.KafkaSendFailureException: nested exception is org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 81 record(s) for foo-0:120001 ms has passed since batch creation
	at org.springframework.integration.kafka.outbound.KafkaProducerMessageHandler$1.onFailure(KafkaProducerMessageHandler.java:614) ~[spring-integration-kafka-3.3.1.RELEASE.jar:3.3.1.RELEASE]
	at org.springframework.util.concurrent.ListenableFutureCallbackRegistry.notifyFailure(ListenableFutureCallbackRegistry.java:86) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
	at org.springframework.util.concurrent.ListenableFutureCallbackRegistry.failure(ListenableFutureCallbackRegistry.java:158) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
	at org.springframework.util.concurrent.ListenableFutureTask.done(ListenableFutureTask.java:100) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
	at org.springframework.util.concurrent.SettableListenableFuture$SettableTask.done(SettableListenableFuture.java:175) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
	at java.base/java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:381) ~[na:na]
	at java.base/java.util.concurrent.FutureTask.setException(FutureTask.java:250) ~[na:na]
	at org.springframework.util.concurrent.SettableListenableFuture$SettableTask.setExceptionResult(SettableListenableFuture.java:163) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
	at org.springframework.util.concurrent.SettableListenableFuture.setException(SettableListenableFuture.java:70) ~[spring-core-5.2.15.RELEASE.jar:5.2.15.RELEASE]
	at org.springframework.kafka.core.KafkaTemplate.lambda$buildCallback$4(KafkaTemplate.java:602) ~[spring-kafka-2.5.14.RELEASE.jar:2.5.14.RELEASE]
	at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer$1.onCompletion(DefaultKafkaProducerFactory.java:871) ~[spring-kafka-2.5.14.RELEASE.jar:2.5.14.RELEASE]
	at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1356) ~[kafka-clients-2.5.1.jar:na]
	at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:231) ~[kafka-clients-2.5.1.jar:na]
	at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:197) ~[kafka-clients-2.5.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:676) ~[kafka-clients-2.5.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:380) ~[kafka-clients-2.5.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:323) ~[kafka-clients-2.5.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:239) ~[kafka-clients-2.5.1.jar:na]
	at java.base/java.lang.Thread.run(Thread.java:833) ~[na:na]
Caused by: org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Expiring 81 record(s) for foo-0:120001 ms has passed since batch creation
	... 10 common frames omitted
Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 81 record(s) for foo-0:120001 ms has passed since batch creation
```

Consumer: Batch Consumption and Manual ACK

MyProdSink defines the input binding; the value of INPUT must match the binding name configured under spring.cloud.stream.bindings in application.yaml.

```java
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.SubscribableChannel;

public interface MyProdSink {

    String INPUT = "my-prod-input";

    @Input(INPUT)
    SubscribableChannel input();

}
```

Example consumer code:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Component;

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

@Component
@EnableBinding(MyProdSink.class)
public class MyProdConsumer {

    private static final Logger LOGGER = LoggerFactory.getLogger(MyProdConsumer.class);

    private final AtomicInteger counter = new AtomicInteger(0);

    @StreamListener(MyProdSink.INPUT)
    public void consume(
            @Payload List<Object> payloads,
            @Header(KafkaHeaders.RECEIVED_TOPIC) List<String> topics,
            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitionIds,
            @Header(KafkaHeaders.GROUP_ID) String groupId,
            @Header(KafkaHeaders.ACKNOWLEDGMENT) Acknowledgment acknowledgment
    ) {
        // For a given message, the payloads, topics and partitionIds lists share the same index
        for (int i = 0; i < payloads.size(); i++) {
            byte[] bytes = (byte[]) payloads.get(i);
            LOGGER.info("payload:{} from topic:{}, partitionId:{}, groupId:{}", new String(bytes), topics.get(i), partitionIds.get(i), groupId);
        }

        // manual ack
        acknowledgment.acknowledge();
        LOGGER.info("consumer message total:{}", counter.addAndGet(payloads.size()));

    }

}
```

  1. Consumer: how do we ensure messages are not lost?

First, disable Spring Cloud Stream's automatic offset commit by setting auto-commit-offset to false. Then, whenever processing does not meet business expectations, do not commit the consumer offset, i.e. do not call acknowledgment.acknowledge(). Instead of withholding the ack, you can also write the failed message to a retry topic and consume that topic when appropriate; which of the two approaches to use depends on your business scenario (see the sketch after this list).

  2. Consumer: how do we consume multiple topics?

  • To consume multiple topics with a regular expression, first set Spring Cloud Stream's destination-is-pattern to true, then configure the destination, for example test-prod.*-topic.
  • To combine a regex match with additional literal topics, set destination-is-pattern to true and list multiple destinations separated by commas, for example test-prod.*-topic,foo,bar,hello.

  3. Consumer: how do we consume in batches?

Set Spring Cloud Stream's batch-mode to true, and have the consumer receive multiple messages as a collection via @Payload List<Object> payloads.
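
A minimal sketch of the two ack strategies from point 1 above, assuming processing throws on failure; the retry topic name and the process method are hypothetical placeholders:

```java
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.binding.BinderAwareChannelResolver;
import org.springframework.integration.support.MessageBuilder;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class AckStrategySketch {

    // Hypothetical retry topic; create and consume it according to your business needs.
    private static final String RETRY_TOPIC = "test-prod-retry-topic";

    @Autowired
    private BinderAwareChannelResolver channelResolver;

    public void handleBatch(List<Object> payloads, Acknowledgment acknowledgment) {
        try {
            process(payloads); // hypothetical business logic; throws on failure
            acknowledgment.acknowledge(); // commit the offset only after success
        } catch (Exception e) {
            // Option A: return here WITHOUT acknowledging. The offset stays
            // uncommitted, so the batch is read again after a restart or rebalance.
            // return;

            // Option B: park the failed payloads on a retry topic, then ack the
            // original batch so the main topic is not blocked.
            for (Object payload : payloads) {
                channelResolver.resolveDestination(RETRY_TOPIC)
                        .send(MessageBuilder.withPayload(payload).build());
            }
            acknowledgment.acknowledge();
        }
    }

    private void process(List<Object> payloads) {
        // business processing goes here
    }
}
```

The retry-topic variant keeps the main topic flowing at the cost of an extra topic to manage; withholding the ack is simpler but re-processes the whole batch after a restart or rebalance.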

