# [Spring Series] Accessing Apache Kafka with Spring (Part 21) -- Tips, Tricks and Examples
- [1. Manually Assigning All Partitions](#1-manually-assigning-all-partitions)
- [2. Examples of Kafka Transactions with Other Transaction Managers](#2-examples-of-kafka-transactions-with-other-transaction-managers)
- [3. Customizing the JsonSerializer and JsonDeserializer](#3-customizing-the-jsonserializer-and-jsondeserializer)
## 1. Manually Assigning All Partitions
Suppose you always want to read all records from all partitions (for example, when using a compacted topic to load a distributed cache). In that case, manually assigning the partitions instead of using Kafka's group management can be useful. Doing so can be unwieldy when there are many partitions, because every partition has to be listed. It is also a problem if the number of partitions changes over time, because the application would have to be recompiled each time the partition count changes.
The following example shows how to use the power of a SpEL expression to create the partition list dynamically when the application starts:
```java
@KafkaListener(topicPartitions = @TopicPartition(topic = "compacted",
            partitions = "#{@finder.partitions('compacted')}",
            partitionOffsets = @PartitionOffset(partition = "*", initialOffset = "0")))
public void listen(@Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key, String payload) {
    ...
}

@Bean
public PartitionFinder finder(ConsumerFactory<String, String> consumerFactory) {
    return new PartitionFinder(consumerFactory);
}

public static class PartitionFinder {

    private final ConsumerFactory<String, String> consumerFactory;

    public PartitionFinder(ConsumerFactory<String, String> consumerFactory) {
        this.consumerFactory = consumerFactory;
    }

    public String[] partitions(String topic) {
        // Use a short-lived consumer to discover the topic's current partitions
        try (Consumer<String, String> consumer = consumerFactory.createConsumer()) {
            return consumer.partitionsFor(topic).stream()
                    .map(pi -> "" + pi.partition())
                    .toArray(String[]::new);
        }
    }

}
```
Using this in conjunction with `ConsumerConfig.AUTO_OFFSET_RESET_CONFIG=earliest` will load all records each time the application is started. You should also set the container's `AckMode` to `MANUAL` to prevent the container from committing offsets for a null consumer group. However, starting with version 2.5.5, as shown above, you can apply an initial offset to all partitions; see Explicit Partition Assignment for more information.
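With Spring Boot, the reset policy can be set with `spring.kafka.consumer.auto-offset-reset=earliest`. Below is a minimal sketch of a listener container factory configured with `MANUAL` ack mode; the bean name, generic types, and injected `ConsumerFactory` are assumptions for illustration, not part of the original example:
```java
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // MANUAL ack mode: the container does not commit offsets on behalf of the (null) group
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    return factory;
}
```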
## 2. Examples of Kafka Transactions with Other Transaction Managers
The following Spring Boot application is an example of chaining database and Kafka transactions. The listener container starts the Kafka transaction and the `@Transactional` annotation starts the DB transaction. The DB transaction is committed first; if the Kafka transaction fails to commit, the record will be redelivered, so the DB update should be idempotent.
```java
@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> template.executeInTransaction(t -> t.send("topic1", "test"));
    }

    @Bean
    public DataSourceTransactionManager dstm(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Component
    public static class Listener {

        private final JdbcTemplate jdbcTemplate;

        private final KafkaTemplate<String, String> kafkaTemplate;

        public Listener(JdbcTemplate jdbcTemplate, KafkaTemplate<String, String> kafkaTemplate) {
            this.jdbcTemplate = jdbcTemplate;
            this.kafkaTemplate = kafkaTemplate;
        }

        @KafkaListener(id = "group1", topics = "topic1")
        @Transactional("dstm")
        public void listen1(String in) {
            this.kafkaTemplate.send("topic2", in.toUpperCase());
            this.jdbcTemplate.execute("insert into mytable (data) values ('" + in + "')");
        }

        @KafkaListener(id = "group2", topics = "topic2")
        public void listen2(String in) {
            System.out.println(in);
        }

    }

    @Bean
    public NewTopic topic1() {
        return TopicBuilder.name("topic1").build();
    }

    @Bean
    public NewTopic topic2() {
        return TopicBuilder.name("topic2").build();
    }

}
```
```properties
spring.datasource.url=jdbc:mysql://localhost/integration?serverTimezone=UTC
spring.datasource.username=root
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver

spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.properties.isolation.level=read_committed

spring.kafka.producer.transaction-id-prefix=tx-

#logging.level.org.springframework.transaction=trace
#logging.level.org.springframework.kafka.transaction=debug
#logging.level.org.springframework.jdbc=debug
```
```sql
create table mytable (data varchar(20));
```
For producer-only transactions, transaction synchronization works:
```java
@Transactional("dstm")
public void someMethod(String in) {
    this.kafkaTemplate.send("topic2", in.toUpperCase());
    this.jdbcTemplate.execute("insert into mytable (data) values ('" + in + "')");
}
```
The `KafkaTemplate` will synchronize its transaction with the DB transaction, and the commit/rollback occurs after the database. If you wish to commit the Kafka transaction first, and only commit the DB transaction if the Kafka transaction is successful, use nested `@Transactional` methods:
```java
@Transactional("dstm")
public void someMethod(String in) {
    this.jdbcTemplate.execute("insert into mytable (data) values ('" + in + "')");
    sendToKafka(in);
}

@Transactional("kafkaTransactionManager")
public void sendToKafka(String in) {
    this.kafkaTemplate.send("topic2", in.toUpperCase());
}
```
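The nested method refers to a transaction manager bean named `kafkaTransactionManager`. Spring Boot auto-configures one when `spring.kafka.producer.transaction-id-prefix` is set; if you define it yourself, a sketch might look like the following (the generic types and injected `ProducerFactory` are assumptions):
```java
@Bean
public KafkaTransactionManager<String, String> kafkaTransactionManager(
        ProducerFactory<String, String> producerFactory) {
    // The bean name matches the qualifier used in @Transactional("kafkaTransactionManager")
    return new KafkaTransactionManager<>(producerFactory);
}
```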
## 3. Customizing the JsonSerializer and JsonDeserializer
The serializer and deserializer support a number of customizations using properties; see JSON for more information. The `kafka-clients` code, not Spring, instantiates these objects, unless you inject them into the consumer and producer factories directly. If you wish to configure the (de)serializer using properties, but wish to use, say, a custom `ObjectMapper`, simply create a subclass and pass the custom mapper into the `super` constructor. For example:
```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import org.springframework.kafka.support.JacksonUtils;
import org.springframework.kafka.support.serializer.JsonSerializer;

public class CustomJsonSerializer extends JsonSerializer<Object> {

    public CustomJsonSerializer() {
        super(customizedObjectMapper());
    }

    private static ObjectMapper customizedObjectMapper() {
        // Start from Spring Kafka's enhanced mapper, then disable timestamp-style dates
        ObjectMapper mapper = JacksonUtils.enhancedObjectMapper();
        mapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
        return mapper;
    }

}
```
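Because the kafka-clients code instantiates the serializer by class name, the subclass can then be referenced from configuration like any other serializer; for example, with Spring Boot (the package `com.example` is an assumption for illustration):
```properties
# Hypothetical package; point the producer at the subclass instead of JsonSerializer
spring.kafka.producer.value-serializer=com.example.CustomJsonSerializer
```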