Kafka-ConsumerRecord

ConsumerRecord is the core data structure an Apache Kafka consumer receives when reading messages from a topic: every fetched message is wrapped in a ConsumerRecord object. It carries both the message's metadata (where it came from, where it sits in the log) and its actual payload, making it the basic unit a consumer processes. The key points:

Structure and fields of ConsumerRecord

Each ConsumerRecord carries the following information:

topic: the name of the Kafka topic the message belongs to (String)

partition: the number of the partition the message resides in (int)

offset: the message's unique position within its partition (long), used to track consumption progress

key: the message key (generic), typically used for partition routing or as a business identifier (e.g., an order ID)

value: the message value (generic), i.e., the actual payload (e.g., a JSON string or binary data)

timestamp: the message timestamp (long), the time the message was created or appended to the partition

headers: the message headers (a collection of key-value pairs), used to carry business metadata such as a message type or schema version (see the sketch after this list)

checksum: a checksum (long, deprecated) formerly used to verify message integrity
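
All of these fields are exposed through accessor methods of the same name. Below is a minimal sketch of reading the metadata fields, assuming a ConsumerRecord<String, String> already obtained from a consumer and UTF-8-encoded header values; RecordMetadataPrinter is a hypothetical helper class, not part of Kafka:

import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

public class RecordMetadataPrinter {
    static void printMetadata(ConsumerRecord<String, String> record) {
        // Where the record came from and how its timestamp was assigned
        System.out.printf("%s-%d@%d (%s = %d)%n",
                record.topic(), record.partition(), record.offset(),
                record.timestampType(), record.timestamp());
        // Headers is Iterable<Header>; a Header is a String key plus a byte[] value
        for (Header header : record.headers()) {
            System.out.println(header.key() + " = "
                    + new String(header.value(), StandardCharsets.UTF_8));
        }
    }
}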

Usage example in code

In Java or Spring Kafka, a ConsumerRecord is typically received and processed through a consumer listener such as @KafkaListener:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;

@KafkaListener(topics = "orders")
public void handleOrder(ConsumerRecord<String, Order> record) {
    String key = record.key();      // the message key (e.g., an order ID)
    Order value = record.value();   // the message value (an Order object)
    String topic = record.topic();  // the topic name
    long offset = record.offset();  // the offset within the partition
    // business logic...
}
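
Outside Spring, the same records come straight from the plain consumer API: KafkaConsumer.poll() returns a ConsumerRecords batch that is iterated record by record. A minimal sketch, with the broker address, group id, and topic name as placeholder values:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-consumers");         // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                // poll() returns one batch; each element is a single ConsumerRecord
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
                }
            }
        }
    }
}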

Source code

//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by FernFlower decompiler)
//

package org.apache.kafka.clients.consumer;

import java.util.Optional;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.apache.kafka.common.record.DefaultRecord;
import org.apache.kafka.common.record.TimestampType;

public class ConsumerRecord<K, V> {
    public static final long NO_TIMESTAMP = -1L;
    public static final int NULL_SIZE = -1;
    public static final int NULL_CHECKSUM = -1;
    private final String topic;
    private final int partition;
    private final long offset;
    private final long timestamp;
    private final TimestampType timestampType;
    private final int serializedKeySize;
    private final int serializedValueSize;
    private final Headers headers;
    private final K key;
    private final V value;
    private final Optional<Integer> leaderEpoch;
    private volatile Long checksum;

    public ConsumerRecord(String topic, int partition, long offset, K key, V value) {
        this(topic, partition, offset, -1L, TimestampType.NO_TIMESTAMP_TYPE, -1L, -1, -1, key, value);
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, long checksum, int serializedKeySize, int serializedValueSize, K key, V value) {
        this(topic, partition, offset, timestamp, timestampType, checksum, serializedKeySize, serializedValueSize, key, value, new RecordHeaders());
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, Long checksum, int serializedKeySize, int serializedValueSize, K key, V value, Headers headers) {
        this(topic, partition, offset, timestamp, timestampType, checksum, serializedKeySize, serializedValueSize, key, value, headers, Optional.empty());
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, Long checksum, int serializedKeySize, int serializedValueSize, K key, V value, Headers headers, Optional<Integer> leaderEpoch) {
        if (topic == null) {
            throw new IllegalArgumentException("Topic cannot be null");
        } else if (headers == null) {
            throw new IllegalArgumentException("Headers cannot be null");
        } else {
            this.topic = topic;
            this.partition = partition;
            this.offset = offset;
            this.timestamp = timestamp;
            this.timestampType = timestampType;
            this.checksum = checksum;
            this.serializedKeySize = serializedKeySize;
            this.serializedValueSize = serializedValueSize;
            this.key = key;
            this.value = value;
            this.headers = headers;
            this.leaderEpoch = leaderEpoch;
        }
    }

    public String topic() {
        return this.topic;
    }

    public int partition() {
        return this.partition;
    }

    public Headers headers() {
        return this.headers;
    }

    public K key() {
        return this.key;
    }

    public V value() {
        return this.value;
    }

    public long offset() {
        return this.offset;
    }

    public long timestamp() {
        return this.timestamp;
    }

    public TimestampType timestampType() {
        return this.timestampType;
    }

    /** @deprecated */
    @Deprecated
    public long checksum() {
        if (this.checksum == null) {
            this.checksum = DefaultRecord.computePartialChecksum(this.timestamp, this.serializedKeySize, this.serializedValueSize);
        }

        return this.checksum;
    }

    public int serializedKeySize() {
        return this.serializedKeySize;
    }

    public int serializedValueSize() {
        return this.serializedValueSize;
    }

    public Optional<Integer> leaderEpoch() {
        return this.leaderEpoch;
    }

    public String toString() {
        return "ConsumerRecord(topic = " + this.topic + ", partition = " + this.partition + ", leaderEpoch = " + this.leaderEpoch.orElse((Object)null) + ", offset = " + this.offset + ", " + this.timestampType + " = " + this.timestamp + ", serialized key size = " + this.serializedKeySize + ", serialized value size = " + this.serializedValueSize + ", headers = " + this.headers + ", key = " + this.key + ", value = " + this.value + ")";
    }
}
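
Since every field is injected through a public constructor, a ConsumerRecord can also be constructed directly, which is handy for unit-testing handler code without a broker. A minimal sketch, using arbitrary test values:

import org.apache.kafka.clients.consumer.ConsumerRecord;

public class ConsumerRecordConstructionTest {
    public static void main(String[] args) {
        // The 5-arg constructor fills timestamp, timestampType, checksum and the
        // serialized sizes with their NO_TIMESTAMP / NULL_CHECKSUM / NULL_SIZE sentinels
        ConsumerRecord<String, String> record =
                new ConsumerRecord<>("orders", 0, 42L, "order-1001", "{\"amount\":99}");
        System.out.println(record); // prints via the toString() shown above
    }
}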