Kafka-ConsumerRecord

ConsumerRecord is the core data structure an Apache Kafka consumer receives when reading messages from a topic: every message is wrapped in a ConsumerRecord object. It carries both the message's metadata (origin, position) and its actual payload, making it the basic unit a consumer works with. The key points are as follows:

Structure and Fields of ConsumerRecord

Every ConsumerRecord carries the following key information (a usage sketch follows the list):

topic: the name of the Kafka topic the message belongs to (String)

partition: the number of the partition the message resides in (int)

offset: the message's unique position within the partition (long), used to track consumption progress

key: the message key (generic type), typically used for partition routing or as a business identifier (e.g., an order ID)

value: the message value (generic type), i.e., the actual payload (a JSON string, binary data, etc.)

timestamp: the message's timestamp (long), the time the message was created or appended to the partition

headers: the message headers (a collection of key-value pairs), used to carry business metadata (e.g., message type or version)

checksum: a checksum (Long, now deprecated), once used to verify message integrity
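A minimal sketch of how each field above maps to an accessor method. The RecordDump class and its dump method are hypothetical names for illustration; it assumes a ConsumerRecord<String, String> already obtained from a consumer:

import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

public class RecordDump {
    // Hypothetical debugging helper: print every field of a record.
    public static void dump(ConsumerRecord<String, String> record) {
        System.out.printf("topic=%s partition=%d offset=%d%n",
                record.topic(), record.partition(), record.offset());
        System.out.printf("timestamp=%d (%s)%n",
                record.timestamp(), record.timestampType());
        System.out.printf("key=%s value=%s%n", record.key(), record.value());
        // headers() returns an Iterable<Header>; header values are raw bytes.
        for (Header header : record.headers()) {
            System.out.printf("header %s=%s%n",
                    header.key(), new String(header.value(), StandardCharsets.UTF_8));
        }
    }
}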

Usage Example in Code

In Java or Spring Kafka, a ConsumerRecord is typically received and processed through a consumer listener (e.g., @KafkaListener):

@KafkaListener(topics = "orders")
public void handleOrder(ConsumerRecord<String, Order> record) {
    String key = record.key();        // message key (e.g., an order ID)
    Order value = record.value();     // message value (e.g., an Order object)
    String topic = record.topic();    // topic name
    long offset = record.offset();    // message offset within its partition
    // business logic...
}
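Outside Spring, the same ConsumerRecord objects come back from a plain KafkaConsumer poll loop. A minimal sketch, assuming a broker at localhost:9092, an existing orders topic, and String-serialized keys and values (all assumptions, not part of the original example):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class OrdersConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "orders-group");            // assumed consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                // poll() returns a batch; each element is one ConsumerRecord
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                            record.partition(), record.offset(), record.key(), record.value());
                }
            }
        }
    }
}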

Source Code

//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by FernFlower decompiler)
//

package org.apache.kafka.clients.consumer;

import java.util.Optional;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.apache.kafka.common.record.DefaultRecord;
import org.apache.kafka.common.record.TimestampType;

public class ConsumerRecord<K, V> {
    public static final long NO_TIMESTAMP = -1L;
    public static final int NULL_SIZE = -1;
    public static final int NULL_CHECKSUM = -1;
    private final String topic;
    private final int partition;
    private final long offset;
    private final long timestamp;
    private final TimestampType timestampType;
    private final int serializedKeySize;
    private final int serializedValueSize;
    private final Headers headers;
    private final K key;
    private final V value;
    private final Optional<Integer> leaderEpoch;
    private volatile Long checksum;

    public ConsumerRecord(String topic, int partition, long offset, K key, V value) {
        this(topic, partition, offset, -1L, TimestampType.NO_TIMESTAMP_TYPE, -1L, -1, -1, key, value);
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, long checksum, int serializedKeySize, int serializedValueSize, K key, V value) {
        this(topic, partition, offset, timestamp, timestampType, checksum, serializedKeySize, serializedValueSize, key, value, new RecordHeaders());
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, Long checksum, int serializedKeySize, int serializedValueSize, K key, V value, Headers headers) {
        this(topic, partition, offset, timestamp, timestampType, checksum, serializedKeySize, serializedValueSize, key, value, headers, Optional.empty());
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, Long checksum, int serializedKeySize, int serializedValueSize, K key, V value, Headers headers, Optional<Integer> leaderEpoch) {
        if (topic == null) {
            throw new IllegalArgumentException("Topic cannot be null");
        } else if (headers == null) {
            throw new IllegalArgumentException("Headers cannot be null");
        } else {
            this.topic = topic;
            this.partition = partition;
            this.offset = offset;
            this.timestamp = timestamp;
            this.timestampType = timestampType;
            this.checksum = checksum;
            this.serializedKeySize = serializedKeySize;
            this.serializedValueSize = serializedValueSize;
            this.key = key;
            this.value = value;
            this.headers = headers;
            this.leaderEpoch = leaderEpoch;
        }
    }

    public String topic() {
        return this.topic;
    }

    public int partition() {
        return this.partition;
    }

    public Headers headers() {
        return this.headers;
    }

    public K key() {
        return this.key;
    }

    public V value() {
        return this.value;
    }

    public long offset() {
        return this.offset;
    }

    public long timestamp() {
        return this.timestamp;
    }

    public TimestampType timestampType() {
        return this.timestampType;
    }

    /** @deprecated */
    @Deprecated
    public long checksum() {
        if (this.checksum == null) {
            this.checksum = DefaultRecord.computePartialChecksum(this.timestamp, this.serializedKeySize, this.serializedValueSize);
        }

        return this.checksum;
    }

    public int serializedKeySize() {
        return this.serializedKeySize;
    }

    public int serializedValueSize() {
        return this.serializedValueSize;
    }

    public Optional<Integer> leaderEpoch() {
        return this.leaderEpoch;
    }

    public String toString() {
        return "ConsumerRecord(topic = " + this.topic + ", partition = " + this.partition + ", leaderEpoch = " + this.leaderEpoch.orElse((Object)null) + ", offset = " + this.offset + ", " + this.timestampType + " = " + this.timestamp + ", serialized key size = " + this.serializedKeySize + ", serialized value size = " + this.serializedValueSize + ", headers = " + this.headers + ", key = " + this.key + ", value = " + this.value + ")";
    }
}
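Since the constructors above are public, a ConsumerRecord can also be instantiated directly, which is handy for unit-testing handler code without a broker. A minimal sketch using the five-argument constructor from the source above (the class name and values are illustrative):

import org.apache.kafka.clients.consumer.ConsumerRecord;

public class HandlerTest {
    public static void main(String[] args) {
        // Build a fake record: topic "orders", partition 0, offset 42.
        // Timestamp, checksum, and serialized sizes default to -1 / NO_TIMESTAMP_TYPE.
        ConsumerRecord<String, String> record =
                new ConsumerRecord<>("orders", 0, 42L, "order-1", "{\"id\":1}");
        System.out.println(record); // formatted by the toString() shown above
    }
}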