Kafka-ConsumerRecord

ConsumerRecord is the core data structure an Apache Kafka consumer works with when reading messages from a topic: every fetched message is wrapped in a ConsumerRecord object. It holds the message's metadata (origin and position) together with its actual content, making it the basic unit of message processing on the consumer side. The key points follow.

Structure and Fields of ConsumerRecord

Each ConsumerRecord carries the following key information (a plain-client poll loop after the list shows how each field is read):

topic: the name of the Kafka topic the message belongs to (String)

partition: the number of the partition the message resides in (int)

offset: the message's unique position within its partition (long), used to track consumption progress

key: the message key (generic type), typically used for partition routing or as a business identifier (e.g. an order ID)

value: the message value (generic type), i.e. the actual payload (a JSON string, binary data, etc.)

timestamp: the message timestamp (long), recording when the message was created or appended to the partition

headers: the message headers (a collection of key-value pairs), used to carry business metadata (e.g. message type, version)

checksum: a checksum (long, deprecated) formerly used to verify message integrity
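
Outside of Spring, the same fields can be read from a plain Kafka client poll loop. A minimal sketch, assuming a broker at localhost:9092, a consumer group named demo-group, and String-serialized messages on an "orders" topic; all of these values are illustrative, not taken from the class itself:

import java.nio.charset.StandardCharsets;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.serialization.StringDeserializer;

public class RecordFieldsDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");              // assumed group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Metadata: where the message came from and where it sits in the log
                    System.out.printf("topic=%s partition=%d offset=%d%n",
                            record.topic(), record.partition(), record.offset());
                    // Payload: key and value as produced by the configured deserializers
                    System.out.printf("key=%s value=%s%n", record.key(), record.value());
                    // timestampType() says whether timestamp() is CreateTime or LogAppendTime
                    System.out.printf("%s=%d%n", record.timestampType(), record.timestamp());
                    // Headers iterate as key/byte[] pairs
                    for (Header header : record.headers()) {
                        System.out.printf("header %s=%s%n",
                                header.key(), new String(header.value(), StandardCharsets.UTF_8));
                    }
                }
            }
        }
    }
}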

Usage Example in Code

In Java or Spring Kafka, a ConsumerRecord is typically received and processed through a consumer listener (e.g. @KafkaListener):

@KafkaListener(topics = "orders")
public void handleOrder(ConsumerRecord<String, Order> record) {
    String key = record.key();       // message key (e.g. an order ID)
    Order value = record.value();    // message value (e.g. an order object)
    String topic = record.topic();   // topic name
    long offset = record.offset();   // message offset
    // business logic...
}
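
Headers are accessed through the same record object. A minimal sketch of reading one header inside the listener above; the header name "message-type" is a hypothetical example, not something Kafka defines:

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.header.Header;

// Inside handleOrder: read the last header stored under a given key, if any.
Header typeHeader = record.headers().lastHeader("message-type"); // hypothetical header name
if (typeHeader != null) {
    String messageType = new String(typeHeader.value(), StandardCharsets.UTF_8);
    // dispatch on messageType as needed...
}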

Source Code

//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by FernFlower decompiler)
//

package org.apache.kafka.clients.consumer;

import java.util.Optional;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.apache.kafka.common.record.DefaultRecord;
import org.apache.kafka.common.record.TimestampType;

public class ConsumerRecord<K, V> {
    public static final long NO_TIMESTAMP = -1L;
    public static final int NULL_SIZE = -1;
    public static final int NULL_CHECKSUM = -1;
    private final String topic;
    private final int partition;
    private final long offset;
    private final long timestamp;
    private final TimestampType timestampType;
    private final int serializedKeySize;
    private final int serializedValueSize;
    private final Headers headers;
    private final K key;
    private final V value;
    private final Optional<Integer> leaderEpoch;
    private volatile Long checksum;

    public ConsumerRecord(String topic, int partition, long offset, K key, V value) {
        this(topic, partition, offset, -1L, TimestampType.NO_TIMESTAMP_TYPE, -1L, -1, -1, key, value);
    }
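
    // The remaining constructors progressively add timestamp, checksum, serialized
    // sizes, headers and leader epoch, each delegating down to the full 12-argument form.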

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, long checksum, int serializedKeySize, int serializedValueSize, K key, V value) {
        this(topic, partition, offset, timestamp, timestampType, checksum, serializedKeySize, serializedValueSize, key, value, new RecordHeaders());
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, Long checksum, int serializedKeySize, int serializedValueSize, K key, V value, Headers headers) {
        this(topic, partition, offset, timestamp, timestampType, checksum, serializedKeySize, serializedValueSize, key, value, headers, Optional.empty());
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, Long checksum, int serializedKeySize, int serializedValueSize, K key, V value, Headers headers, Optional<Integer> leaderEpoch) {
        if (topic == null) {
            throw new IllegalArgumentException("Topic cannot be null");
        }
        if (headers == null) {
            throw new IllegalArgumentException("Headers cannot be null");
        }

        this.topic = topic;
        this.partition = partition;
        this.offset = offset;
        this.timestamp = timestamp;
        this.timestampType = timestampType;
        this.checksum = checksum;
        this.serializedKeySize = serializedKeySize;
        this.serializedValueSize = serializedValueSize;
        this.key = key;
        this.value = value;
        this.headers = headers;
        this.leaderEpoch = leaderEpoch;
    }

    public String topic() {
        return this.topic;
    }

    public int partition() {
        return this.partition;
    }

    public Headers headers() {
        return this.headers;
    }

    public K key() {
        return this.key;
    }

    public V value() {
        return this.value;
    }

    public long offset() {
        return this.offset;
    }

    public long timestamp() {
        return this.timestamp;
    }

    public TimestampType timestampType() {
        return this.timestampType;
    }

    /**
     * @deprecated With message format v2 the checksum covers the whole record batch,
     * so only a partial per-record value can be computed here (lazily, from the
     * timestamp and serialized sizes).
     */
    @Deprecated
    public long checksum() {
        if (this.checksum == null) {
            this.checksum = DefaultRecord.computePartialChecksum(this.timestamp, this.serializedKeySize, this.serializedValueSize);
        }

        return this.checksum;
    }

    public int serializedKeySize() {
        return this.serializedKeySize;
    }

    public int serializedValueSize() {
        return this.serializedValueSize;
    }

    public Optional<Integer> leaderEpoch() {
        return this.leaderEpoch;
    }

    public String toString() {
        return "ConsumerRecord(topic = " + this.topic + ", partition = " + this.partition + ", leaderEpoch = " + this.leaderEpoch.orElse((Object)null) + ", offset = " + this.offset + ", " + this.timestampType + " = " + this.timestamp + ", serialized key size = " + this.serializedKeySize + ", serialized value size = " + this.serializedValueSize + ", headers = " + this.headers + ", key = " + this.key + ", value = " + this.value + ")";
    }
}
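
One practical note on the constructors: the simple five-argument form fills in NO_TIMESTAMP, NULL_SIZE and empty headers, which makes it handy for unit-testing consumer logic without a running broker. A minimal sketch; the topic, offset and payload are arbitrary test values:

import org.apache.kafka.clients.consumer.ConsumerRecord;

// Hand-built record for a unit test; topic "orders", partition 0, offset 42L
// and the key/value strings are made-up test data.
ConsumerRecord<String, String> record =
        new ConsumerRecord<>("orders", 0, 42L, "order-1001", "{\"amount\": 10}");

// The five-argument constructor fills in the defaults visible in the source above:
assert record.timestamp() == ConsumerRecord.NO_TIMESTAMP;      // -1L
assert record.serializedKeySize() == ConsumerRecord.NULL_SIZE; // -1
assert record.headers() != null;                               // empty RecordHeaders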