Kafka-ConsumerRecord

ConsumerRecord is the core data structure a Kafka consumer receives when reading messages from a topic: every fetched message is wrapped in a ConsumerRecord object. It carries both the message's metadata (origin, position) and its actual payload, making it the basic unit a consumer processes. The key points are as follows:

ConsumerRecord Structure and Fields

Each ConsumerRecord carries the following key information (a runnable sketch after this list shows how to read each field):

topic: the name of the Kafka topic the message belongs to (String)

partition: the number of the partition the message resides in (int)

offset: the message's unique position within its partition (long), used to track consumption progress

key: the message key (generic type K), commonly used for partition routing or as a business identifier (e.g. an order ID)

value: the message value (generic type V), i.e. the actual payload (e.g. a JSON string or binary data)

timestamp: the message timestamp (long); depending on timestampType, this is either when the producer created the message (CreateTime) or when the broker appended it to the partition log (LogAppendTime)

headers: the message headers (a collection of key-value pairs), used to carry business metadata such as a message type or version number

checksum: a checksum (Long, being phased out) formerly used to verify message integrity
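
To make these fields concrete, here is a minimal sketch of a plain KafkaConsumer poll loop that reads every one of them. The broker address localhost:9092, the group id field-dump, and the topic name orders are illustrative assumptions, not details from this article:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.header.Header;

public class FieldDumpConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("group.id", "field-dump");               // assumed group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));         // assumed topic name
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                // Metadata: where the message came from and where it sits
                System.out.printf("%s-%d@%d (%s=%d)%n",
                        record.topic(), record.partition(), record.offset(),
                        record.timestampType(), record.timestamp());
                // Payload: key and value as produced by the configured deserializers
                System.out.printf("key=%s value=%s%n", record.key(), record.value());
                // Headers: iterable key/value pairs of business metadata
                for (Header header : record.headers()) {
                    System.out.printf("header %s (%d bytes)%n", header.key(),
                            header.value() == null ? 0 : header.value().length);
                }
            }
        }
    }
}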

Usage Example in Code

In Java or Spring Kafka, a ConsumerRecord is typically received and processed through a consumer listener (e.g. @KafkaListener):

@KafkaListener(topics = "orders")
public void handleOrder(ConsumerRecord<String, Order> record) {
    String key = record.key();        // message key (e.g. an order ID)
    Order value = record.value();     // message value (e.g. an Order object)
    String topic = record.topic();    // topic name
    long offset = record.offset();    // message offset
    // business logic...
}
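
The listener receives the whole ConsumerRecord rather than just the deserialized payload, which matters when business logic needs metadata such as the offset or headers. Since headers do not appear in the example above, here is a small helper sketch for reading one; the class name HeaderUtil and the header name trace-id are hypothetical:

import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

public class HeaderUtil {
    // Returns the named header's value as a UTF-8 string, or null if absent.
    // lastHeader(...) returns the most recently added occurrence of the key.
    public static String headerAsString(ConsumerRecord<?, ?> record, String name) {
        Header header = record.headers().lastHeader(name);
        return header == null ? null : new String(header.value(), StandardCharsets.UTF_8);
    }
}

Inside the listener this would be used as, for example, String traceId = HeaderUtil.headerAsString(record, "trace-id");.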

Source Code

//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by FernFlower decompiler)
//

package org.apache.kafka.clients.consumer;

import java.util.Optional;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.apache.kafka.common.record.DefaultRecord;
import org.apache.kafka.common.record.TimestampType;

public class ConsumerRecord<K, V> {
    public static final long NO_TIMESTAMP = -1L;
    public static final int NULL_SIZE = -1;
    public static final int NULL_CHECKSUM = -1;
    private final String topic;
    private final int partition;
    private final long offset;
    private final long timestamp;
    private final TimestampType timestampType;
    private final int serializedKeySize;
    private final int serializedValueSize;
    private final Headers headers;
    private final K key;
    private final V value;
    private final Optional<Integer> leaderEpoch;
    private volatile Long checksum;

    public ConsumerRecord(String topic, int partition, long offset, K key, V value) {
        this(topic, partition, offset, -1L, TimestampType.NO_TIMESTAMP_TYPE, -1L, -1, -1, key, value);
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, long checksum, int serializedKeySize, int serializedValueSize, K key, V value) {
        this(topic, partition, offset, timestamp, timestampType, checksum, serializedKeySize, serializedValueSize, key, value, new RecordHeaders());
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, Long checksum, int serializedKeySize, int serializedValueSize, K key, V value, Headers headers) {
        this(topic, partition, offset, timestamp, timestampType, checksum, serializedKeySize, serializedValueSize, key, value, headers, Optional.empty());
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, Long checksum, int serializedKeySize, int serializedValueSize, K key, V value, Headers headers, Optional<Integer> leaderEpoch) {
        if (topic == null) {
            throw new IllegalArgumentException("Topic cannot be null");
        } else if (headers == null) {
            throw new IllegalArgumentException("Headers cannot be null");
        } else {
            this.topic = topic;
            this.partition = partition;
            this.offset = offset;
            this.timestamp = timestamp;
            this.timestampType = timestampType;
            this.checksum = checksum;
            this.serializedKeySize = serializedKeySize;
            this.serializedValueSize = serializedValueSize;
            this.key = key;
            this.value = value;
            this.headers = headers;
            this.leaderEpoch = leaderEpoch;
        }
    }

    public String topic() {
        return this.topic;
    }

    public int partition() {
        return this.partition;
    }

    public Headers headers() {
        return this.headers;
    }

    public K key() {
        return this.key;
    }

    public V value() {
        return this.value;
    }

    public long offset() {
        return this.offset;
    }

    public long timestamp() {
        return this.timestamp;
    }

    public TimestampType timestampType() {
        return this.timestampType;
    }

    /** @deprecated */
    @Deprecated
    public long checksum() {
        if (this.checksum == null) {
            this.checksum = DefaultRecord.computePartialChecksum(this.timestamp, this.serializedKeySize, this.serializedValueSize);
        }

        return this.checksum;
    }

    public int serializedKeySize() {
        return this.serializedKeySize;
    }

    public int serializedValueSize() {
        return this.serializedValueSize;
    }

    public Optional<Integer> leaderEpoch() {
        return this.leaderEpoch;
    }

    public String toString() {
        return "ConsumerRecord(topic = " + this.topic + ", partition = " + this.partition + ", leaderEpoch = " + this.leaderEpoch.orElse((Object)null) + ", offset = " + this.offset + ", " + this.timestampType + " = " + this.timestamp + ", serialized key size = " + this.serializedKeySize + ", serialized value size = " + this.serializedValueSize + ", headers = " + this.headers + ", key = " + this.key + ", value = " + this.value + ")";
    }
}
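
One practical consequence of the public constructors shown above: you can build ConsumerRecord instances by hand in unit tests, without a broker. Below is a minimal sketch using the five-argument constructor from the source; the topic name, partition, offset, and payload are illustrative values. (Per the constructor chain above, timestamp defaults to NO_TIMESTAMP and timestampType to NO_TIMESTAMP_TYPE.)

import org.apache.kafka.clients.consumer.ConsumerRecord;

public class ConsumerRecordTestSketch {
    public static void main(String[] args) {
        // Build a record directly, as a test fixture would
        ConsumerRecord<String, String> record =
                new ConsumerRecord<>("orders", 0, 42L, "order-1", "{\"amount\": 100}");
        System.out.println(record);  // prints via the toString() shown above
    }
}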