Kafka-ConsumerRecord

ConsumerRecord is the core data structure an Apache Kafka consumer works with when reading messages from a topic; every message it fetches is wrapped in a ConsumerRecord object. The record carries both the message's metadata (where it came from, where it sits in the log) and its actual content, making it the basic unit of message processing on the consumer side. The key points are as follows:

Structure and Fields of a ConsumerRecord

Each ConsumerRecord contains the following key pieces of information:

topic: the name of the Kafka topic the message belongs to (String)

partition: the number of the partition the message was read from (int)

offset: the message's unique position within its partition (long), used to track consumption progress

key: the message key (generic type), typically used for partition routing or as a business identifier (e.g., an order ID)

value: the message value (generic type), i.e., the actual payload being transported (e.g., a JSON string or binary data)

timestamp: the message timestamp (long), recording when the message was created or appended to the partition

headers: the message headers (a collection of key-value pairs), used to carry business metadata such as a message type or schema version; see the sketch after this list

checksum: the checksum (long, deprecated and being phased out), used to verify message integrity
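To make these fields concrete, here is a minimal sketch that dumps the metadata of a single record, including its headers. The printMetadata helper name and the choice to decode header bytes as UTF-8 are illustrative assumptions, not anything prescribed by Kafka:

import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

// Prints the metadata fields of one polled record (hypothetical helper).
void printMetadata(ConsumerRecord<String, String> record) {
    // Source and position of the message
    System.out.printf("topic=%s partition=%d offset=%d%n",
            record.topic(), record.partition(), record.offset());

    // timestampType() says whether timestamp() is the producer's
    // CREATE_TIME or the broker's LOG_APPEND_TIME
    System.out.printf("%s=%d%n", record.timestampType(), record.timestamp());

    // Headers are key/byte[] pairs; decoding as UTF-8 is an assumption
    for (Header header : record.headers()) {
        System.out.printf("header %s=%s%n",
                header.key(), new String(header.value(), StandardCharsets.UTF_8));
    }
}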

Usage Example in Code

In Java, or with Spring Kafka, a ConsumerRecord is typically received and processed through a consumer listener (e.g., a @KafkaListener method):

@KafkaListener(topics = "orders")
public void handleOrder(ConsumerRecord<String, Order> record) {
    String key = record.key();       // message key (e.g., an order ID)
    Order value = record.value();    // message value (e.g., an Order object)
    String topic = record.topic();   // topic name
    long offset = record.offset();   // message offset
    // business logic goes here...
}
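Outside Spring, the same records come back from KafkaConsumer.poll(). A minimal sketch with the plain Java client follows; the broker address, group id, and topic name are illustrative assumptions:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderPoller {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-consumers");         // assumed group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // assumed topic
            while (true) {
                // Each poll returns a batch; every message in it is one ConsumerRecord
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                            record.partition(), record.offset(), record.key(), record.value());
                }
            }
        }
    }
}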

Source Code

Below is the ConsumerRecord class as recreated from the compiled client by IntelliJ IDEA's decompiler:

//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by FernFlower decompiler)
//

package org.apache.kafka.clients.consumer;

import java.util.Optional;
import org.apache.kafka.common.header.Headers;
import org.apache.kafka.common.header.internals.RecordHeaders;
import org.apache.kafka.common.record.DefaultRecord;
import org.apache.kafka.common.record.TimestampType;

public class ConsumerRecord<K, V> {
    public static final long NO_TIMESTAMP = -1L;
    public static final int NULL_SIZE = -1;
    public static final int NULL_CHECKSUM = -1;
    private final String topic;
    private final int partition;
    private final long offset;
    private final long timestamp;
    private final TimestampType timestampType;
    private final int serializedKeySize;
    private final int serializedValueSize;
    private final Headers headers;
    private final K key;
    private final V value;
    private final Optional<Integer> leaderEpoch;
    private volatile Long checksum;

    public ConsumerRecord(String topic, int partition, long offset, K key, V value) {
        this(topic, partition, offset, -1L, TimestampType.NO_TIMESTAMP_TYPE, -1L, -1, -1, key, value);
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, long checksum, int serializedKeySize, int serializedValueSize, K key, V value) {
        this(topic, partition, offset, timestamp, timestampType, checksum, serializedKeySize, serializedValueSize, key, value, new RecordHeaders());
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, Long checksum, int serializedKeySize, int serializedValueSize, K key, V value, Headers headers) {
        this(topic, partition, offset, timestamp, timestampType, checksum, serializedKeySize, serializedValueSize, key, value, headers, Optional.empty());
    }

    public ConsumerRecord(String topic, int partition, long offset, long timestamp, TimestampType timestampType, Long checksum, int serializedKeySize, int serializedValueSize, K key, V value, Headers headers, Optional<Integer> leaderEpoch) {
        if (topic == null) {
            throw new IllegalArgumentException("Topic cannot be null");
        } else if (headers == null) {
            throw new IllegalArgumentException("Headers cannot be null");
        } else {
            this.topic = topic;
            this.partition = partition;
            this.offset = offset;
            this.timestamp = timestamp;
            this.timestampType = timestampType;
            this.checksum = checksum;
            this.serializedKeySize = serializedKeySize;
            this.serializedValueSize = serializedValueSize;
            this.key = key;
            this.value = value;
            this.headers = headers;
            this.leaderEpoch = leaderEpoch;
        }
    }

    public String topic() {
        return this.topic;
    }

    public int partition() {
        return this.partition;
    }

    public Headers headers() {
        return this.headers;
    }

    public K key() {
        return this.key;
    }

    public V value() {
        return this.value;
    }

    public long offset() {
        return this.offset;
    }

    public long timestamp() {
        return this.timestamp;
    }

    public TimestampType timestampType() {
        return this.timestampType;
    }

    /** @deprecated */
    @Deprecated
    public long checksum() {
        if (this.checksum == null) {
            this.checksum = DefaultRecord.computePartialChecksum(this.timestamp, this.serializedKeySize, this.serializedValueSize);
        }

        return this.checksum;
    }

    public int serializedKeySize() {
        return this.serializedKeySize;
    }

    public int serializedValueSize() {
        return this.serializedValueSize;
    }

    public Optional<Integer> leaderEpoch() {
        return this.leaderEpoch;
    }

    public String toString() {
        return "ConsumerRecord(topic = " + this.topic + ", partition = " + this.partition + ", leaderEpoch = " + this.leaderEpoch.orElse((Object)null) + ", offset = " + this.offset + ", " + this.timestampType + " = " + this.timestamp + ", serialized key size = " + this.serializedKeySize + ", serialized value size = " + this.serializedValueSize + ", headers = " + this.headers + ", key = " + this.key + ", value = " + this.value + ")";
    }
}
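Because these constructors are public, a ConsumerRecord can also be built by hand, which is handy for unit-testing consumer logic without a broker. A minimal sketch follows; the topic name, offset, key, and payload are made-up illustration values:

import org.apache.kafka.clients.consumer.ConsumerRecord;

// Build a record directly, as one might in a consumer unit test.
// Per the constructor chain above, the five-argument form fills in
// NO_TIMESTAMP, NULL_SIZE, empty RecordHeaders, and an empty leaderEpoch.
ConsumerRecord<String, String> record =
        new ConsumerRecord<>("orders", 0, 42L, "order-123", "{\"amount\":10}");

assert record.topic().equals("orders");
assert record.offset() == 42L;
assert record.timestamp() == ConsumerRecord.NO_TIMESTAMP;
assert record.serializedKeySize() == ConsumerRecord.NULL_SIZE;
assert !record.leaderEpoch().isPresent();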