How Flink's Kafka source assigns partitions to subtasks

In newer Flink versions, the Kafka source delegates partition assignment to KafkaSourceEnumerator. Reading the source code shows that the actual assignment is implemented by the following two methods:

 private void addPartitionSplitChangeToPendingAssignments(
            Collection<KafkaPartitionSplit> newPartitionSplits) {
        // parallelism configured for the Kafka source
        int numReaders = context.currentParallelism();
        for (KafkaPartitionSplit split : newPartitionSplits) {
            // the actual partition-to-subtask assignment algorithm
            int ownerReader = getSplitOwner(split.getTopicPartition(), numReaders);
            // records which splits are pending for each reader (subtask)
            pendingPartitionSplitAssignment
                    .computeIfAbsent(ownerReader, r -> new HashSet<>())
                    .add(split);
        }
    }

    static int getSplitOwner(TopicPartition tp, int numReaders) {
        // derive a per-topic start index from the topic name's hash
        int startIndex = ((tp.topic().hashCode() * 31) & 0x7FFFFFFF) % numReaders;
        // partitions are then handed out round-robin starting at startIndex
        return (startIndex + tp.partition()) % numReaders;
    }
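The arithmetic above can be exercised in isolation. The sketch below inlines the same computation; note that the real method takes a Kafka TopicPartition, while here plain topic/partition arguments are used so the snippet has no connector dependency. It demonstrates the round-robin property: for a 9-partition topic read by 5 subtasks, four subtasks get 2 partitions and one gets 1, regardless of what the topic name hashes to.

```java
import java.util.Arrays;

public class SplitOwnerSketch {
    // Same arithmetic as KafkaSourceEnumerator.getSplitOwner, but taking the
    // topic name and partition number directly instead of a TopicPartition.
    static int getSplitOwner(String topic, int partition, int numReaders) {
        int startIndex = ((topic.hashCode() * 31) & 0x7FFFFFFF) % numReaders;
        return (startIndex + partition) % numReaders;
    }

    public static void main(String[] args) {
        int numReaders = 5;
        int[] splitsPerReader = new int[numReaders];
        for (int p = 0; p < 9; p++) {
            splitsPerReader[getSplitOwner("test_topic_partition_one", p, numReaders)]++;
        }
        // 9 partitions over 5 readers: four readers own 2 splits, one owns 1.
        System.out.println(Arrays.toString(splitsPerReader));
    }
}
```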

A worked example:

There are two topics, test_topic_partition_one and test_topic_partition_two, each with 9 partitions, and the Kafka source parallelism is set to 5:

KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("localhost:9092")
        .setProperties(properties)
        .setTopics("test_topic_partition_one", "test_topic_partition_two")
        .setGroupId("my-group")
        .setStartingOffsets(OffsetsInitializer.latest())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();           
env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source").setParallelism(5);

Applying the formula int startIndex = ((tp.topic().hashCode() * 31) & 0x7FFFFFFF) % numReaders; to the first topic, test_topic_partition_one, yields startIndex = 2.

The resulting subtask-to-partition mapping is:

subtask 0 ==> test_topic_partition_one-3  test_topic_partition_one-8

subtask 1 ==> test_topic_partition_one-4

subtask 2 ==> test_topic_partition_one-0  test_topic_partition_one-5

subtask 3 ==> test_topic_partition_one-1  test_topic_partition_one-6 

subtask 4 ==> test_topic_partition_one-2  test_topic_partition_one-7

Applying the same formula to the second topic, test_topic_partition_two, yields startIndex = 1.

The resulting subtask-to-partition mapping is:

subtask 0 ==> test_topic_partition_two-4

subtask 1 ==> test_topic_partition_two-0  test_topic_partition_two-5

subtask 2 ==> test_topic_partition_two-1  test_topic_partition_two-6

subtask 3 ==> test_topic_partition_two-2  test_topic_partition_two-7 

subtask 4 ==> test_topic_partition_two-3  test_topic_partition_two-8
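Both startIndex values above can be checked offline, since Java's String.hashCode() is fully specified and deterministic across JVMs. A minimal check, with the formula extracted into a helper that takes the topic name directly:

```java
public class StartIndexCheck {
    // startIndex as computed inside KafkaSourceEnumerator.getSplitOwner
    static int startIndex(String topic, int numReaders) {
        return ((topic.hashCode() * 31) & 0x7FFFFFFF) % numReaders;
    }

    public static void main(String[] args) {
        // the walkthrough above derives 2 and 1 for these two topics
        System.out.println(startIndex("test_topic_partition_one", 5));
        System.out.println(startIndex("test_topic_partition_two", 5));
    }
}
```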

Combining the two topics gives the final per-subtask partition assignment below. Note that because different topics can carry very different traffic, this assignment can produce data skew across subtasks and degrade processing throughput.

subtask 0 ==> test_topic_partition_one-3  test_topic_partition_one-8  test_topic_partition_two-4

subtask 1 ==> test_topic_partition_one-4  test_topic_partition_two-0  test_topic_partition_two-5

subtask 2 ==> test_topic_partition_one-0  test_topic_partition_one-5  test_topic_partition_two-1  test_topic_partition_two-6

subtask 3 ==> test_topic_partition_one-1  test_topic_partition_one-6  test_topic_partition_two-2  test_topic_partition_two-7 

subtask 4 ==> test_topic_partition_one-2  test_topic_partition_one-7  test_topic_partition_two-3  test_topic_partition_two-8
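The combined table can be reproduced by running the owner computation over all 18 partitions. The sketch below again substitutes plain strings for Kafka's TopicPartition and groups partitions by their owning subtask:

```java
import java.util.*;

public class AssignmentSketch {
    // Same arithmetic as KafkaSourceEnumerator.getSplitOwner, with plain
    // strings standing in for Kafka's TopicPartition.
    static int getSplitOwner(String topic, int partition, int numReaders) {
        int startIndex = ((topic.hashCode() * 31) & 0x7FFFFFFF) % numReaders;
        return (startIndex + partition) % numReaders;
    }

    // Group every partition of every topic by its owning subtask.
    static Map<Integer, List<String>> assign(
            String[] topics, int partitionsPerTopic, int numReaders) {
        Map<Integer, List<String>> assignment = new TreeMap<>();
        for (String topic : topics) {
            for (int p = 0; p < partitionsPerTopic; p++) {
                int owner = getSplitOwner(topic, p, numReaders);
                assignment.computeIfAbsent(owner, r -> new ArrayList<>())
                        .add(topic + "-" + p);
            }
        }
        return assignment;
    }

    public static void main(String[] args) {
        String[] topics = {"test_topic_partition_one", "test_topic_partition_two"};
        assign(topics, 9, 5).forEach((subtask, splits) ->
                System.out.println("subtask " + subtask + " ==> " + splits));
    }
}
```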

The corresponding KafkaSourceEnumerator log output confirms this assignment:

2024-08-18 18:39:51 INFO [org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator  Line:393] Discovered new partitions: [test_topic_partition_one-6, test_topic_partition_one-7, test_topic_partition_one-8, test_topic_partition_two-4, test_topic_partition_two-5, test_topic_partition_two-6, test_topic_partition_one-0, test_topic_partition_two-7, test_topic_partition_one-1, test_topic_partition_two-8, test_topic_partition_one-2, test_topic_partition_one-3, test_topic_partition_one-4, test_topic_partition_one-5, test_topic_partition_two-0, test_topic_partition_two-1, test_topic_partition_two-2, test_topic_partition_two-3]

2024-08-18 18:39:51 INFO [org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator  Line:353] Assigning splits to readers {0=[[Partition: test_topic_partition_one-3, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-4, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_one-8, StartingOffset: -1, StoppingOffset: -9223372036854775808]], 1=[[Partition: test_topic_partition_one-4, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-5, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-0, StartingOffset: -1, StoppingOffset: -9223372036854775808]], 2=[[Partition: test_topic_partition_two-6, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_one-0, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-1, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_one-5, StartingOffset: -1, StoppingOffset: -9223372036854775808]], 3=[[Partition: test_topic_partition_one-1, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-7, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-2, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_one-6, StartingOffset: -1, StoppingOffset: -9223372036854775808]], 4=[[Partition: test_topic_partition_two-8, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_one-2, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-3, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_one-7, StartingOffset: -1, StoppingOffset: -9223372036854775808]]}