How Flink's Kafka source assigns partitions to subtasks

In the new Flink source API, partition assignment is handled by the KafkaSourceEnumerator. Reading the source code shows that the actual assignment is done by the code below:

private void addPartitionSplitChangeToPendingAssignments(
        Collection<KafkaPartitionSplit> newPartitionSplits) {
    // parallelism configured for the Kafka source
    int numReaders = context.currentParallelism();
    for (KafkaPartitionSplit split : newPartitionSplits) {
        // compute which reader (subtask) owns this partition
        int ownerReader = getSplitOwner(split.getTopicPartition(), numReaders);
        // record the pending subtask-to-splits mapping
        pendingPartitionSplitAssignment
                .computeIfAbsent(ownerReader, r -> new HashSet<>())
                .add(split);
    }
}

static int getSplitOwner(TopicPartition tp, int numReaders) {
    // derive a per-topic start index from the topic name's hash
    int startIndex = ((tp.topic().hashCode() * 31) & 0x7FFFFFFF) % numReaders;
    // shift by the partition id, wrapping around the number of readers
    return (startIndex + tp.partition()) % numReaders;
}

A worked example: take two topics, test_topic_partition_one and test_topic_partition_two, each with 9 partitions, and set the Kafka source parallelism to 5:

KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("localhost:9092")
        .setProperties(properties)
        .setTopics("test_topic_partition_one", "test_topic_partition_two")
        .setGroupId("my-group")
        .setStartingOffsets(OffsetsInitializer.latest())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();           
env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source").setParallelism(5);

Applying the formula startIndex = ((tp.topic().hashCode() * 31) & 0x7FFFFFFF) % numReaders to the first topic, test_topic_partition_one, yields startIndex = 2. Each partition p of that topic therefore goes to subtask (2 + p) % 5; for example, partition 3 lands on subtask (2 + 3) % 5 = 0. The full subtask-to-partition mapping is:

subtask 0 ==> test_topic_partition_one-3  test_topic_partition_one-8

subtask 1 ==> test_topic_partition_one-4

subtask 2 ==> test_topic_partition_one-0  test_topic_partition_one-5

subtask 3 ==> test_topic_partition_one-1  test_topic_partition_one-6 

subtask 4 ==> test_topic_partition_one-2  test_topic_partition_one-7

The same formula gives startIndex = 1 for the second topic, test_topic_partition_two, so partition p goes to subtask (1 + p) % 5:

subtask 0 ==> test_topic_partition_two-4

subtask 1 ==> test_topic_partition_two-0  test_topic_partition_two-5

subtask 2 ==> test_topic_partition_two-1  test_topic_partition_two-6

subtask 3 ==> test_topic_partition_two-2  test_topic_partition_two-7 

subtask 4 ==> test_topic_partition_two-3  test_topic_partition_two-8

Combining the two topics gives the final per-subtask assignment below. Note that subtasks 0 and 1 read three partitions each while subtasks 2, 3, and 4 read four; together with differing traffic across topics, this uneven assignment can cause data skew that limits processing throughput (a possible workaround is sketched at the end of this post).

subtask 0 ==> test_topic_partition_one-3  test_topic_partition_one-8  test_topic_partition_two-4

subtask 1 ==> test_topic_partition_one-4  test_topic_partition_two-0  test_topic_partition_two-5

subtask 2 ==> test_topic_partition_one-0  test_topic_partition_one-5  test_topic_partition_two-1  test_topic_partition_two-6

subtask 3 ==> test_topic_partition_one-1  test_topic_partition_one-6  test_topic_partition_two-2  test_topic_partition_two-7 

subtask 4 ==> test_topic_partition_one-2  test_topic_partition_one-7  test_topic_partition_two-3  test_topic_partition_two-8
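
This table can be reproduced without running a Flink job. The sketch below re-implements the formula standalone (my own adaptation: it takes the topic name and partition id directly instead of a TopicPartition; the topic names, partition count, and parallelism are just the values from this example):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeMap;

    public class SplitOwnerDemo {

        // Same formula as KafkaSourceEnumerator.getSplitOwner, but taking
        // the topic name and partition id directly.
        static int getSplitOwner(String topic, int partition, int numReaders) {
            int startIndex = ((topic.hashCode() * 31) & 0x7FFFFFFF) % numReaders;
            return (startIndex + partition) % numReaders;
        }

        public static void main(String[] args) {
            int numReaders = 5; // source parallelism from the example
            Map<Integer, List<String>> assignment = new TreeMap<>();
            for (String topic :
                    new String[] {"test_topic_partition_one", "test_topic_partition_two"}) {
                for (int p = 0; p < 9; p++) { // each topic has 9 partitions
                    int owner = getSplitOwner(topic, p, numReaders);
                    assignment.computeIfAbsent(owner, r -> new ArrayList<>())
                            .add(topic + "-" + p);
                }
            }
            assignment.forEach((subtask, splits) ->
                    System.out.println("subtask " + subtask + " ==> " + splits));
        }
    }

It should print the same five subtask lines as the table above; String.hashCode is specified by the Java language spec, so the result is stable across JVMs.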

The corresponding enumerator log output confirms this assignment:

2024-08-18 18:39:51 INFO [org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator  Line:393] Discovered new partitions: [test_topic_partition_one-6, test_topic_partition_one-7, test_topic_partition_one-8, test_topic_partition_two-4, test_topic_partition_two-5, test_topic_partition_two-6, test_topic_partition_one-0, test_topic_partition_two-7, test_topic_partition_one-1, test_topic_partition_two-8, test_topic_partition_one-2, test_topic_partition_one-3, test_topic_partition_one-4, test_topic_partition_one-5, test_topic_partition_two-0, test_topic_partition_two-1, test_topic_partition_two-2, test_topic_partition_two-3]

2024-08-18 18:39:51 INFO [org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator  Line:353] Assigning splits to readers {0=[[Partition: test_topic_partition_one-3, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-4, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_one-8, StartingOffset: -1, StoppingOffset: -9223372036854775808]], 1=[[Partition: test_topic_partition_one-4, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-5, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-0, StartingOffset: -1, StoppingOffset: -9223372036854775808]], 2=[[Partition: test_topic_partition_two-6, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_one-0, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-1, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_one-5, StartingOffset: -1, StoppingOffset: -9223372036854775808]], 3=[[Partition: test_topic_partition_one-1, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-7, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-2, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_one-6, StartingOffset: -1, StoppingOffset: -9223372036854775808]], 4=[[Partition: test_topic_partition_two-8, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_one-2, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_two-3, StartingOffset: -1, StoppingOffset: -9223372036854775808], [Partition: test_topic_partition_one-7, StartingOffset: -1, StoppingOffset: -9223372036854775808]]}
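
The owner of each partition is fixed by the formula, so the skew cannot be removed inside the source itself. If the imbalance mainly hurts downstream operators, one common workaround (my suggestion, not part of the code above) is to break the chain after the source with rebalance(), which redistributes records round-robin across downstream subtasks:

    // rebalance() round-robins records to downstream subtasks, evening out their
    // load; it does NOT change which subtask reads which Kafka partition.
    env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source")
            .setParallelism(5)
            .rebalance();

Alternatively, choosing a source parallelism that evenly divides each topic's partition count (here 3 or 9) gives every subtask the same number of partitions per topic, though per-partition traffic differences can still skew load.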