Flink exactly-once delivery: Kafka to Kafka and Kafka to Doris

1 Flow diagram

2 Flink source table DDL

```sql
-- Source: city topic
CREATE TABLE NJ_QL_JC_SSJC_SOURCE (
  record STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'QL_JC_SSJC',
  'properties.bootstrap.servers' = '172.*.*.*:9092',
  'properties.group.id' = 'QL_JC_SSJC_NJ_QL_JC_SSJC_SOURCE',
  'scan.startup.mode' = 'group-offsets',
  -- read only committed messages, so records from aborted transactions are never consumed
  'properties.isolation.level' = 'read_committed',
  'properties.auto.offset.reset' = 'earliest',
  'format' = 'raw'
);

-- Source: mid-platform Kafka topic
CREATE TABLE ODS_QL_JC_SSJC_SOURCE (
  sscsdm STRING,
  extract_time TIMESTAMP,
  record STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'ODS_QL_JC_SSJC',
  'properties.bootstrap.servers' = '172.*.*.*:21007,172.*.*.*:21007,172.*.*.*:21007',
  'properties.security.protocol' = 'SASL_PLAINTEXT',
  'properties.sasl.kerberos.service.name' = 'kafka',
  'properties.kerberos.domain.name' = 'hadoop.hadoop.com',
  'properties.group.id' = 'ODS_QL_JC_SSJC_SOURCE_ODS_QL_JC_SSJC_SOURCE',
  'scan.startup.mode' = 'group-offsets',
  'properties.auto.offset.reset' = 'earliest',
  'properties.isolation.level' = 'read_committed',
  -- note: 'sink.semantic' is a sink-side option and has no effect on a source
  -- table, so it is set on the sink DDL below instead
  'format' = 'json'
);
```
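
Exactly-once delivery only works if the job checkpoints in `EXACTLY_ONCE` mode: the transactional Kafka producer and the Doris two-phase commit both commit on checkpoint completion. A minimal SQL-client configuration is sketched below; the 60-second interval is an assumed example value, tune it for your workload.

```sql
-- Enable checkpointing; exactly-once sinks commit only when a checkpoint completes,
-- so end-to-end latency is bounded below by this interval.
SET 'execution.checkpointing.interval' = '60s';
SET 'execution.checkpointing.mode' = 'EXACTLY_ONCE';
-- Keep the last checkpoint around after cancellation so the job can be
-- restarted from it without losing or duplicating data.
SET 'execution.checkpointing.externalized-checkpoint-retention' = 'RETAIN_ON_CANCELLATION';
```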

3 Flink sink table DDL

```sql
-- Sink: mid-platform Kafka topic
CREATE TABLE KAFKA_ODS_QL_JC_SSJC_SINK (
  sscsdm STRING,
  extract_time TIMESTAMP,
  record STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'ODS_QL_JC_SSJC',
  'properties.bootstrap.servers' = '172.*.*.*:21007,172.*.*.*:21007,172.*.*.*:21007',
  'properties.security.protocol' = 'SASL_PLAINTEXT',
  'properties.sasl.kerberos.service.name' = 'kafka',
  'properties.kerberos.domain.name' = 'hadoop.hadoop.com',
  'format' = 'json',
  -- transactional producer: writes are committed on each successful checkpoint
  'sink.semantic' = 'exactly-once',
  -- must not exceed the broker's transaction.max.timeout.ms (15 min by default)
  'properties.transaction.timeout.ms' = '900000'
);

-- Sink: Doris table
CREATE TABLE DORIS_ODS_QL_JC_SSJC_SINK (
  sscsdm STRING,
  extract_time TIMESTAMP,
  record STRING
) WITH (
  'connector' = 'doris',
  'fenodes' = '3.*.*.*:8030,3.*.*.*:8030,3.*.*.*:8030',
  'table.identifier' = 'doris_d.ods_ql_jc_ssjc',
  'username' = 'root',
  'password' = '********',
  -- two-phase-commit Stream Load: data is pre-committed on checkpoint and
  -- committed once the checkpoint completes
  'sink.properties.two_phase_commit' = 'true',
  -- label prefix (example value) keeps Stream Load labels unique across restarts
  'sink.label-prefix' = 'ods_ql_jc_ssjc'
);
```
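
The Doris-side table must exist before the sink can write to it. A plausible sketch of the target table is given below; the key model, column types, bucket count, and replication factor are assumptions to adapt to the real schema.

```sql
-- Assumed Doris table backing doris_d.ods_ql_jc_ssjc (illustrative only).
CREATE TABLE doris_d.ods_ql_jc_ssjc (
  sscsdm       VARCHAR(16) COMMENT 'city code',
  extract_time DATETIME    COMMENT 'extraction timestamp',
  record       STRING      COMMENT 'raw message payload'
)
DUPLICATE KEY (sscsdm)                       -- append-only model: keep every record
DISTRIBUTED BY HASH (sscsdm) BUCKETS 10      -- bucket count is an example value
PROPERTIES (
  'replication_num' = '3'
);
```

A duplicate-key model is assumed here because the payload is an opaque `record` string; if the feed carries a natural primary key, a `UNIQUE KEY` model would deduplicate replayed rows instead.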

4 Flink SQL: city topics to the mid-platform topic

```sql
INSERT INTO KAFKA_ODS_QL_JC_SSJC_SINK
SELECT
  '320100' AS sscsdm,
  CURRENT_TIMESTAMP AS extract_time,
  record
FROM NJ_QL_JC_SSJC_SOURCE
UNION ALL
SELECT
  '320200' AS sscsdm,
  CURRENT_TIMESTAMP AS extract_time,
  record
FROM WX_QL_JC_SSJC_SOURCE
-- ... one UNION ALL branch per remaining city source ...
UNION ALL
SELECT
  '320583' AS sscsdm,
  CURRENT_TIMESTAMP AS extract_time,
  record
FROM KS_QL_JC_SSJC_SOURCE;
```

5 Flink SQL: mid-platform topic to Doris

```sql
INSERT INTO DORIS_ODS_QL_JC_SSJC_SINK
SELECT
  sscsdm,
  CURRENT_TIMESTAMP AS extract_time,
  record
FROM ODS_QL_JC_SSJC_SOURCE;
```
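
The two INSERTs above can run as separate jobs, each with its own checkpoint lifecycle. If they are instead meant to share one job (and therefore commit atomically on the same checkpoint), the SQL client can group them in a statement set; a sketch, using one city branch as a stand-in for the full UNION ALL:

```sql
EXECUTE STATEMENT SET
BEGIN
  -- hop 1: city topic -> mid-platform topic (abridged to one branch)
  INSERT INTO KAFKA_ODS_QL_JC_SSJC_SINK
  SELECT '320100' AS sscsdm, CURRENT_TIMESTAMP AS extract_time, record
  FROM NJ_QL_JC_SSJC_SOURCE;

  -- hop 2: mid-platform topic -> Doris
  INSERT INTO DORIS_ODS_QL_JC_SSJC_SINK
  SELECT sscsdm, CURRENT_TIMESTAMP AS extract_time, record
  FROM ODS_QL_JC_SSJC_SOURCE;
END;
```

Running both hops in one job trades isolation (a failure restarts both pipelines) for a single set of checkpoints to operate.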