Flink SQL: How an Order-Preserving Task Affects Aggregation SQL Results

This article uses an example to show how to design an order-preserving (deduplication) task in Flink SQL for an out-of-order upstream source, so that downstream results remain correct. The example uses a trading-data scenario.

  • The source table schema:

    sql
    create table tbl_order_source(
        order_id            int             comment 'order ID',
        shop_id             int             comment 'shop ID',
        user_id             int             comment 'user ID',
        original_price      double          comment 'original transaction amount',
        create_time         timestamp(3)    comment 'create time: yyyy-MM-dd HH:mm:ss',
        watermark for create_time as create_time - interval '0' second
    )with(
        'connector' = 'kafka',
        'topic' = 'tbl_order_source',
        'properties.bootstrap.servers' = 'localhost:9092',
        'properties.group.id' = 'testGroup',
        'scan.startup.mode' = 'latest-offset',
        'format' = 'json',
        'json.fail-on-missing-field' = 'false',
        'json.ignore-parse-errors' = 'true'
    );
  • The out-of-order source data:

    json
    {"order_id":"1","shop_id":"1","user_id":"1","original_price":"1","create_time":"2024-01-01 20:05:00"}
    {"order_id":"2","shop_id":"1","user_id":"2","original_price":"2","create_time":"2024-01-01 20:04:00"}
    {"order_id":"1","shop_id":"1","user_id":"1","original_price":"3","create_time":"2024-01-01 20:03:00"}
    {"order_id":"3","shop_id":"1","user_id":"3","original_price":"4","create_time":"2024-01-01 20:02:00"}
    {"order_id":"1","shop_id":"1","user_id":"1","original_price":"5","create_time":"2024-01-01 20:04:00"}
  • First, we design an order-preserving task that consumes the out-of-order source:

    sql
    -- intermediate table holding the order-preserved result
    create table ods_order_source(
        order_id            int             comment 'order ID',
        shop_id             int             comment 'shop ID',
        user_id             int             comment 'user ID',
        original_price      double          comment 'order amount',
        create_time         timestamp(3)    comment 'create time: yyyy-MM-dd HH:mm:ss',
        watermark for create_time as create_time - interval '0' second,
        primary key (order_id) not enforced
    )with(
        'connector' = 'upsert-kafka',
        'topic' = 'ods_order_source',
        'properties.bootstrap.servers' = 'localhost:9092',
        'key.format' = 'json',
        'key.json.ignore-parse-errors' = 'true',
        'value.format' = 'json',
        'value.json.fail-on-missing-field' = 'false'
    );
    
    -- ETL from the source into the order-preserved result
    insert into ods_order_source
    select
        tmp.order_id,
        tmp.shop_id,
        tmp.user_id,
        tmp.original_price,
        tmp.create_time
    from (
        select
            t.order_id,
            t.shop_id,
            t.user_id,
            t.original_price,
            t.create_time,
            row_number() over (partition by t.order_id order by t.create_time asc) as rn
        from tbl_order_source t
    ) tmp
    where tmp.rn = 1
    ;
    
    -- query the order-preserved intermediate result
    select * from ods_order_source;

    For the source input above, the order-preserving task outputs:

    json
    +I {"order_id":"1","shop_id":"1","user_id":"1","original_price":"1","create_time":"2024-01-01 20:05:00.000"}
    +I {"order_id":"2","shop_id":"1","user_id":"2","original_price":"2","create_time":"2024-01-01 20:04:00.000"}
    -U {"order_id":"1","shop_id":"1","user_id":"1","original_price":"1","create_time":"2024-01-01 20:05:00.000"}
    +U {"order_id":"1","shop_id":"1","user_id":"1","original_price":"3","create_time":"2024-01-01 20:03:00.000"}
    +I {"order_id":"3","shop_id":"1","user_id":"3","original_price":"4","create_time":"2024-01-01 20:02:00.000"}
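The changelog above can be reproduced with a minimal Python sketch (an illustrative model, not Flink's actual operator code) of the ROW_NUMBER deduplication: keep the earliest create_time per order_id and emit +I/-U/+U rows.

```python
def deduplicate(events):
    """Keep the earliest create_time per order_id, emitting a +I/-U/+U changelog."""
    state = {}      # order_id -> current winning row
    changelog = []
    for row in events:
        key = row["order_id"]
        if key not in state:
            state[key] = row
            changelog.append(("+I", row))
        elif row["create_time"] < state[key]["create_time"]:
            changelog.append(("-U", state[key]))  # retract the old winner
            state[key] = row
            changelog.append(("+U", row))         # emit the new winner
        # rows with a later create_time for a known key are dropped
    return changelog

events = [
    {"order_id": 1, "original_price": 1.0, "create_time": "2024-01-01 20:05:00"},
    {"order_id": 2, "original_price": 2.0, "create_time": "2024-01-01 20:04:00"},
    {"order_id": 1, "original_price": 3.0, "create_time": "2024-01-01 20:03:00"},
    {"order_id": 3, "original_price": 4.0, "create_time": "2024-01-01 20:02:00"},
    {"order_id": 1, "original_price": 5.0, "create_time": "2024-01-01 20:04:00"},
]

for op, row in deduplicate(events):
    print(op, row["order_id"], row["original_price"])
```

Note that the last event (order 1 at 20:04) produces no output at all: 20:04 is later than the stored 20:03, so the dedup state silently drops it.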
  • Next, an aggregation SQL over the order-preserving task's output:

    sql
    select
        t.shop_id                                  as shop_id,
        to_date(cast(t.create_time as string))     as create_date,
        sum(t.original_price)                      as original_amt,
        sum(1)                                     as order_num,
        count(distinct t.order_id)                 as order_cnt
    from ods_order_source t
    group by
        t.shop_id,
        to_date(cast(t.create_time as string))
    ;

    Test output:

    json
    +I {"shop_id":1,"create_date":"2024-01-01","original_amt":1,"order_num":1,"order_cnt":1}
    -U {"shop_id":1,"create_date":"2024-01-01","original_amt":1,"order_num":1,"order_cnt":1}
    +U {"shop_id":1,"create_date":"2024-01-01","original_amt":3,"order_num":2,"order_cnt":2}
    -U {"shop_id":1,"create_date":"2024-01-01","original_amt":3,"order_num":2,"order_cnt":2}
    +U {"shop_id":1,"create_date":"2024-01-01","original_amt":2,"order_num":1,"order_cnt":1}
    -U {"shop_id":1,"create_date":"2024-01-01","original_amt":2,"order_num":1,"order_cnt":1}
    +U {"shop_id":1,"create_date":"2024-01-01","original_amt":5,"order_num":2,"order_cnt":2}
    -U {"shop_id":1,"create_date":"2024-01-01","original_amt":5,"order_num":2,"order_cnt":2}
    +U {"shop_id":1,"create_date":"2024-01-01","original_amt":9,"order_num":3,"order_cnt":3}

    The final row of the aggregation output is {"shop_id":1,"create_date":"2024-01-01","original_amt":9,"order_num":3,"order_cnt":3}: orders 1, 2, and 3 with the earliest price kept per order (3 + 2 + 4 = 9), which matches the deduplicated source input.
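The retraction arithmetic behind this result can be sketched in Python (an illustrative model of Flink's group aggregation over an upsert changelog, not PyFlink API code): -U rows subtract from the running totals, so corrections never double-count.

```python
def aggregate(changelog):
    """Fold a +I/-U/+U changelog into running SUM/COUNT/COUNT DISTINCT state."""
    amt, num, orders = 0.0, 0, {}   # orders: order_id -> reference count
    for op, row in changelog:
        sign = -1 if op == "-U" else 1
        amt += sign * row["original_price"]
        num += sign
        orders[row["order_id"]] = orders.get(row["order_id"], 0) + sign
        if orders[row["order_id"]] == 0:    # fully retracted: drop from distinct set
            del orders[row["order_id"]]
    return {"original_amt": amt, "order_num": num, "order_cnt": len(orders)}

# the changelog emitted by the order-preserving task above
changelog = [
    ("+I", {"order_id": 1, "original_price": 1.0}),
    ("+I", {"order_id": 2, "original_price": 2.0}),
    ("-U", {"order_id": 1, "original_price": 1.0}),
    ("+U", {"order_id": 1, "original_price": 3.0}),
    ("+I", {"order_id": 3, "original_price": 4.0}),
]
print(aggregate(changelog))
```

The final state matches the last +U row of the test output: amount 9, 3 rows, 3 distinct orders.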

  • Now run the same aggregation SQL directly against the non-order-preserved source and compare:

    sql
    select
        t.shop_id                                  as shop_id,
        to_date(cast(t.create_time as string))     as create_date,
        sum(t.original_price)                      as original_amt,
        sum(1)                                     as order_num,
        count(distinct t.order_id)                 as order_cnt
    from tbl_order_source t
    group by
        t.shop_id,
        to_date(cast(t.create_time as string))
    ;

    Output:

    json
    +I {"shop_id":1,"create_date":"2024-01-01","original_amt":1,"order_num":1,"order_cnt":1}
    -U {"shop_id":1,"create_date":"2024-01-01","original_amt":1,"order_num":1,"order_cnt":1}
    +U {"shop_id":1,"create_date":"2024-01-01","original_amt":3,"order_num":2,"order_cnt":2}
    -U {"shop_id":1,"create_date":"2024-01-01","original_amt":3,"order_num":2,"order_cnt":2}
    +U {"shop_id":1,"create_date":"2024-01-01","original_amt":6,"order_num":3,"order_cnt":2}
    -U {"shop_id":1,"create_date":"2024-01-01","original_amt":6,"order_num":3,"order_cnt":2}
    +U {"shop_id":1,"create_date":"2024-01-01","original_amt":10,"order_num":4,"order_cnt":3}
    -U {"shop_id":1,"create_date":"2024-01-01","original_amt":10,"order_num":4,"order_cnt":3}
    +U {"shop_id":1,"create_date":"2024-01-01","original_amt":15,"order_num":5,"order_cnt":3}

    The final result {"shop_id":1,"create_date":"2024-01-01","original_amt":15,"order_num":5,"order_cnt":3} is wrong: all three versions of order 1 are summed (1 + 3 + 5), inflating both original_amt and order_num; only the distinct order count survives.
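The wrong totals are easy to reproduce in Python (a hypothetical sketch of the same aggregate over the raw, non-deduplicated stream): with an append-only input there are no retractions, so every duplicate of order 1 contributes to the sums.

```python
# the five raw events, including three versions of order 1
raw_events = [
    {"order_id": 1, "original_price": 1.0},
    {"order_id": 2, "original_price": 2.0},
    {"order_id": 1, "original_price": 3.0},
    {"order_id": 3, "original_price": 4.0},
    {"order_id": 1, "original_price": 5.0},
]

amt = sum(r["original_price"] for r in raw_events)      # sum(original_price): duplicates included
num = len(raw_events)                                   # sum(1): counts every raw row
cnt = len({r["order_id"] for r in raw_events})          # count(distinct order_id): still correct

print(amt, num, cnt)
```

Only `count(distinct ...)` is naturally idempotent under duplicates; the plain sums need the upstream deduplication to be right.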

Given the test results above: aggregate from the order-preserved intermediate table, and configure the aggregation SQL's sink with shop_id as the key, so that downstream consumers always keep the latest aggregate per shop. This guarantees the correctness of downstream data.
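Why a keyed upsert sink is enough can be sketched as follows (a hypothetical Python model, not connector code): downstream consumers compact the update stream by key, so any intermediate totals are overwritten by the final one. The key shown here combines shop_id with create_date, matching the aggregation's group key.

```python
# successive updates emitted by the aggregation for one (shop_id, create_date) key
updates = [
    ((1, "2024-01-01"), {"original_amt": 1.0, "order_num": 1, "order_cnt": 1}),
    ((1, "2024-01-01"), {"original_amt": 3.0, "order_num": 2, "order_cnt": 2}),
    ((1, "2024-01-01"), {"original_amt": 9.0, "order_num": 3, "order_cnt": 3}),
]

compacted = {}
for key, value in updates:
    compacted[key] = value  # last write per key wins, as in a compacted Kafka topic

print(compacted)
```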
