【Flink-Scala】DataStream Programming Model: Handling Late Data

DataStream API Programming Model series:

1. 【Flink-Scala】DataStream Programming Model: Data Sources, Transformations, and Sinks

2. 【Flink-Scala】DataStream Programming Model: Window Assignment, Time Concepts, and Window Programs

3. 【Flink-Scala】DataStream Programming Model: Watermarks



1. Handling Late Data

The previous article covered watermarks. By default, once the watermark passes a window's end time, any element belonging to that window that arrives afterwards is simply dropped.

To avoid discarding such late elements, Flink provides the allowedLateness mechanism.

allowedLateness is defined in terms of event time: after the watermark passes the window's end time, the window waits for an additional period (also measured against the watermark) for earlier elements to arrive, so that they can still be processed.

Without allowedLateness, a window is destroyed as soon as it fires.

With allowedLateness set, the window is destroyed only once the watermark exceeds "window end time + allowedLateness".

When allowedLateness is not specified, it defaults to 0.
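
To make the lifecycle rule concrete, here is a minimal sketch of the check (plain Scala; illustrative only, not Flink internals):

scala
// A window is purged only once the watermark passes windowEnd + allowedLateness;
// with the default allowedLateness of 0 this happens as soon as the window fires.
def windowPurged(watermark: Long, windowEnd: Long, allowedLateness: Long = 0L): Boolean =
  watermark >= windowEnd + allowedLateness

println(windowPurged(watermark = 10500L, windowEnd = 10000L))                          // true: default lateness 0
println(windowPurged(watermark = 10500L, windowEnd = 10000L, allowedLateness = 2000L)) // false: window kept alive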

1.1 Side Output

Often a user does want late data to go through the window computation, but without mixing it into the normal result stream; instead, the late data and its results should be stored separately (for example, in a database) for later analysis. This is what the "side output" (Side Output) mechanism is for.

Call sideOutputLateData(OutputTag) to tag the elements that arrive too late for their window, then call getSideOutput(lateOutputTag) on the windowed result to obtain the elements carrying the lateOutputTag tag as an independent DataStream for further processing.

1.2 Code Example

The full program is as follows:

scala
import java.text.SimpleDateFormat
import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, TimestampAssigner, TimestampAssignerSupplier, Watermark, WatermarkGenerator, WatermarkGeneratorSupplier, WatermarkOutput, WatermarkStrategy}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time

case class StockPrice(stockId: String, timeStamp: Long, price: Double)

object AllowedLatenessTest {
  def main(args: Array[String]): Unit = {

    // Set up the execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Use event time as the time characteristic
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)

    // Set the parallelism to 1
    env.setParallelism(1)

    // Create the data source
    val source = env.socketTextStream("localhost", 9999)

    // Transformation logic: parse each line into a StockPrice
    val stockDataStream = source
      .map(s => s.split(","))
      .map(s => StockPrice(s(0), s(1).toLong, s(2).toDouble))

    // Assign timestamps and watermarks to the stream
    val watermarkDataStream = stockDataStream.assignTimestampsAndWatermarks(new MyWatermarkStrategy)

    // Window computation
    val lateData = new OutputTag[StockPrice]("late")
    val sumStream = watermarkDataStream
      .keyBy("stockId")
      .window(TumblingEventTimeWindows.of(Time.seconds(3)))
      .allowedLateness(Time.seconds(2L)) // note this line
      .sideOutputLateData(lateData)
      .reduce((s1, s2) => StockPrice(s1.stockId, s1.timeStamp, s1.price + s2.price))

    // Print the results
    sumStream.print("window result:")
    val late = sumStream.getSideOutput(lateData)
    late.print("late data:")

    // Name the job and trigger execution
    env.execute("AllowedLatenessTest")
  }

  // Custom watermark generation strategy
  class MyWatermarkStrategy extends WatermarkStrategy[StockPrice] {

    override def createTimestampAssigner(context: TimestampAssignerSupplier.Context): TimestampAssigner[StockPrice] = {
      new SerializableTimestampAssigner[StockPrice] {
        override def extractTimestamp(element: StockPrice, recordTimestamp: Long): Long = {
          element.timeStamp // extract the timestamp from the incoming record
        }
      }
    }

    override def createWatermarkGenerator(context: WatermarkGeneratorSupplier.Context): WatermarkGenerator[StockPrice] = {
      new WatermarkGenerator[StockPrice]() {
        val maxOutOfOrderness = 10000L // maximum out-of-orderness: 10 seconds
        var currentMaxTimestamp: Long = 0L
        var a: Watermark = null
        val format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS")

        override def onEvent(element: StockPrice, eventTimestamp: Long, output: WatermarkOutput): Unit = {
          currentMaxTimestamp = Math.max(eventTimestamp, currentMaxTimestamp)
          a = new Watermark(currentMaxTimestamp - maxOutOfOrderness)
          output.emitWatermark(a)
          println("timestamp:" + element.stockId + "," + element.timeStamp + "|" + format.format(element.timeStamp) + "," + currentMaxTimestamp + "|" + format.format(currentMaxTimestamp) + "," + a.toString)
        }

        override def onPeriodicEmit(output: WatermarkOutput): Unit = {
          // Watermarks are emitted per event in onEvent, so nothing to do here
        }
      }
    }
  }
}

First, note this part:

scala
    val watermarkDataStream = stockDataStream.assignTimestampsAndWatermarks(new MyWatermarkStrategy)

    // Window computation
    val lateData = new OutputTag[StockPrice]("late")
    val sumStream = watermarkDataStream
      .keyBy("stockId")
      .window(TumblingEventTimeWindows.of(Time.seconds(3)))
      .allowedLateness(Time.seconds(2L))   // note this line
      .sideOutputLateData(lateData)
      .reduce((s1, s2) => StockPrice(s1.stockId, s1.timeStamp, s1.price + s2.price))

This allows elements to be up to 2 seconds late: an element whose window has already fired is still included in that window's computation as long as the watermark has not yet advanced 2 seconds past the window's end.

scala
 .allowedLateness(Time.seconds(2L)) // set the allowed lateness to 2 s

The rest is the timestamp and watermark generation covered in the previous article, which needs no further explanation here.

1.2.1 Program Output

1. In the nc terminal, enter:

stock_1,1602031567000,8.14

stock_1,1602031571000,8.23

stock_1,1602031577000,8.24

stock_1,1602031578000,8.87

stock_1,1602031579000,8.55

stock_1,1602031577000,8.24

stock_1,1602031581000,8.43

stock_1,1602031582000,8.78

stock_1,1602031581000,8.76

stock_1,1602031579000,8.55

stock_1,1602031591000,8.13

stock_1,1602031581000,8.34

stock_1,1602031580000,8.45

stock_1,1602031579000,8.33

stock_1,1602031578000,8.56

stock_1,1602031577000,8.32

Then start the program.

Analysis of the watermark generation strategy:

Each time a new event arrives, its timestamp is compared with the largest timestamp recorded so far, and currentMaxTimestamp is updated.

The watermark is set to currentMaxTimestamp - maxOutOfOrderness, i.e. at most 10 seconds of out-of-orderness is tolerated.
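
For instance, a minimal sketch of this bookkeeping (plain Scala, mirroring the fields of MyWatermarkStrategy above; the helper name watermarkFor is mine):

scala
// The watermark always trails the largest timestamp seen so far by 10 s.
val maxOutOfOrderness = 10000L
var currentMaxTimestamp = 0L

def watermarkFor(eventTs: Long): Long = {
  currentMaxTimestamp = math.max(eventTs, currentMaxTimestamp)
  currentMaxTimestamp - maxOutOfOrderness
}

println(watermarkFor(1602031567000L)) // 1602031557000, as in the first log line
println(watermarkFor(1602031591000L)) // 1602031581000
println(watermarkFor(1602031577000L)) // still 1602031581000: smaller timestamps never lower it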

Now let's look at the late data.

The input data again:
stock_1,1602031567000,8.14
stock_1,1602031571000,8.23
stock_1,1602031577000,8.24
stock_1,1602031578000,8.87
stock_1,1602031579000,8.55
stock_1,1602031577000,8.24
stock_1,1602031581000,8.43
stock_1,1602031582000,8.78
stock_1,1602031581000,8.76
stock_1,1602031579000,8.55
stock_1,1602031591000,8.13
stock_1,1602031581000,8.34
stock_1,1602031580000,8.45
stock_1,1602031579000,8.33
stock_1,1602031578000,8.56
stock_1,1602031577000,8.32

A record counts as late only when the watermark has passed the end of its window; it is still accepted for as long as the watermark stays within the 2 s of allowedLateness beyond the window end, and only after that is it routed to the side output. The 10 seconds of maxOutOfOrderness does not mark records late by itself; it merely controls how far the watermark trails the largest timestamp seen so far.
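
Both points are easy to get wrong when reading the log, so here is a minimal sketch of the two rules involved (plain Scala; the helper names windowEnd and isLate are mine, not Flink API):

scala
// Tumbling event-time windows are aligned to the epoch, not to the first element:
// a 3 s window containing timestamp ts spans [ts - ts % 3000, ts - ts % 3000 + 3000).
val windowSizeMs      = 3000L
val allowedLatenessMs = 2000L

def windowEnd(ts: Long): Long = ts - (ts % windowSizeMs) + windowSizeMs

// An element goes to the side output only when the watermark has already
// passed its window's end plus the allowed lateness.
def isLate(ts: Long, watermark: Long): Boolean =
  watermark >= windowEnd(ts) + allowedLatenessMs

// The last sample record belongs to window [1602031575000, 1602031578000);
// 1602031578000 + 2000 <= 1602031581000, so it is routed to the side output.
println(isLate(1602031577000L, 1602031581000L)) // true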

The book's account here left me with some doubts, and the online environment that comes with it was broken, so I ran everything locally: I downloaded netcat for Windows, and the run produced the following:

text
timestamp:stock_1,1602031567000|2020-10-07 08:46:07.000,1602031567000|2020-10-07 08:46:07.000,Watermark @ 1602031557000 (2020-10-07 08:45:57.000)
timestamp:stock_1,1602031571000|2020-10-07 08:46:11.000,1602031571000|2020-10-07 08:46:11.000,Watermark @ 1602031561000 (2020-10-07 08:46:01.000)
timestamp:stock_1,1602031577000|2020-10-07 08:46:17.000,1602031577000|2020-10-07 08:46:17.000,Watermark @ 1602031567000 (2020-10-07 08:46:07.000)
timestamp:stock_1,1602031578000|2020-10-07 08:46:18.000,1602031578000|2020-10-07 08:46:18.000,Watermark @ 1602031568000 (2020-10-07 08:46:08.000)
timestamp:stock_1,1602031579000|2020-10-07 08:46:19.000,1602031579000|2020-10-07 08:46:19.000,Watermark @ 1602031569000 (2020-10-07 08:46:09.000)
timestamp:stock_1,1602031577000|2020-10-07 08:46:17.000,1602031579000|2020-10-07 08:46:19.000,Watermark @ 1602031569000 (2020-10-07 08:46:09.000)
timestamp:stock_1,1602031581000|2020-10-07 08:46:21.000,1602031581000|2020-10-07 08:46:21.000,Watermark @ 1602031571000 (2020-10-07 08:46:11.000)
timestamp:stock_1,1602031582000|2020-10-07 08:46:22.000,1602031582000|2020-10-07 08:46:22.000,Watermark @ 1602031572000 (2020-10-07 08:46:12.000)
timestamp:stock_1,1602031581000|2020-10-07 08:46:21.000,1602031582000|2020-10-07 08:46:22.000,Watermark @ 1602031572000 (2020-10-07 08:46:12.000)
timestamp:stock_1,1602031579000|2020-10-07 08:46:19.000,1602031582000|2020-10-07 08:46:22.000,Watermark @ 1602031572000 (2020-10-07 08:46:12.000)
timestamp:stock_1,1602031591000|2020-10-07 08:46:31.000,1602031591000|2020-10-07 08:46:31.000,Watermark @ 1602031581000 (2020-10-07 08:46:21.000)
timestamp:stock_1,1602031581000|2020-10-07 08:46:21.000,1602031591000|2020-10-07 08:46:31.000,Watermark @ 1602031581000 (2020-10-07 08:46:21.000)
timestamp:stock_1,1602031580000|2020-10-07 08:46:20.000,1602031591000|2020-10-07 08:46:31.000,Watermark @ 1602031581000 (2020-10-07 08:46:21.000)
timestamp:stock_1,1602031579000|2020-10-07 08:46:19.000,1602031591000|2020-10-07 08:46:31.000,Watermark @ 1602031581000 (2020-10-07 08:46:21.000)
timestamp:stock_1,1602031578000|2020-10-07 08:46:18.000,1602031591000|2020-10-07 08:46:31.000,Watermark @ 1602031581000 (2020-10-07 08:46:21.000)
window result:> StockPrice(stock_1,1602031567000,8.14)
window result:> StockPrice(stock_1,1602031571000,8.23)
window result:> StockPrice(stock_1,1602031577000,16.48)
window result:> StockPrice(stock_1,1602031578000,25.970000000000002)
window result:> StockPrice(stock_1,1602031578000,34.42)
window result:> StockPrice(stock_1,1602031578000,42.75)
window result:> StockPrice(stock_1,1602031578000,51.31)
timestamp:stock_1,1602031577000|2020-10-07 08:46:17.000,1602031591000|2020-10-07 08:46:31.000,Watermark @ 1602031581000 (2020-10-07 08:46:21.000)
late data:> StockPrice(stock_1,1602031577000,8.32)

This output is from a run on Windows, so it may differ slightly from Ubuntu. Note also that the seven window results appear grouped together rather than strictly interleaved with the timestamp lines: the timestamp println runs in the watermark generator while the results come from the print() sink, and since keyBy breaks the operator chain the two likely run in different tasks, so console ordering between them is not guaranteed.

1.2.2 Analysis of the Results

First, the window boundaries. TumblingEventTimeWindows aligns windows to the epoch, not to the first element that arrives: the 3-second window containing a timestamp ts starts at ts - (ts mod 3000), as in the sketch above. Abbreviating 1602031567000 to 567 and so on, the sample data falls into these windows:

  • 567 → [566, 569)
  • 571 → [569, 572)
  • 577 → [575, 578)
  • 578, 579, 580 → [578, 581)
  • 581, 582 → [581, 584)
  • 591 → [590, 593)

A window fires once the watermark reaches its end time; with allowedLateness(Time.seconds(2)) it is then kept alive and re-fires for each further element that arrives while the watermark is still below "window end + 2 s". Walking through the input one record at a time:

  • stock_1,1602031567000,8.14
    Watermark: 557
    Window: [566, 569)
    Nothing fires yet.

  • stock_1,1602031571000,8.23
    Watermark: 561
    Window: [569, 572)

  • stock_1,1602031577000,8.24
    Watermark: 567
    Window: [575, 578)

  • stock_1,1602031578000,8.87
    Watermark: 568
    Window: [578, 581)

  • stock_1,1602031579000,8.55
    Watermark: 569
    Window: [578, 581)
    The watermark has now reached 569, so window [566, 569) fires:
    window result:> StockPrice(stock_1,1602031567000,8.14)

  • stock_1,1602031577000,8.24
    Watermark: 569 (unchanged)
    Window: [575, 578), which now holds both 577 records. Not late: the watermark is still below the window end 578.

  • stock_1,1602031581000,8.43
    Watermark: 571
    Window: [581, 584)

  • stock_1,1602031582000,8.78
    Watermark: 572, which closes window [569, 572):
    window result:> StockPrice(stock_1,1602031571000,8.23)

  • stock_1,1602031581000,8.76
    Watermark: 572 (unchanged)
    Window: [581, 584)

  • stock_1,1602031579000,8.55
    Watermark: 572 (unchanged)
    Window: [578, 581), which now holds 8.87, 8.55 and 8.55.

  • stock_1,1602031591000,8.13
    Watermark: jumps to 581
    Window: [590, 593)
    The jump fires two windows at once:
    window result:> StockPrice(stock_1,1602031577000,16.48)   (8.24 + 8.24)
    window result:> StockPrice(stock_1,1602031578000,25.970000000000002)   (8.87 + 8.55 + 8.55)

  • stock_1,1602031581000,8.34
    Watermark: 581 (unchanged)
    Window: [581, 584). No output: the watermark never reaches 584.

  • stock_1,1602031580000,8.45
    Watermark: 581
    Window: [578, 581). The watermark has already passed the window end, but 581 < 581 + 2, so allowedLateness keeps the window alive and it re-fires:
    window result:> StockPrice(stock_1,1602031578000,34.42)   (25.97 + 8.45)

  • stock_1,1602031579000,8.33
    Watermark: 581
    Window: [578, 581). Same rule, another re-firing:
    window result:> StockPrice(stock_1,1602031578000,42.75)   (34.42 + 8.33)

  • stock_1,1602031578000,8.56
    Watermark: 581
    Window: [578, 581). I originally asked ChatGPT whether this record counts as late and got a muddled answer; the actual rule is simply that it is still accepted because the watermark (581) is below "window end + allowedLateness" (581 + 2 = 583):
    window result:> StockPrice(stock_1,1602031578000,51.31)   (42.75 + 8.56)

  • stock_1,1602031577000,8.32
    Watermark: 581
    Window: [575, 578). Here the allowance is exhausted: 578 + 2 = 580 ≤ 581, so the record is routed to the side output instead:
    late data:> StockPrice(stock_1,1602031577000,8.32)

Why does nothing else appear after 51.31 apart from the side output? Windows [581, 584) and [590, 593) are still open: no record with a larger timestamp ever arrives, so the watermark stalls at 581 and those windows never fire while the job keeps listening on the socket.

  • Note: I first analyzed the lateness by hand and drew the window boundaries from each window's first record, which is exactly what confused me. Running the code shows the records arrive one at a time, and each accepted late record immediately re-triggers its window's computation, so the printed results are running sums; the sketch below reproduces them.
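
As a sanity check on those running sums, here is a hedged sketch (plain Scala, no Flink) that recomputes the four outputs of window [578, 581) from the window's contents after each (re)firing:

scala
// Prices come from the sample input; each inner list is the content of
// window [578, 581) at one of its firings.
val firings = List(
  List(8.87, 8.55, 8.55),                   // first firing at watermark 581 -> 25.97...
  List(8.87, 8.55, 8.55, 8.45),             // after late element 580        -> 34.42
  List(8.87, 8.55, 8.55, 8.45, 8.33),       // after late element 579        -> 42.75
  List(8.87, 8.55, 8.55, 8.45, 8.33, 8.56)  // after late element 578        -> 51.31
)
firings.foreach(f => println(f.sum))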

1.3 Running the Code Yourself

I originally used the environment provided by the Touge (头歌) platform, but it broke, so I set things up by hand. Fortunately the source is a socket and Windows can listen to itself, so I'll put everything here!

pom.xml

xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>Flink_scala2.13</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
        <flink.version>1.11.0</flink.version>
        <target.java.version>1.8</target.java.version>
        <scala.binary.version>2.12</scala.binary.version>
        <scala.version>2.12.16</scala.version>
    </properties>

    <dependencies>
        <!-- Flink dependencies -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <!--			<scope>provided</scope>-->
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <!--			<scope>provided</scope>-->
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
            <!--			<scope>provided</scope>-->
        </dependency>
    </dependencies>
</project>

test1.scala (the AllowedLatenessTest program):

scala
/*
stock_1,1602031567000,8.14
stock_1,1602031571000,8.23
stock_1,1602031577000,8.24
stock_1,1602031578000,8.87
stock_1,1602031579000,8.55
stock_1,1602031577000,8.24
stock_1,1602031581000,8.43
stock_1,1602031582000,8.78
stock_1,1602031581000,8.76
stock_1,1602031579000,8.55
stock_1,1602031591000,8.13
stock_1,1602031581000,8.34
stock_1,1602031580000,8.45
stock_1,1602031579000,8.33
stock_1,1602031578000,8.56
stock_1,1602031577000,8.32

*/
import java.text.SimpleDateFormat
import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, TimestampAssigner, TimestampAssignerSupplier, Watermark, WatermarkGenerator, WatermarkGeneratorSupplier, WatermarkOutput, WatermarkStrategy}
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time

case class StockPrice(stockId: String, timeStamp: Long, price: Double)

object test1 {
  def main(args: Array[String]): Unit = {
    // *************************** Begin ****************************
    // Set up the execution environment
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    // Use event time as the time characteristic
    env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
    // Set the parallelism to 1
    env.setParallelism(1)
    // Create the data source
    val source = env.socketTextStream("localhost", 9999)
    // Transformation logic: parse each line into a StockPrice
    val stockDataStream = source
      .map(s => s.split(","))
      .map(s => StockPrice(s(0), s(1).toLong, s(2).toDouble))
    // Assign timestamps and watermarks to the stream
    val watermarkDataStream = stockDataStream.assignTimestampsAndWatermarks(new MyWatermarkStrategy)
    // Window computation
    val lateData = new OutputTag[StockPrice]("late")
    val sumStream = watermarkDataStream
      .keyBy("stockId")
      .window(TumblingEventTimeWindows.of(Time.seconds(3)))
      .allowedLateness(Time.seconds(2L))
      .sideOutputLateData(lateData)
      .reduce((s1, s2) => StockPrice(s1.stockId, s1.timeStamp, s1.price + s2.price))
    // **************************** End *****************************
    // Print the results
    sumStream.print("window result:")
    val late = sumStream.getSideOutput(lateData)
    late.print("late data:")
    // Name the job and trigger execution
    env.execute("AllowedLatenessTest")
  }
  // Custom watermark generation strategy
  class MyWatermarkStrategy extends WatermarkStrategy[StockPrice] {
    override def createTimestampAssigner(context: TimestampAssignerSupplier.Context): TimestampAssigner[StockPrice] = {
      new SerializableTimestampAssigner[StockPrice] {
        override def extractTimestamp(element: StockPrice, recordTimestamp: Long): Long = {
          element.timeStamp // extract the timestamp from the incoming record
        }
      }
    }
    override def createWatermarkGenerator(context: WatermarkGeneratorSupplier.Context): WatermarkGenerator[StockPrice] = {
      new WatermarkGenerator[StockPrice]() {
        val maxOutOfOrderness = 10000L // maximum out-of-orderness: 10 seconds
        var currentMaxTimestamp: Long = 0L
        var a: Watermark = null
        val format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS")
        override def onEvent(element: StockPrice, eventTimestamp: Long, output: WatermarkOutput): Unit = {
          currentMaxTimestamp = Math.max(eventTimestamp, currentMaxTimestamp)
          a = new Watermark(currentMaxTimestamp - maxOutOfOrderness)
          output.emitWatermark(a)
          println("timestamp:" + element.stockId + "," + element.timeStamp + "|" + format.format(element.timeStamp) + "," + currentMaxTimestamp + "|" + format.format(currentMaxTimestamp) + "," + a.toString)
        }
        override def onPeriodicEmit(output: WatermarkOutput): Unit = {
          // Watermarks are emitted per event in onEvent, so nothing to do here
        }
      }
    }
  }
}

This second pom.xml is the one provided by the Touge platform. Either should work, but you need Flink and Scala downloaded, and the versions must match.

xml
<project>
    <groupId>cn.edu.xmu.dblab</groupId>
    <artifactId>wordcount_myID</artifactId>
    <modelVersion>4.0.0</modelVersion>
    <name>WordCount</name>
    <packaging>jar</packaging>
    <version>1.0</version>
    <repositories>
        <repository>
            <id>alimaven</id>
            <name>aliyun maven</name>
            <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
        </repository>
    </repositories>
    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <encoding>UTF-8</encoding>
        <scala.version>2.12.2</scala.version>
        <scala.binary.version>2.12</scala.binary.version>
        <hadoop.version>2.7.7</hadoop.version>
        <flink.version>1.11.2</flink.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka-0.10_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.38</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.22</version>
        </dependency>

        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-redis_2.11</artifactId>
            <version>1.1.5</version>

        </dependency>

        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>2.8.0</version>
            <scope>compile</scope>
        </dependency>

    </dependencies>
<build>
        <plugins>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.0</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.0.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

1.4 Aside

The textbook I am using is《Flink编程基础》(Fundamentals of Flink Programming) by Lin Ziyu of Xiamen University, first edition, September 2021. In Section 5.6 on late data handling (page 201), the final late record is misprinted as 8.43; it should be 8.32. It was precisely this misprint that made me question the result and discover the issues discussed here.
