Flink Table API/SQL Multi-Branch Sink

Background

In one scenario, data is read from Kafka, transformed, and then has to be sunk to several outputs at the same time (Kafka, MySQL, Hologres, etc.). Calling execute() twice makes the Alibaba Cloud Flink VVR engine report the following error:

```java
public static void main(String[] args) {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
        StreamStatementSet streamStatementSet = tEnv.createStatementSet();

        String s = LocalDateTimeUtils.getDateTime(System.currentTimeMillis());

        DataStream<String> dataStream = env.fromElements(s, LocalDateTimeUtils.getDateTime(System.currentTimeMillis()));

        tEnv.executeSql(KAFKA_TABLE_SQL);
        tEnv.executeSql(KAFKA_TABLE_SQL_1);


        Table table = tEnv.fromDataStream(dataStream);
        // Each insertInto(...).execute() submits its own job; the second call
        // triggers the "more than one execute()" error shown below
        table.insertInto("kafka_sink").execute();
        table.insertInto("kafka_sink_1").execute();

        streamStatementSet.execute();
    }
```

Running the job produces the following exception:

```
Caused by: org.apache.flink.util.FlinkRuntimeException: Cannot have more than one execute() or executeAsync() call in a single environment.
	at org.apache.flink.client.program.StreamContextEnvironment.validateAllowedExecution(StreamContextEnvironment.java:199) ~[flink-dist-1.15-vvr-6.0.7-1-SNAPSHOT.jar:1.15-vvr-6.0.7-1-SNAPSHOT]
	at org.apache.flink.client.program.StreamContextEnvironment.executeAsync(StreamContextEnvironment.java:187) ~[flink-dist-1.15-vvr-6.0.7-1-SNAPSHOT.jar:1.15-vvr-6.0.7-1-SNAPSHOT]
	at org.apache.flink.table.planner.delegation.DefaultExecutor.executeAsync(DefaultExecutor.java:110) ~[?:?]
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:877) ~[flink-table-api-java-uber-1.15-vvr-6.0.7-1-SNAPSHOT.jar:1.15-vvr-6.0.7-1-SNAPSHOT]
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:756) ~[flink-table-api-java-uber-1.15-vvr-6.0.7-1-SNAPSHOT.jar:1.15-vvr-6.0.7-1-SNAPSHOT]
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeInternal(TableEnvironmentImpl.java:955) ~[flink-table-api-java-uber-1.15-vvr-6.0.7-1-SNAPSHOT.jar:1.15-vvr-6.0.7-1-SNAPSHOT]
	at org.apache.flink.table.api.internal.TablePipelineImpl.execute(TablePipelineImpl.java:57) ~[flink-table-api-java-uber-1.15-vvr-6.0.7-1-SNAPSHOT.jar:1.15-vvr-6.0.7-1-SNAPSHOT]
```

Solution

Use a StreamStatementSet. For details, see the official documentation:

https://nightlies.apache.org/flink/flink-docs-release-1.15/zh/docs/dev/table/data_stream_api/#converting-between-datastream-and-table

The improved code:

```java
public static void main(String[] args) {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);
        StreamStatementSet streamStatementSet = tEnv.createStatementSet();

        String s = LocalDateTimeUtils.getDateTime(System.currentTimeMillis());

        DataStream<String> dataStream = env.fromElements(s, LocalDateTimeUtils.getDateTime(System.currentTimeMillis()));

        tEnv.executeSql(KAFKA_TABLE_SQL);
        tEnv.executeSql(KAFKA_TABLE_SQL_1);


        Table table = tEnv.fromDataStream(dataStream);

        // Register both inserts on the same statement set instead of calling execute() on each
        streamStatementSet.addInsert("kafka_sink", table);
        streamStatementSet.addInsert("kafka_sink_1", table);

        // A single execute() submits one job that fans out to both sinks
        streamStatementSet.execute();
    }
```
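Since the background also calls for MySQL and Hologres outputs, those branches can be registered on the same statement set and still be submitted as one job. The following is a minimal sketch continuing the example above; the DDL constants MYSQL_TABLE_SQL and HOLO_TABLE_SQL and the sink table names mysql_sink / holo_sink are hypothetical placeholders, not part of the original code.

```java
// Hypothetical DDL for the additional sinks, analogous to KAFKA_TABLE_SQL above
tEnv.executeSql(MYSQL_TABLE_SQL);   // e.g. CREATE TABLE mysql_sink (...) WITH ('connector' = 'jdbc', ...)
tEnv.executeSql(HOLO_TABLE_SQL);    // e.g. CREATE TABLE holo_sink (...) WITH ('connector' = 'hologres', ...)

// Expose the Table under a name so it can be referenced from SQL
tEnv.createTemporaryView("source_view", table);

// All sink branches live in one statement set ...
streamStatementSet.addInsert("kafka_sink", table);
streamStatementSet.addInsert("kafka_sink_1", table);
streamStatementSet.addInsertSql("INSERT INTO mysql_sink SELECT * FROM source_view");
streamStatementSet.addInsert("holo_sink", table);

// ... and are submitted together as a single job
streamStatementSet.execute();
```

If the pipeline also contains DataStream sinks, StreamStatementSet.attachAsDataStream() (covered in the documentation linked above) attaches the registered inserts to the StreamExecutionEnvironment, so a single env.execute() covers both APIs.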