Multiple Ways to Start Flink Locally

Application mode: starting on YARN by submitting from code

java
// Create and start the YARN client (standard Hadoop YarnClient bootstrap)
YarnClient yarnClient = YarnClient.createYarnClient();
yarnClient.init(new YarnConfiguration());
yarnClient.start();

Configuration configuration = new Configuration();
// Merge in any caller-provided configuration
if (customConfiguration != null) {
  configuration.addAll(customConfiguration);
}
configuration.set(JobManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("1024m"));
configuration.set(TaskManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("1024m"));
configuration.set(DeploymentOptions.TARGET, YarnDeploymentTarget.APPLICATION.getName());
// Path to flink-dist-*.jar (placeholder value)
String distPath = "/path/to/flink-dist.jar";
configuration.set(YarnConfigOptions.FLINK_DIST_JAR, distPath);
// The user jar to execute (placeholder value)
String examplePath = "/path/to/user-job.jar";
configuration.set(PipelineOptions.JARS, Collections.singletonList(examplePath));
// FileSystem built from the test's Hadoop cluster configuration
FileSystem fileSystem = FileSystem.get(hadoopClusterTest.getConfig());
// The Flink lib directory (placeholder value)
String dirPath = "/path/to/flink/lib";
// Upload the Flink lib jars to HDFS
fileSystem.copyFromLocalFile(new Path(dirPath), new Path(dirPath));
configuration.set(YarnConfigOptions.PROVIDED_LIB_DIRS, Collections.singletonList(dirPath));
// Helper from the original code: only sets the option if it is not already present
setIfAbsent(configuration, PipelineOptions.JARS, new ArrayList<>());
YarnConfiguration yarnConfiguration = new YarnConfiguration();
YarnClientYarnClusterInformationRetriever yarnClientYarnClusterInformationRetriever =
  YarnClientYarnClusterInformationRetriever.create(yarnClient);
YarnClusterDescriptor yarnClusterDescriptor = new YarnClusterDescriptor(
  configuration,
  yarnConfiguration,
  yarnClient,
  yarnClientYarnClusterInformationRetriever,
  true
);
ClusterSpecification clusterSpecification = new ClusterSpecification.ClusterSpecificationBuilder()
  .setSlotsPerTaskManager(1)
  .createClusterSpecification();

ApplicationConfiguration applicationConfiguration = new ApplicationConfiguration(
  new String[]{},
  // Fully-qualified name of the main class to run (placeholder value)
  "com.example.MyFlinkJob"
);
try {
  // Deploy the Application cluster on YARN
  yarnClusterDescriptor.deployApplicationCluster(clusterSpecification, applicationConfiguration);
} catch (ClusterDeploymentException e) {
  e.printStackTrace();
}
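
If you also need a handle on the deployed application afterwards (for example to read its YARN ApplicationId or to shut it down), note that deployApplicationCluster returns a ClusterClientProvider. A small variation of the try block above, using the same variables, would look like this (not in the original post, just a sketch):

java
try {
    // Same deployment call as above, but keep the returned provider
    ClusterClientProvider<ApplicationId> provider =
        yarnClusterDescriptor.deployApplicationCluster(clusterSpecification, applicationConfiguration);
    ClusterClient<ApplicationId> applicationClient = provider.getClusterClient();
    // For an application cluster on YARN, the cluster id is the ApplicationId
    System.out.println("Deployed YARN application: " + applicationClient.getClusterId());
} catch (ClusterDeploymentException e) {
    e.printStackTrace();
}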

Session mode: starting on YARN by submitting from code

java
public class YarnFlinkSessionTest {
    ClusterClient<ApplicationId> clusterClient;

    @Test
    void test() throws ExecutionException, InterruptedException {
        // Create and start the YARN client
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();

        Configuration configuration = new Configuration();
        configuration.set(JobManagerOptions.TOTAL_PROCESS_MEMORY,
            MemorySize.parse("1024m"));
        configuration.set(TaskManagerOptions.TOTAL_PROCESS_MEMORY,
            MemorySize.parse("1024m"));
        configuration.set(YarnConfigOptions.FLINK_DIST_JAR, "${FLINK_HOME}/lib/flink-dist-1.16.2.jar");
        YarnConfiguration yarnConfiguration = new YarnConfiguration();
        YarnClientYarnClusterInformationRetriever yarnClientYarnClusterInformationRetriever =
            YarnClientYarnClusterInformationRetriever.create(yarnClient);
        YarnClusterDescriptor yarnClusterDescriptor = new YarnClusterDescriptor(
            configuration,
            yarnConfiguration,
            yarnClient,
            yarnClientYarnClusterInformationRetriever,
            true
        );
        ClusterSpecification clusterSpecification = new ClusterSpecification.ClusterSpecificationBuilder()
            .setMasterMemoryMB(1024)
            .setTaskManagerMemoryMB(1024)
            .setSlotsPerTaskManager(1)
            .createClusterSpecification();
        try {
            // Deploy the session cluster and obtain a client for it
            ClusterClientProvider<ApplicationId> applicationIdClusterClientProvider =
                yarnClusterDescriptor.deploySessionCluster(clusterSpecification);
            clusterClient = applicationIdClusterClientProvider.getClusterClient();
        } catch (ClusterDeploymentException e) {
            e.printStackTrace();
        }
        // Keep the JVM alive so the session cluster is not torn down immediately
        Thread.sleep(10000000);
    }
}
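
Once clusterClient is available, jobs can be submitted to the running session cluster through ClusterClient#submitJob. A minimal sketch continuing from the test above, assuming the usual Flink imports (the pipeline itself is just a placeholder):

java
// Build a trivial JobGraph locally and submit it to the session cluster
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.fromElements(1, 2, 3).print();
JobGraph jobGraph = env.getStreamGraph().getJobGraph();

// submitJob is asynchronous and completes with the JobID once the submission is accepted
JobID jobId = clusterClient.submitJob(jobGraph).get();
System.out.println("Submitted job " + jobId);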

In its start() method, MiniCluster brings up the QueryService, RPCService, ZooKeeper (when configured), BlobServer, TaskManager, Dispatcher leader services, ResourceManager, DispatcherGateway, and WebMonitor, and these components then communicate with each other over RPC.

After the MiniCluster has started, a job is submitted by calling submitJob, as sketched below.
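
A minimal sketch of that flow, assuming the usual flink-runtime and flink-streaming classes are on the classpath (the pipeline and its sizing are placeholders):

java
// start() brings up the RPC services, BlobServer, TaskManager(s), Dispatcher,
// ResourceManager and WebMonitor described above; submitJob() hands a JobGraph
// to the Dispatcher.
MiniClusterConfiguration miniClusterConfiguration = new MiniClusterConfiguration.Builder()
    .setConfiguration(new Configuration())
    .setNumTaskManagers(1)
    .setNumSlotsPerTaskManager(1)
    .build();

try (MiniCluster miniCluster = new MiniCluster(miniClusterConfiguration)) {
    miniCluster.start();

    // Build a trivial JobGraph from a local StreamExecutionEnvironment (placeholder pipeline)
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.fromElements(1, 2, 3).print();
    JobGraph jobGraph = env.getStreamGraph().getJobGraph();

    // submitJob() is asynchronous; wait for the submission acknowledgement,
    // then block until the job result is available
    miniCluster.submitJob(jobGraph).get();
    miniCluster.requestJobResult(jobGraph.getJobID()).get();
}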

Task deployment then goes from the JobMaster through RpcTaskManagerGateway to the TaskExecutor.

Starting Flink in local standalone mode from the command line

Run a job:

./bin/flink run ./examples/streaming/TopSpeedWindowing.jar

  1. This command invokes CliFrontend.main().

  2. CliFrontend.main() then calls its internal run() method, which in turn calls executeProgram().

  3. Finally, CliFrontend.executeProgram() delegates to ClientUtils.executeProgram() (a programmatic equivalent is sketched after this list).

  4. The standalone session cluster itself (started beforehand, e.g. with ./bin/start-cluster.sh) is launched via the main() method of StandaloneSessionClusterEntrypoint.
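
What CliFrontend and ClientUtils do can also be approximated in code: wrap the jar in a PackagedProgram, compile it to a JobGraph, and submit it to the running standalone session cluster over REST. A minimal sketch, assuming the usual Flink client imports and that the cluster's REST endpoint is at localhost:8081 (host, port and parallelism are placeholder assumptions):

java
// Point the client at the local standalone session cluster (placeholder host/port)
Configuration configuration = new Configuration();
configuration.set(RestOptions.ADDRESS, "localhost");
configuration.set(RestOptions.PORT, 8081);

// Wrap the example jar as a PackagedProgram, as CliFrontend does for `flink run`
PackagedProgram program = PackagedProgram.newBuilder()
    .setJarFile(new File("./examples/streaming/TopSpeedWindowing.jar"))
    .build();

// Compile the program into a JobGraph (default parallelism 1, do not suppress stdout)
JobGraph jobGraph = PackagedProgramUtils.createJobGraph(program, configuration, 1, false);

// Submit the JobGraph to the standalone cluster via its REST endpoint
try (RestClusterClient<StandaloneClusterId> client =
         new RestClusterClient<>(configuration, StandaloneClusterId.getInstance())) {
    JobID jobId = client.submitJob(jobGraph).get();
    System.out.println("Submitted job " + jobId);
}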

When RestServerEndpoint's start() method runs, it registers Netty ChannelHandlers; the concrete handler types and implementations can be inspected in WebMonitorEndpoint.

On the JobManager side, job execution then starts via JobMaster::onStart -> JobMaster::startJobExecution.

Command-line startup in the official documentation

yarn: https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/resource-providers/yarn/

kubernetes: https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/resource-providers/native_kubernetes/

standalone: https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/resource-providers/standalone/overview/
