Multiple Ways to Start Flink Locally

Application mode: starting on YARN by submitting from code

java
// Create and start the YARN client (standard Hadoop YarnClient API)
YarnConfiguration yarnConfiguration = new YarnConfiguration();
YarnClient yarnClient = YarnClient.createYarnClient();
yarnClient.init(yarnConfiguration);
yarnClient.start();
Configuration configuration = new Configuration();
// Merge any overrides supplied by the caller (customConfiguration is provided externally)
if (customConfiguration != null) {
  configuration.addAll(customConfiguration);
}
configuration.set(JobManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("1024m"));
configuration.set(TaskManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("1024m"));
configuration.set(DeploymentOptions.TARGET, YarnDeploymentTarget.APPLICATION.getName());
// Local path of the flink-dist-<version>.jar (placeholder)
String distPath = "...";
configuration.set(YarnConfigOptions.FLINK_DIST_JAR, distPath);
// Path of the user jar to execute (placeholder)
String examplePath = "...";
configuration.set(PipelineOptions.JARS, Collections.singletonList(examplePath));
// HDFS handle; hadoopClusterTest is the surrounding test's Hadoop cluster fixture
FileSystem fileSystem = FileSystem.get(hadoopClusterTest.getConfig());
// Directory containing the Flink lib jars (placeholder)
String dirPath = "...";
// Upload the Flink lib jars to HDFS under the same path
fileSystem.copyFromLocalFile(new Path(dirPath), new Path(dirPath));
configuration.set(YarnConfigOptions.PROVIDED_LIB_DIRS, Collections.singletonList(dirPath));
// setIfAbsent is a custom helper; PipelineOptions.JARS is already set above, so this is a no-op here
setIfAbsent(configuration, PipelineOptions.JARS, new ArrayList<>());
YarnClientYarnClusterInformationRetriever yarnClientYarnClusterInformationRetriever =
  YarnClientYarnClusterInformationRetriever.create(yarnClient);
YarnClusterDescriptor yarnClusterDescriptor = new YarnClusterDescriptor(
  configuration,
  yarnConfiguration,
  yarnClient,
  yarnClientYarnClusterInformationRetriever,
  true
);
ClusterSpecification clusterSpecification = new ClusterSpecification
  .ClusterSpecificationBuilder()
  .setSlotsPerTaskManager(1)
  .createClusterSpecification();

ApplicationConfiguration applicationConfiguration = new ApplicationConfiguration(
  new String[]{},
  // Fully qualified name of the main class to run (placeholder)
  "..."
);
try {
  // Deploy the Application cluster on YARN
  yarnClusterDescriptor.deployApplicationCluster(clusterSpecification, applicationConfiguration);
} catch (ClusterDeploymentException e) {
  e.printStackTrace();
}
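
If you need to interact with the deployed cluster afterwards (for example to get the YARN ApplicationId), the return value of deployApplicationCluster can be captured instead of being discarded. A minimal sketch, to be placed inside the same try block:

java
// Capture the provider returned by deployApplicationCluster
ClusterClientProvider<ApplicationId> provider =
  yarnClusterDescriptor.deployApplicationCluster(clusterSpecification, applicationConfiguration);
ClusterClient<ApplicationId> clusterClient = provider.getClusterClient();
// For YARN deployments the cluster id is the YARN ApplicationId
ApplicationId applicationId = clusterClient.getClusterId();
System.out.println("Deployed application: " + applicationId);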

Session mode: starting on YARN by submitting from code

java
public class YarnFlinkSessionTest {
    ClusterClient<ApplicationId> clusterClient;
    @Test
    void test() throws ExecutionException, InterruptedException {
        // Create and start the YARN client
        YarnConfiguration yarnConfiguration = new YarnConfiguration();
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(yarnConfiguration);
        yarnClient.start();
        Configuration configuration = new Configuration();
        configuration.set(JobManagerOptions.TOTAL_PROCESS_MEMORY,
            MemorySize.parse("1024m"));
        configuration.set(TaskManagerOptions.TOTAL_PROCESS_MEMORY,
            MemorySize.parse("1024m"));
        configuration.set(YarnConfigOptions.FLINK_DIST_JAR, "${FLINK_HOME}/lib/flink-dist-1.16.2.jar");
        YarnClientYarnClusterInformationRetriever yarnClientYarnClusterInformationRetriever =
            YarnClientYarnClusterInformationRetriever.create(yarnClient);
        YarnClusterDescriptor yarnClusterDescriptor = new YarnClusterDescriptor(
            configuration,
            yarnConfiguration,
            yarnClient,
            yarnClientYarnClusterInformationRetriever,
            true
        );
        ClusterSpecification clusterSpecification = new ClusterSpecification.ClusterSpecificationBuilder()
            .setMasterMemoryMB(1024)
            .setTaskManagerMemoryMB(1024)
            .setSlotsPerTaskManager(1)
            .createClusterSpecification();
        try {
            ClusterClientProvider<ApplicationId> applicationIdClusterClientProvider = yarnClusterDescriptor.deploySessionCluster(clusterSpecification);
            clusterClient = applicationIdClusterClientProvider.getClusterClient();
        } catch (ClusterDeploymentException e) {
            e.printStackTrace();
        }
        // Keep the JVM alive so the deployed session cluster is not torn down immediately
        Thread.sleep(10000000);
    }
}
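
Once the session cluster is deployed, the clusterClient obtained above can be used to submit jobs to it. A minimal sketch, assuming the JobGraph is built from a simple StreamExecutionEnvironment pipeline (the trivial job below is only an illustration):

java
// Build a trivial job and turn it into a JobGraph
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.fromElements(1, 2, 3).print();
JobGraph jobGraph = env.getStreamGraph().getJobGraph();

// submitJob is asynchronous and completes with the JobID once the submission is accepted
JobID jobId = clusterClient.submitJob(jobGraph).get();
System.out.println("Submitted job: " + jobId);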

In its start() method, MiniCluster starts the QueryService, the RPC services, ZooKeeper-based HA services, the BlobServer, the TaskManager, the dispatcher leader components, the ResourceManager, the DispatcherGateway, and the WebMonitor; these components then communicate with each other over RPC.

After the MiniCluster is up, jobs are submitted to it by calling submitJob, as sketched below.
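
A minimal sketch of that flow, using the standard MiniCluster runtime API (the trivial job here is only an illustration):

java
// Configure and start a local MiniCluster
MiniClusterConfiguration cfg = new MiniClusterConfiguration.Builder()
    .setNumTaskManagers(1)
    .setNumSlotsPerTaskManager(1)
    .build();

try (MiniCluster miniCluster = new MiniCluster(cfg)) {
    // start() brings up the local RPC services, BlobServer, TaskManager, Dispatcher, etc.
    miniCluster.start();

    // Build a trivial streaming job and turn it into a JobGraph
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.fromElements(1, 2, 3).print();
    JobGraph jobGraph = env.getStreamGraph().getJobGraph();

    // submitJob is asynchronous; requestJobResult blocks until the job finishes
    miniCluster.submitJob(jobGraph).get();
    miniCluster.requestJobResult(jobGraph.getJobID()).get();
}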

Task deployment then goes from the JobMaster through the RpcTaskManagerGateway to the TaskExecutor.

Starting Flink locally in standalone mode from the command line

Run a job:

./bin/flink run ./examples/streaming/TopSpeedWindowing.jar

  1. This command invokes CliFrontend.main().

  2. CliFrontend.main() calls the internal run() method, which in turn calls the internal executeProgram() method.

  3. CliFrontend.executeProgram() then delegates to ClientUtils.executeProgram() (see the sketch after this list).

  4. Finally, Flink itself is started via the main() method of StandaloneSessionClusterEntrypoint.
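
A rough sketch of the programmatic equivalent of step 3, assuming a standalone session cluster is already running and reachable at localhost:8081; the jar path, host, and port below are placeholders, and the effective configuration that CliFrontend builds from the command-line options is simplified here:

java
// Package the user jar and hand it to ClientUtils.executeProgram, roughly as CliFrontend does internally
Configuration config = new Configuration();
config.set(DeploymentOptions.TARGET, "remote");   // submit to an existing session cluster
config.set(RestOptions.ADDRESS, "localhost");     // placeholder REST endpoint
config.set(RestOptions.PORT, 8081);
config.set(PipelineOptions.JARS,
    Collections.singletonList("file:///path/to/TopSpeedWindowing.jar")); // placeholder jar path

PackagedProgram program = PackagedProgram.newBuilder()
    .setJarFile(new File("/path/to/TopSpeedWindowing.jar"))              // placeholder jar path
    .build();

ClientUtils.executeProgram(new DefaultExecutorServiceLoader(), config, program, false, false);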

When RestServerEndpoint executes its start() method, it registers Netty ChannelHandlers; the concrete handler types and implementations can be found in WebMonitorEndpoint.

JobMaster::onStart -> JobMaster::startJobExecution

Command-line startup in the official documentation

yarn: https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/resource-providers/yarn/

kubernetes: https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/resource-providers/native_kubernetes/

standalone: https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/resource-providers/standalone/overview/
