Multiple Ways to Start Flink Locally

Application mode: submitting to YARN programmatically

Java:
// Create and start the YARN client
YarnConfiguration yarnConfiguration = new YarnConfiguration();
YarnClient yarnClient = YarnClient.createYarnClient();
yarnClient.init(yarnConfiguration);
yarnClient.start();
Configuration configuration = new Configuration();
// Merge in any caller-supplied configuration
if (customConfiguration != null) {
  configuration.addAll(customConfiguration);
}
configuration.set(JobManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("1024m"));
configuration.set(TaskManagerOptions.TOTAL_PROCESS_MEMORY, MemorySize.parse("1024m"));
configuration.set(DeploymentOptions.TARGET, YarnDeploymentTarget.APPLICATION.getName());
// Path to flink-dist-*.jar (placeholder)
String distPath = "<path-to-flink-dist-jar>";
configuration.set(YarnConfigOptions.FLINK_DIST_JAR, distPath);
// Path to the user jar that should be executed (placeholder)
String examplePath = "<path-to-user-jar>";
configuration.set(PipelineOptions.JARS, Collections.singletonList(examplePath));
// Directory containing the Flink lib jars (placeholder)
String dirPath = "<path-to-flink-lib-dir>";
// Upload the Flink lib jars to HDFS (the same path is reused on HDFS)
FileSystem fileSystem = FileSystem.get(yarnConfiguration);
fileSystem.copyFromLocalFile(new Path(dirPath), new Path(dirPath));
configuration.set(YarnConfigOptions.PROVIDED_LIB_DIRS, Collections.singletonList(dirPath));
YarnClientYarnClusterInformationRetriever yarnClientYarnClusterInformationRetriever =
  YarnClientYarnClusterInformationRetriever.create(yarnClient);
YarnClusterDescriptor yarnClusterDescriptor = new YarnClusterDescriptor(
  configuration,
  yarnConfiguration,
  yarnClient,
  yarnClientYarnClusterInformationRetriever,
  true
);
ClusterSpecification clusterSpecification = new ClusterSpecification
  .ClusterSpecificationBuilder()
  .setSlotsPerTaskManager(1)
  .createClusterSpecification();

ApplicationConfiguration applicationConfiguration = new ApplicationConfiguration(
  new String[]{},
  // Fully qualified name of the main class to run (placeholder)
  "<fully-qualified-main-class>"
);
try {
  // Deploy the application cluster on YARN
  yarnClusterDescriptor.deployApplicationCluster(clusterSpecification, applicationConfiguration);
} catch (ClusterDeploymentException e) {
  e.printStackTrace();
}
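
Note that deployApplicationCluster actually returns a ClusterClientProvider<ApplicationId>, which the snippet above ignores. A minimal sketch, assuming the return value is captured inside the try block, of using it to obtain a client and the YARN application id:

Java:
ClusterClientProvider<ApplicationId> clusterClientProvider =
  yarnClusterDescriptor.deployApplicationCluster(clusterSpecification, applicationConfiguration);
ClusterClient<ApplicationId> clusterClient = clusterClientProvider.getClusterClient();
// getClusterId() returns the ApplicationId of the YARN application that was just deployed
System.out.println("Deployed as YARN application: " + clusterClient.getClusterId());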

Session mode: submitting to YARN programmatically

Java:
public class YarnFlinkSessionTest {
    ClusterClient<ApplicationId> clusterClient;
    @Test
    void test() throws ExecutionException, InterruptedException {
        // Create and start the YARN client
        YarnConfiguration yarnConfiguration = new YarnConfiguration();
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(yarnConfiguration);
        yarnClient.start();
        Configuration configuration = new Configuration();
        configuration.set(JobManagerOptions.TOTAL_PROCESS_MEMORY,
            MemorySize.parse("1024m"));
        configuration.set(TaskManagerOptions.TOTAL_PROCESS_MEMORY,
            MemorySize.parse("1024m"));
        configuration.set(YarnConfigOptions.FLINK_DIST_JAR, "${FLINK_HOME}/lib/flink-dist-1.16.2.jar");
        YarnClientYarnClusterInformationRetriever yarnClientYarnClusterInformationRetriever =
            YarnClientYarnClusterInformationRetriever.create(yarnClient);
        YarnClusterDescriptor yarnClusterDescriptor = new YarnClusterDescriptor(
            configuration,
            yarnConfiguration,
            yarnClient,
            yarnClientYarnClusterInformationRetriever,
            true
        );
        ClusterSpecification clusterSpecification = new ClusterSpecification.ClusterSpecificationBuilder()
            .setMasterMemoryMB(1024)
            .setTaskManagerMemoryMB(1024)
            .setSlotsPerTaskManager(1)
            .createClusterSpecification();
        try {
            ClusterClientProvider<ApplicationId> applicationIdClusterClientProvider = yarnClusterDescriptor.deploySessionCluster(clusterSpecification);
            clusterClient = applicationIdClusterClientProvider.getClusterClient();
        } catch (ClusterDeploymentException e) {
            e.printStackTrace();
        }
        Thread.sleep(10000000); // keep the test JVM alive so the session cluster stays up
    }
}
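
As a follow-up (not part of the original test), here is a minimal sketch of submitting a job to the session cluster through the ClusterClient obtained above; the trivial pipeline is only an assumption for illustration:

Java:
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.fromElements("hello", "flink").print();
// Turn the pipeline into a JobGraph and hand it to the session cluster
JobGraph jobGraph = env.getStreamGraph().getJobGraph();
// submitJob() completes with the JobID once the job has been accepted by the Dispatcher
JobID jobId = clusterClient.submitJob(jobGraph).get();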

In its start() method, MiniCluster brings up the QueryService, RPCService, Zookeeper, BlobServer, TaskManager, DispatcherLeader, ResourceManager, DispatcherGateway and WebMonitor, which communicate with each other over RPC.

After the MiniCluster has started, submitJob is called to submit a job.

The submitted job is then dispatched to the TaskExecutor via the RpcTaskManagerGateway.
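
A minimal sketch of this MiniCluster path, assuming a trivial pipeline purely for illustration:

Java:
Configuration config = new Configuration();
MiniClusterConfiguration miniClusterConfiguration = new MiniClusterConfiguration.Builder()
    .setConfiguration(config)
    .setNumTaskManagers(1)
    .setNumSlotsPerTaskManager(1)
    .build();
try (MiniCluster miniCluster = new MiniCluster(miniClusterConfiguration)) {
    // start() brings up the RPC services, BlobServer, TaskManager, Dispatcher,
    // ResourceManager and WebMonitor described above
    miniCluster.start();
    // Build a JobGraph from a trivial pipeline (illustration only)
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.fromElements(1, 2, 3).print();
    JobGraph jobGraph = env.getStreamGraph().getJobGraph();
    // submitJob() hands the JobGraph to the Dispatcher via its gateway
    miniCluster.submitJob(jobGraph).get();
}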

Starting Flink in local Standalone mode from the command line

Run a job:

./bin/flink run ./examples/streaming/TopSpeedWindowing.jar

  1. The command invokes CliFrontend.main().

  2. CliFrontend.main() calls its internal run() method, which then calls the internal executeProgram() method.

  3. CliFrontend.executeProgram() in turn calls ClientUtils.executeProgram().

  4. Finally, Flink itself is started through the main() method of StandaloneSessionClusterEntrypoint.

When RestServerEndpoint runs its start() method, it registers Netty ChannelHandlers; the concrete handler types and implementations can be inspected in WebMonitorEndpoint.

JobManager::onStart -> JobMaster::startJobExecution

Command-line startup in the official documentation

yarn: https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/resource-providers/yarn/

kubernetes: https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/resource-providers/native_kubernetes/

standalone: https://nightlies.apache.org/flink/flink-docs-release-1.20/docs/deployment/resource-providers/standalone/overview/
