Spark Partitions

Listing leaf files and directories

An analysis of how this listing is parallelized. The relevant code lives in:

org.apache.spark.util.HadoopFSUtils

      sc.parallelize(paths, numParallelism)
        .mapPartitions { pathsEachPartition =>
          val hadoopConf = serializableConfiguration.value
          pathsEachPartition.map { path =>
            val leafFiles = listLeafFiles(
              path = path,
              hadoopConf = hadoopConf,
              filter = filter,
              contextOpt = None, // Can't execute parallel scans on workers
              ignoreMissingFiles = ignoreMissingFiles,
              ignoreLocality = ignoreLocality,
              isRootPath = isRootLevel,
              parallelismThreshold = Int.MaxValue,
              parallelismMax = 0)
            (path, leafFiles)
          }
        }.collect()

The `numParallelism` passed to `sc.parallelize` above is capped beforehand, so the listing never spawns more tasks than there are root paths:

    // Set the number of parallelism to prevent following file listing from generating many tasks
    // in case of large #defaultParallelism.
    val numParallelism = Math.min(paths.size, parallelismMax)
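The clamping logic is simple enough to verify in isolation. The sketch below (hypothetical standalone object, not Spark source) shows the two regimes: with fewer paths than `parallelismMax`, each path can get its own task; beyond the cap, the task count stays fixed.

```scala
// Hypothetical demo of the clamp used by parallel file listing:
// numParallelism = min(#paths, parallelismMax).
object ParallelismDemo {
  def numParallelism(numPaths: Int, parallelismMax: Int): Int =
    math.min(numPaths, parallelismMax)

  def main(args: Array[String]): Unit = {
    // 50 root paths, default cap of 10000 -> one task per path.
    println(numParallelism(50, 10000))    // 50
    // 20000 root paths -> capped at 10000 tasks; each task lists
    // multiple paths inside its partition via mapPartitions.
    println(numParallelism(20000, 10000)) // 10000
  }
}
```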

`parallelismMax` is ultimately determined by the following configuration:

  val PARALLEL_PARTITION_DISCOVERY_PARALLELISM =
    buildConf("spark.sql.sources.parallelPartitionDiscovery.parallelism")
      .doc("The number of parallelism to list a collection of path recursively, Set the " +
        "number to prevent file listing from generating too many tasks.")
      .version("2.1.1")
      .internal()
      .intConf
      .createWithDefault(10000)
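Although the config is marked `.internal()`, it can still be set by its key. A minimal sketch, assuming Spark SQL is on the classpath and a local master is acceptable (the key and default come from the definition above; the value `100` is just an illustration):

```scala
import org.apache.spark.sql.SparkSession

// Sketch: lower the parallel-partition-discovery cap from its
// default of 10000 down to 100 listing tasks.
val spark = SparkSession.builder()
  .master("local[*]")
  .config("spark.sql.sources.parallelPartitionDiscovery.parallelism", "100")
  .getOrCreate()
```

With this setting, even a table with tens of thousands of root paths will schedule at most 100 listing tasks, each listing its share of paths sequentially inside `mapPartitions`.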