Spark Catalog

Iceberg catalog

https://iceberg.apache.org/docs/latest/spark-configuration/
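
As a quick illustration of the settings documented at that link, the sketch below wires an Iceberg catalog (backed by a Hive metastore) into a SparkSession. The catalog name "demo" and the thrift URI are placeholders, and the iceberg-spark-runtime jar is assumed to be on the classpath.

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("iceberg-catalog-demo")
  .master("local[*]")  // local master just for the sketch
  // Register a catalog named "demo" handled by Iceberg's SparkCatalog.
  .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
  // Back the catalog with a Hive metastore; the URI is a placeholder.
  .config("spark.sql.catalog.demo.type", "hive")
  .config("spark.sql.catalog.demo.uri", "thrift://metastore-host:9083")
  .getOrCreate()

// Tables in this catalog are then addressed as demo.<db>.<table> in Spark SQL.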

Related interfaces (org.apache.spark.sql.catalog.Catalog)

  /**
   * (Scala-specific)
   * Create a table from the given path based on a data source, a schema and a set of options.
   * Then, returns the corresponding DataFrame.
   *
   * @param tableName is either a qualified or unqualified name that designates a table.
   *                  If no database identifier is provided, it refers to a table in
   *                  the current database.
   * @since 2.0.0
   */
  @deprecated("use createTable instead.", "2.2.0")
  def createExternalTable(
      tableName: String,
      source: String,
      schema: StructType,
      options: Map[String, String]): DataFrame = {
    createTable(tableName, source, schema, options)
  }

  /**
   * (Scala-specific)
   * Create a table based on the dataset in a data source, a schema and a set of options.
   * Then, returns the corresponding DataFrame.
   *
   * @param tableName is either a qualified or unqualified name that designates a table.
   *                  If no database identifier is provided, it refers to a table in
   *                  the current database.
   * @since 2.2.0
   */
  def createTable(
      tableName: String,
      source: String,
      schema: StructType,
      options: Map[String, String]): DataFrame
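
A minimal usage sketch of createTable as defined above, assuming a hypothetical database demo_db, a Parquet directory /data/users, and a simple two-column schema (the names and the path are illustrative only):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

val spark = SparkSession.builder()
  .appName("catalog-create-table")
  .master("local[*]")  // local master just for the sketch
  .getOrCreate()

// Schema of the table to register.
val schema = StructType(Seq(
  StructField("id", LongType, nullable = false),
  StructField("name", StringType)
))

// spark.catalog is the org.apache.spark.sql.catalog.Catalog instance;
// supplying a "path" option registers an external table over existing files.
val usersDf = spark.catalog.createTable(
  "demo_db.users",              // qualified table name (database.table)
  "parquet",                    // data source format
  schema,
  Map("path" -> "/data/users")  // assumed location of the Parquet files
)
usersDf.printSchema()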

Hive metastore

By default, Apache Spark's embedded Hive metastore persists its metadata in Apache Derby. Derby requires no configuration, but it allows only one Spark session at a time to use the metadata store, which makes it unsuitable for multi-user environments such as a shared development team setup or production.
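
For shared environments, the usual remedy is to point Spark at an external Hive metastore service rather than the embedded Derby database. A minimal sketch, assuming a metastore is already running at a placeholder thrift URI:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("shared-metastore")
  .master("local[*]")  // local master just for the sketch
  // Point at an external Hive metastore service (URI is a placeholder).
  .config("hive.metastore.uris", "thrift://metastore-host:9083")
  .enableHiveSupport()
  .getOrCreate()

// Multiple Spark sessions can now share this metastore concurrently.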
