SQLMetric in Spark Plan

SQLMetric

SparkPlan exposes the following basic methods for working with metrics:

  /**
   * @return All metrics of this SparkPlan, keyed by metric name.
   */
  def metrics: Map[String, SQLMetric] = Map.empty

  /**
   * @return [[SQLMetric]] for the `name`.
   */
  def longMetric(name: String): SQLMetric = metrics(name)
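
A concrete physical operator typically overrides metrics with entries built through SQLMetrics.createMetric and updates them while producing rows. The following is a minimal sketch of that pattern; the operator name MyCountingExec and the metric key are made up for illustration, and withNewChildInternal is only needed on Spark 3.2+:

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.Attribute
import org.apache.spark.sql.execution.{SparkPlan, UnaryExecNode}
import org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}

// Hypothetical operator that counts the rows it emits via an SQLMetric.
case class MyCountingExec(child: SparkPlan) extends UnaryExecNode {

  // Declare the metrics this operator exposes; the key "numOutputRows"
  // is what longMetric("numOutputRows") looks up later.
  override lazy val metrics: Map[String, SQLMetric] = Map(
    "numOutputRows" -> SQLMetrics.createMetric(sparkContext, "number of output rows"))

  override def output: Seq[Attribute] = child.output

  override protected def doExecute(): RDD[InternalRow] = {
    val numOutputRows = longMetric("numOutputRows")
    child.execute().mapPartitions { iter =>
      iter.map { row =>
        numOutputRows += 1   // accumulated per task, merged on the driver
        row
      }
    }
  }

  // Required by Spark 3.2+; remove on older versions.
  override protected def withNewChildInternal(newChild: SparkPlan): MyCountingExec =
    copy(child = newChild)
}

SQLMetric values are accumulators: they are updated on the executors and aggregated on the driver, which is how they end up displayed next to the operator in the SQL tab of the web UI.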
BytesToBytesMap

BytesToBytesMap keeps two counters, numProbes and numKeyLookups, and derives from them the average number of hash probes per key lookup:
public final class BytesToBytesMap extends MemoryConsumer {

  /**
   * Returns the average number of probes per key lookup.
   */
  public double getAvgHashProbesPerKey() {
    return (1.0 * numProbes) / numKeyLookups;
  }

  /**
   * Looks up a key, and saves the result in provided `loc`.
   *
   * This is a thread-safe version of `lookup` and can be used by multiple threads.
   */
  public void safeLookup(Object keyBase, long keyOffset, int keyLength, Location loc, int hash) {
    assert(longArray != null);

    numKeyLookups++;

    int pos = hash & mask;
    int step = 1;
    while (true) {
      numProbes++;
      if (longArray.get(pos * 2) == 0) {
        // This is a new key.
        loc.with(pos, hash, false);
        return;
      } else {
        long stored = longArray.get(pos * 2 + 1);
        if ((int) (stored) == hash) {
          // Full hash code matches.  Let's compare the keys for equality.
          loc.with(pos, hash, true);
          if (loc.getKeyLength() == keyLength) {
            final boolean areEqual = ByteArrayMethods.arrayEquals(
              keyBase,
              keyOffset,
              loc.getKeyBase(),
              loc.getKeyOffset(),
              keyLength
            );
            if (areEqual) {
              return;
            }
          }
        }
      }
      pos = (pos + step) & mask;
      step++;
    }
  }

  // ... remaining fields and methods of BytesToBytesMap omitted ...
}

Here, a hash probe is essentially one step of a hash lookup: safeLookup increments numKeyLookups once per call and numProbes once for every slot it inspects, so getAvgHashProbesPerKey() (numProbes / numKeyLookups) is the average number of slots that must be inspected to resolve a key.
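
To make the bookkeeping concrete, the sketch below is a small, self-contained open-addressing table that counts probes and lookups the way safeLookup does; the class and method names are invented for this example, and the ratio it computes corresponds to getAvgHashProbesPerKey(). For instance, 15 probes over 10 lookups gives an average of 1.5, meaning half the lookups needed one extra probing step:

// Minimal sketch of safeLookup-style probe/lookup bookkeeping (illustrative only).
class ProbeCountingMap(capacity: Int) {
  require((capacity & (capacity - 1)) == 0, "capacity must be a power of two")

  private val keys = new Array[String](capacity)
  private val mask = capacity - 1
  private var numProbes = 0L
  private var numKeyLookups = 0L

  // Average number of slots inspected per lookup, cf. getAvgHashProbesPerKey().
  def avgHashProbesPerKey: Double =
    if (numKeyLookups == 0) 0.0 else numProbes.toDouble / numKeyLookups

  // Returns the slot where `key` lives, claiming an empty slot if the key is new.
  def lookup(key: String): Int = {
    numKeyLookups += 1                 // one lookup per call
    var pos = key.hashCode & mask
    var step = 1
    while (true) {
      numProbes += 1                   // one probe per slot inspected
      if (keys(pos) == null) {         // empty slot: the key is new
        keys(pos) = key
        return pos
      } else if (keys(pos) == key) {   // found an existing key
        return pos
      }
      pos = (pos + step) & mask        // same probing scheme as safeLookup
      step += 1
    }
    -1 // unreachable
  }
}

The fuller the table, the more collisions occur and the further the average climbs above 1.0, which is exactly the kind of degradation an average-style SQLMetric built from these counters is meant to surface.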
