FuSAGNet Evaluation Analysis

  • 1. Procedure
    • 1.1 Calculating test_score_forecasting
    • 1.2 Calculating test_score_reconstruction
    • 1.3 IMPORTANT: Calculating test_score_whm (Weighted Harmonic Mean)
    • 1.4 Calculating F1, Precision and Recall
  • 2. Summary

1. Procedure

1.1 Calculating test_score_forecasting

Function Name:

  • get_full_err_scores()
Params:

  • test_result: y_hat predicted by best_model on the test dataset
  • val_result: y_hat predicted by best_model on the validation dataset

Purpose:

Compute a smoothed error score for every feature, separately for test_result and val_result.

  • get_err_scores
Params:

  • err_scores: (test_delta - n_err_mid) / (np.abs(n_err_iqr) + epsilon)
  • test_delta: set before testing; the default is 0.5
  • n_err_mid: the median of np.subtract(pred_data, ground_truth)
  • n_err_iqr: the interquartile range of np.subtract(pred_data, ground_truth)
  • err_scores: one row of smoothed_scores

Return:

  • all_scores (the per-feature scores from get_err_scores, concatenated together) and all_norms
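
To make the quantities above concrete, here is a minimal sketch of the per-feature score computation, assuming GDN-style evaluation code; the values epsilon = 1e-2, a smoothing window of 3, and the exact definition of test_delta are assumptions rather than the confirmed FuSAGNet implementation:

```python
import numpy as np

def get_err_scores_sketch(pred, gt, epsilon=1e-2, smooth_window=3):
    """Normalize one feature's prediction errors by their median/IQR, then smooth."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)

    # Absolute deviation between prediction and ground truth at every timestamp
    test_delta = np.abs(np.subtract(pred, gt))

    # n_err_mid / n_err_iqr: median and interquartile range of the deviations
    n_err_mid = np.median(test_delta)
    n_err_iqr = np.percentile(test_delta, 75) - np.percentile(test_delta, 25)

    # Normalized error score; epsilon keeps the denominator away from zero
    err_scores = (test_delta - n_err_mid) / (np.abs(n_err_iqr) + epsilon)

    # Simple moving-average smoothing over the previous few timestamps
    smoothed = np.zeros_like(err_scores)
    for i in range(smooth_window, len(err_scores)):
        smoothed[i] = np.mean(err_scores[i - smooth_window:i + 1])
    return smoothed
```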

1.2 Calculating test_score_reconstruction

Function Name: get_full_err_scores()

Same as above.

1.3 IMPORTANT: Calculating test_score_whm (Weighted Harmonic Mean)

The previous two steps prepare the inputs for calculating the WHM.

Concretely, the Weighted Harmonic Mean function takes two input values x1 and x2 together with their weights w1 and w2 and computes their weighted harmonic mean. This averages different values according to their weights, reflecting their relative importance in the overall result.

The epsilon in the function ensures that the denominator never becomes zero, avoiding division errors. This improves numerical stability, especially when the inputs or weights contain very small values.

The function is useful in mathematical models, statistical analysis, and engineering applications, particularly when the weighting between different variables needs to be taken into account. The weighted harmonic mean is a general statistical tool for weighted averaging in many situations.
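
A minimal sketch of such a function, assuming the forecasting and reconstruction scores are the two inputs and that epsilon is added to each denominator term (the exact form used by FuSAGNet may differ):

```python
import numpy as np

def weighted_harmonic_mean(x1, x2, w1=0.5, w2=0.5, epsilon=1e-8):
    """Weighted harmonic mean of two scores (scalars or NumPy arrays).

    epsilon keeps each denominator term finite when a score is close to zero.
    """
    x1 = np.asarray(x1, dtype=np.float64)
    x2 = np.asarray(x2, dtype=np.float64)
    return (w1 + w2) / (w1 / (x1 + epsilon) + w2 / (x2 + epsilon))
```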

The WHM is calculated for every feature, and the per-feature results are finally concatenated into one matrix.

This gives test_score_whm, which is used in the next step.
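
Because the sketch above operates elementwise on NumPy arrays, applying it feature by feature and stacking the results is straightforward; the score-matrix names below (test_scores_forecasting, test_scores_reconstruction, each of shape [features, timestamps]) are hypothetical:

```python
# Hypothetical per-feature combination of forecasting and reconstruction scores.
test_score_whm = np.vstack([
    weighted_harmonic_mean(fc_row, rc_row)
    for fc_row, rc_row in zip(test_scores_forecasting, test_scores_reconstruction)
])
```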

1.4 Calculating F1, Precision and Recall

Major function name:

get_best_performance_data()

Procedure:

1. Calculating the maximum over the 51 features at every single timestamp

NOTICE: if topk == 1, the code below simply computes the maximum of every column.

```python
total_features = total_err_scores.shape[0]
topk_indices = np.argpartition(
    total_err_scores, range(total_features - topk - 1, total_features), axis=0
)[-topk:]  # indices of the top-k feature scores at every timestamp
total_topk_err_scores = np.sum(
    np.take_along_axis(total_err_scores, topk_indices, axis=0), axis=0
)  # up to here this is overkill: you could simply take the max of every column of total_err_scores
```
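
When topk == 1, the same result could therefore be obtained in one step; a tiny sanity-check sketch, not part of the original code:

```python
# Equivalent shortcut for topk == 1: the per-column (per-timestamp) maximum.
total_topk_err_scores = total_err_scores.max(axis=0)
```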

2. Calculating threshold

```python
final_topk_fmeas, thresholds = eval_scores(
    total_topk_err_scores, gt_labels, 400, return_threshold=True
)
```

First, rank total_topk_err_scores, the array produced in the previous step.
Every item in total_topk_err_scores is given an ordinal rank.
Generate 400 candidate threshold fractions from 0 to 1 with a step of 0.0025.
Get a boolean prediction list according to this code:
```python
cur_pred = scores_sorted > th_vals[i] * len(scores)
```
Try every threshold and keep the best one:
```python
for i in range(th_steps):
    cur_pred = scores_sorted > th_vals[i] * len(scores)
    fmeas[i] = f1_score(true_scores, cur_pred)
    score_index = scores_sorted.tolist().index(int(th_vals[i] * len(scores) + 1))
    thresholds[i] = scores[score_index]
```
Finally, we get a list of F1 scores, one per candidate threshold.
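
For context, here is a minimal self-contained sketch of this sweep, assuming scipy's rankdata and scikit-learn's f1_score as in GDN-style evaluation code (the real eval_scores may differ in details such as padding of the score array):

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import f1_score

def eval_scores_sketch(scores, true_labels, th_steps=400):
    """Sweep rank-based thresholds over anomaly scores and report F1 for each."""
    scores = np.asarray(scores, dtype=np.float64)
    # Ordinal rank of every score (1..len(scores)), so thresholds become rank cutoffs
    scores_sorted = rankdata(scores, method="ordinal")
    # Candidate threshold fractions: 0, 1/th_steps, ..., (th_steps - 1)/th_steps
    th_vals = np.arange(th_steps) / th_steps

    fmeas, thresholds = [], []
    for frac in th_vals:
        # Predict "anomaly" for points whose rank exceeds the current cutoff
        cur_pred = scores_sorted > frac * len(scores)
        fmeas.append(f1_score(true_labels, cur_pred))
        # Recover the raw score value sitting at the cutoff rank
        score_index = scores_sorted.tolist().index(int(frac * len(scores)) + 1)
        thresholds.append(scores[score_index])
    return fmeas, thresholds
```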

2. Summary

Honestly, the most important step is calculating the WHM matrix. After that we just need to take the largest value in every column of that matrix, then set a threshold range and try every candidate.
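
Tying the hypothetical sketches above together, the whole evaluation could be summarized as follows (the variable names are assumptions, not the original API):

```python
# 1) combine forecasting and reconstruction scores elementwise,
# 2) take the per-timestamp maximum over features,
# 3) sweep thresholds and pick the best F1.
test_score_whm = weighted_harmonic_mean(test_scores_forecasting, test_scores_reconstruction)
total_topk_err_scores = test_score_whm.max(axis=0)
fmeas, thresholds = eval_scores_sketch(total_topk_err_scores, gt_labels, th_steps=400)
best = int(np.argmax(fmeas))
print(f"best F1 = {fmeas[best]:.4f} at threshold {thresholds[best]:.4f}")
```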
