FuSAGNet Evaluation Analysis

  • 1. Procedure
    • 1.1 Calculating test_score_forecasting
    • 1.2 Calculating test_score_reconstruction
    • 1.3 IMPORTANT: Calculating test_score_whm (Weighted Harmonic Mean)
    • 1.4 Calculating F1, Precision and Recall
      • Major function name
      • Procedure
      • 1. Finding the maximum score at each timestamp across the 51 features
      • 2. Calculating the threshold
        • Firstly, rank total_topk_err_scores, the array from the last step
        • Give every item in total_topk_err_scores an order
        • Generate a list of 400 threshold fractions with step 0.0025
        • Get a boolean prediction list according to this code
        • Try every threshold and keep the best one
        • Finally, we get a list of F1 scores
  • 2. Summary

1. Procedure

1.1 Calculating test_score_forecasting

Function Name:

  • get_full_err_scores()

| Params      | Description                                              |
| ----------- | -------------------------------------------------------- |
| test_result | Predicted y_hat by best_model on the test dataset        |
| val_result  | Predicted y_hat by best_model on the validation dataset  |

Purpose:

Compute a smoothed_score for every feature, separately for test_result and val_result.

  • get_err_scores
| Params     | Description                                                 |
| ---------- | ----------------------------------------------------------- |
| err_scores | (test_delta - n_err_mid) / (np.abs(n_err_iqr) + epsilon)    |
| test_delta | set before testing; the default is 0.5                      |
| n_err_mid  | median of np.subtract(pred_data, ground_truth)              |
| n_err_iqr  | interquartile range of np.subtract(pred_data, ground_truth) |
| err_scores | a row in smoothed_scores                                    |

Return:

  • all_scores (the per-feature scores from get_err_scores, concatenated together) and all_norms
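
For intuition, here is a minimal sketch of this per-feature scoring, assuming GDN-style evaluation: the absolute prediction error is normalized by its median and IQR and then lightly smoothed. The helper name, the epsilon value, and the smoothing window are assumptions for illustration, not taken from the FuSAGNet source.

```python
import numpy as np

def get_err_scores_sketch(pred, gt, epsilon=1e-2, window=3):
    """Median/IQR-normalized prediction error for a single feature (illustrative sketch)."""
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)

    test_delta = np.abs(np.subtract(pred, gt))        # raw prediction error
    n_err_mid = np.median(test_delta)                 # median of the errors
    q1, q3 = np.percentile(test_delta, [25, 75])
    n_err_iqr = q3 - q1                               # interquartile range of the errors

    err_scores = (test_delta - n_err_mid) / (np.abs(n_err_iqr) + epsilon)

    # Smooth each score with the mean of the current and the previous `window` scores.
    smoothed = np.zeros_like(err_scores)
    for i in range(window, len(err_scores)):
        smoothed[i] = np.mean(err_scores[i - window:i + 1])
    return smoothed
```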

1.2 Calculating test_score_reconstruction

Function Name: get_full_err_scores()

Same as above.

1.3 IMPORTANT: Calculating test_score_whm (Weighted Harmonic Mean)

The previous two steps prepare the inputs for calculating the WHM.

Concretely, the Weighted Harmonic Mean function takes two input values x1 and x2 together with their weights w1 and w2 and computes their weighted harmonic mean. This averages different values according to their weights, so that each value's relative importance is reflected in the overall mean.

The epsilon in the function keeps the denominator from becoming zero and thus avoids division errors. This improves numerical stability, especially when the input values or the weights are very small.

The function is useful in mathematical modeling, statistical analysis, and engineering applications, particularly when the weighting between different variables matters. The weighted harmonic mean is a general statistical tool for weighted averaging in many settings.
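
As a reference point, a minimal sketch of such a function is shown below, where x1 and x2 would be the forecasting and reconstruction error scores; the exact placement of epsilon in the FuSAGNet code may differ, so treat this as an assumption.

```python
def weighted_harmonic_mean(x1, x2, w1=0.5, w2=0.5, epsilon=1e-8):
    """Weighted harmonic mean of x1 and x2: (w1 + w2) / (w1/x1 + w2/x2)."""
    # epsilon keeps every denominator strictly positive, so small or zero
    # scores cannot cause a division error.
    return (w1 + w2) / (w1 / (x1 + epsilon) + w2 / (x2 + epsilon) + epsilon)
```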

The WHM is computed feature by feature, and the per-feature results are finally concatenated into one matrix, roughly as sketched below.
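
Assuming the forecasting and reconstruction score matrices share the same (features × timestamps) shape, the combination step could look roughly like this, reusing weighted_harmonic_mean from above (combine_scores is a hypothetical name, not the actual FuSAGNet function):

```python
import numpy as np

def combine_scores(forecast_scores, recon_scores, w1=0.5, w2=0.5):
    """Per-feature WHM of two (n_features, n_timestamps) score matrices."""
    rows = [
        weighted_harmonic_mean(f_row, r_row, w1, w2)  # element-wise over one feature's timeline
        for f_row, r_row in zip(forecast_scores, recon_scores)
    ]
    return np.vstack(rows)  # test_score_whm, shape (n_features, n_timestamps)
```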

Finally we get test_score_whm, which will be used in the next step.

1.4 Calculating F1, Precision and Recall

Major function name:

get_best_performance_data()

Procedure:

1. Finding the maximum score at each timestamp across the 51 features

NOTICE: with topk == 1, the code below simply takes the maximum value of every column.

```python
total_features = total_err_scores.shape[0]
topk_indices = np.argpartition(
    total_err_scores, range(total_features - topk - 1, total_features), axis=0
)[-topk:]  # indices of the top-k largest of the 51 feature scores at each timestamp
total_topk_err_scores = np.sum(
    np.take_along_axis(total_err_scores, topk_indices, axis=0), axis=0
)  # roundabout: with topk == 1 this is just the column-wise max of total_err_scores
```
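
For comparison, the simpler equivalent that the comment alludes to when topk == 1:

```python
import numpy as np

# With topk == 1 the per-timestamp anomaly score is simply the largest
# of the 51 feature scores in that column.
total_topk_err_scores = np.max(total_err_scores, axis=0)
```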

2. Calculating the threshold

```python
final_topk_fmeas, thresholds = eval_scores(
    total_topk_err_scores, gt_labels, 400, return_threshold=True
)
```
Firstly, rank total_topk_err_scores, the array from the last step.
Give every item in total_topk_err_scores an order (its rank).
Generate a list of 400 threshold fractions, from 0 to 1 (exclusive) in steps of 0.0025.
Get a boolean prediction list according to this code:
```python
cur_pred = scores_sorted > th_vals[i] * len(scores)
```
Try every threshold and keep the best one (a full, self-contained version is sketched after this list):
```python
for i in range(th_steps):
    # mark every point whose rank exceeds the current cut-off as anomalous
    cur_pred = scores_sorted > th_vals[i] * len(scores)
    fmeas[i] = f1_score(true_scores, cur_pred)
    # the raw score sitting at the cut-off rank becomes the candidate threshold
    score_index = scores_sorted.tolist().index(int(th_vals[i] * len(scores) + 1))
    thresholds[i] = scores[score_index]
```
Finally, we get a list of F1 scores, one per candidate threshold.
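
Putting the pieces together, here is a self-contained sketch of this threshold search. The call shape follows the snippet above; the ordinal ranking via scipy.stats.rankdata is an assumption based on how scores_sorted is used, so the real eval_scores may differ in detail.

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import f1_score

def eval_scores(scores, true_labels, th_steps=400, return_threshold=False):
    """Grid-search th_steps rank-based thresholds and report F1 for each (sketch)."""
    scores = np.asarray(scores, dtype=np.float64)
    scores_sorted = rankdata(scores, method="ordinal")   # rank (1..N) of every score
    th_vals = np.arange(th_steps) / th_steps             # 0, 0.0025, ..., 0.9975

    fmeas, thresholds = [None] * th_steps, [None] * th_steps
    for i in range(th_steps):
        # flag every point whose rank lies above the current cut-off
        cur_pred = scores_sorted > th_vals[i] * len(scores)
        fmeas[i] = f1_score(true_labels, cur_pred)
        # the raw score at the cut-off rank becomes the candidate threshold
        score_index = scores_sorted.tolist().index(int(th_vals[i] * len(scores) + 1))
        thresholds[i] = scores[score_index]

    return (fmeas, thresholds) if return_threshold else fmeas
```

The best threshold is then simply the one with the highest F1, e.g. `best = int(np.argmax(final_topk_fmeas))` followed by `threshold = thresholds[best]`.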

2. Summary

Honestly, the most important step is computing the WHM matrix. After that, we just take the largest value in every column of that matrix, then set a threshold range and try every candidate, keeping the one with the best F1.
