FuSAGNet Evaluation Analysis

  • 1. Procedure
    • 1.1 Calculating test_score_forecasting
    • 1.2 Calculating test_score_reconstruction
    • 1.3 IMPORTANT: Calculating test_score_whm (Weighted Harmonic Mean)
    • 1.4 Calculating F1, Precision and Recall
      • 1. Calculating the max score at every timestamp among the 51 features
      • 2. Calculating threshold
  • 2. Summary

1. Procedure

1.1 Calculating test_score_forecasting

Function Name:

  • get_full_err_scores()
| Params | Description |
| --- | --- |
| test_result | y_hat predicted by best_model on the test dataset |
| val_result | y_hat predicted by best_model on the validation dataset |

Purpose:

Get the smoothed error score (smoothed_score) for every feature, separately on test_result and on val_result.

  • get_err_scores
| Params | Description |
| --- | --- |
| err_scores | (test_delta - n_err_mid) / (np.abs(n_err_iqr) + epsilon) |
| test_delta | set before testing; the default is 0.5 |
| n_err_mid | median of np.subtract(pred_data, ground_truth) |
| n_err_iqr | interquartile range of np.subtract(pred_data, ground_truth) |
| err_scores | one row of smoothed_scores |

Return:

  • all_scores (the per-feature scores from get_err_scores, concatenated together) and all_norms
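
To make the error-score computation concrete, here is a minimal sketch under the definitions above. It is not FuSAGNet's actual code: the function names, the use of the absolute error, and the defaults epsilon = 1e-2 and a smoothing window of 3 are assumptions.

```py
import numpy as np

def err_median_and_iqr(pred, gt):
    # median and interquartile range of the (assumed absolute) prediction error
    err = np.abs(np.subtract(np.asarray(pred, dtype=np.float64),
                             np.asarray(gt, dtype=np.float64)))
    return np.median(err), np.percentile(err, 75) - np.percentile(err, 25)

def smoothed_err_scores(pred, gt, epsilon=1e-2, window=3):
    # normalize the per-timestamp deviation by the error median and IQR, matching
    # err_scores = (test_delta - n_err_mid) / (np.abs(n_err_iqr) + epsilon)
    n_err_mid, n_err_iqr = err_median_and_iqr(pred, gt)
    test_delta = np.abs(np.subtract(np.asarray(pred, dtype=np.float64),
                                    np.asarray(gt, dtype=np.float64)))
    err_scores = (test_delta - n_err_mid) / (np.abs(n_err_iqr) + epsilon)
    # smooth each score with a short trailing moving average
    smoothed = np.zeros_like(err_scores)
    for i in range(window, len(err_scores)):
        smoothed[i] = np.mean(err_scores[i - window : i + 1])
    return smoothed

# all_scores / all_norms would then be built by stacking one such row per feature,
# e.g. np.vstack([smoothed_err_scores(p, g) for p, g in zip(test_preds, test_gts)])
# (test_preds / test_gts are hypothetical per-feature prediction and ground-truth lists)
```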

1.2 Calculating test_score_reconstruction

Function Name: get_full_err_scores()

Same as above.

1.3 IMPORTANT: Calculating test_score_whm (Weighted Harmonic Mean)

The previous two steps prepare the inputs for calculating the WHM.

Specifically, the Weighted Harmonic Mean function takes two input values x1 and x2, together with their weights w1 and w2, and computes the weighted harmonic mean of the two values. This lets different values be averaged according to their weights, reflecting their relative importance in the overall average.

The epsilon in the function keeps the denominator from reaching zero and thus avoids division errors. This improves numerical stability, especially when the input values or weights are very small.

The function is useful in mathematical models, statistical analysis, and engineering applications, particularly when the weighting between different variables matters. The weighted harmonic mean is a general statistical tool for weighted averaging in many settings.
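
A minimal sketch of such a two-value function; the exact placement of epsilon and the default weights here are assumptions and may differ from FuSAGNet's code.

```py
import numpy as np

def weighted_harmonic_mean(x1, x2, w1=0.5, w2=0.5, epsilon=1e-8):
    # element-wise weighted harmonic mean of two score arrays;
    # epsilon keeps the denominators away from zero
    x1 = np.asarray(x1, dtype=np.float64)
    x2 = np.asarray(x2, dtype=np.float64)
    return (w1 + w2) / (w1 / (x1 + epsilon) + w2 / (x2 + epsilon))
```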

Calculate the WHM for every feature and finally concatenate the results into one matrix.

Finally, we get test_score_whm, which will be used in the next step.
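
For illustration, a hedged sketch of that combination step, using the weighted_harmonic_mean sketch above; the matrix names, shapes, and equal weights are assumptions rather than values from the paper's code.

```py
import numpy as np

# hypothetical forecasting / reconstruction score matrices, shape [n_features, n_timestamps]
test_scores_fc = np.random.rand(51, 1000)
test_scores_rc = np.random.rand(51, 1000)

# combine them element-wise into the final anomaly-score matrix
test_score_whm = weighted_harmonic_mean(test_scores_fc, test_scores_rc, w1=0.5, w2=0.5)
```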

1.4 Calculating F1, Precision and Recall

Major function name:

get_best_performance_data()

Procedure:

1. Calculating the max score at every timestamp among the 51 features

NOTICE: if topk == 1, the code below simply computes the maximum value in every column.

```py
total_features = total_err_scores.shape[0]
topk_indices = np.argpartition(
    total_err_scores, range(total_features - topk - 1, total_features), axis=0
)[-topk:]  # indices of the topk largest values among the 51 features at each timestamp
total_topk_err_scores = np.sum(
    np.take_along_axis(total_err_scores, topk_indices, axis=0), axis=0
)  # up to this point this is all overkill: one could simply extract the largest value of each column of total_err_scores
```
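
As the comment above notes, with topk == 1 the same quantity can be obtained directly; a minimal equivalent, assuming total_err_scores is a [n_features, n_timestamps] NumPy array:

```py
total_topk_err_scores = np.max(total_err_scores, axis=0)  # column-wise maximum over the 51 features
```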

2. Calculating threshold

```py
final_topk_fmeas, thresholds = eval_scores(
    total_topk_err_scores, gt_labels, 400, return_threshold=True
)
```
  • First, rank total_topk_err_scores, the array obtained in the last step.
  • Give every item of total_topk_err_scores an ordinal rank.
  • Generate a list of 400 threshold fractions, from 0 to 1 with a step of 0.0025.
  • Build a boolean prediction list according to this code:
```py
cur_pred = scores_sorted > th_vals[i] * len(scores)
```
  • Try every threshold and keep the best one:
```py
for i in range(th_steps):
    # flag a point as anomalous when its rank exceeds the current percentile cutoff
    cur_pred = scores_sorted > th_vals[i] * len(scores)
    fmeas[i] = f1_score(true_scores, cur_pred)
    # recover the raw score that sits exactly at this rank cutoff
    score_index = scores_sorted.tolist().index(int(th_vals[i] * len(scores) + 1))
    thresholds[i] = scores[score_index]
```
  • Finally, we obtain a list of F1 scores, one per threshold candidate.
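
The section title also mentions precision and recall. A hedged sketch of how they could be computed once the best threshold is known; the selection logic inside get_best_performance_data may differ in detail, and the variable names below follow the code above.

```py
import numpy as np
from sklearn.metrics import precision_score, recall_score

best_i = int(np.argmax(final_topk_fmeas))                  # threshold index with the highest F1
pred_labels = total_topk_err_scores > thresholds[best_i]   # final binary anomaly predictions
precision = precision_score(gt_labels, pred_labels)
recall = recall_score(gt_labels, pred_labels)
```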

2. Summary

Honestly, the most important step is calculating that WHM matrix. After that, we just need to take the largest value in every column of the WHM matrix, set a threshold range, and try every candidate threshold.
