Contents
- Regression metrics: MSE, RMSE, MAE, R², Adjusted R², MAPE, ARD, AARD
- Classification metrics: Accuracy, Precision, Recall, F1 Score, ROC Curve and AUC, Confusion Matrix
- Clustering metrics: Silhouette Coefficient, Adjusted Rand Index (ARI)
- Other metrics: Log Loss, Jaccard Similarity
Machine learning evaluation metrics can be grouped by task type (classification, regression, clustering, and so on). The sections below introduce the most common ones.
Regression Metrics
Mean Squared Error (MSE)
The average of the squared differences between the predicted and true values.

```python
## Regression metrics
# 1. Mean Squared Error (MSE)
from sklearn.metrics import mean_squared_error
y_true = [3.0, 2.5, 4.0, 5.0, 3.5]
y_pred = [2.8, 2.7, 4.2, 4.8, 3.4]
mse = mean_squared_error(y_true, y_pred)
print(f"MSE: {mse:.2f}")  # 0.03
```
Root Mean Squared Error (RMSE)
The square root of the MSE; it has the same units as the target variable, which makes it easier to interpret.

```python
# 2. Root Mean Squared Error (RMSE)
from sklearn.metrics import mean_squared_error
y_true = [3.0, 2.5, 4.0, 5.0, 3.5]
y_pred = [2.8, 2.7, 4.2, 4.8, 3.4]
mse = mean_squared_error(y_true, y_pred)
rmse = mse ** 0.5
print(f"RMSE: {rmse:.2f}")  # 0.18
```
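As a cross-check, RMSE can also be computed directly from its definition with NumPy (recent scikit-learn versions, 1.4+, additionally provide a dedicated `root_mean_squared_error` function):

```python
# RMSE computed directly from the definition, without scikit-learn
import numpy as np

y_true = np.array([3.0, 2.5, 4.0, 5.0, 3.5])
y_pred = np.array([2.8, 2.7, 4.2, 4.8, 3.4])

# square root of the mean squared difference
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
print(f"RMSE: {rmse:.2f}")  # 0.18
```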
Mean Absolute Error (MAE)
The average of the absolute differences between the predicted and true values.

```python
# 3. Mean Absolute Error (MAE)
from sklearn.metrics import mean_absolute_error
y_true = [3.0, 2.5, 4.0, 5.0, 3.5]
y_pred = [2.8, 2.7, 4.2, 4.8, 3.4]
mae = mean_absolute_error(y_true, y_pred)
print(f"MAE: {mae:.2f}")  # 0.18
```
Coefficient of Determination (R²)
Measures how much of the variance in the dependent variable the model explains. It is typically between 0 and 1, though it can be negative for models that fit worse than simply predicting the mean.

```python
# 4. Coefficient of determination (R²)
from sklearn.metrics import r2_score
y_true = [3.0, 2.5, 4.0, 5.0, 3.5]
y_pred = [2.8, 2.7, 4.2, 4.8, 3.4]
r2 = r2_score(y_true, y_pred)
print(f"R²: {r2:.2f}")  # 0.95
```
Adjusted R²
Penalizes R² for the number of features in the model, making it suitable for comparing regression models of different complexity.

```python
# 5. Adjusted R²
from sklearn.metrics import r2_score
y_true = [3.0, 2.5, 4.0, 5.0, 3.5]
y_pred = [2.8, 2.7, 4.2, 4.8, 3.4]
r2 = r2_score(y_true, y_pred)
n = len(y_true)  # number of samples
k = 1            # assume the model has a single feature
adjusted_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(f"Adjusted R²: {adjusted_r2:.2f}")  # 0.94
```
Mean Absolute Percentage Error (MAPE)
The average of the absolute differences between predicted and true values, expressed as a fraction of the true values. Note that it is undefined when any true value is zero.

```python
# 6. Mean Absolute Percentage Error (MAPE)
import numpy as np
y_true = [3.0, 2.5, 4.0, 5.0, 3.5]
y_pred = [2.8, 2.7, 4.2, 4.8, 3.4]
mape = np.mean(np.abs((np.array(y_true) - np.array(y_pred)) / np.array(y_true))) * 100
print(f"MAPE: {mape:.2f}%")  # 5.30%
```
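scikit-learn also ships a built-in implementation, `mean_absolute_percentage_error` (available since version 0.24). Note that it returns a fraction rather than a percentage:

```python
# MAPE via scikit-learn's built-in (returns a fraction, not a percentage)
from sklearn.metrics import mean_absolute_percentage_error

y_true = [3.0, 2.5, 4.0, 5.0, 3.5]
y_pred = [2.8, 2.7, 4.2, 4.8, 3.4]

mape = mean_absolute_percentage_error(y_true, y_pred)
print(f"MAPE: {mape * 100:.2f}%")  # 5.30%
```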
Absolute Relative Deviation (ARD)
The absolute difference between the predicted and true value of each sample, divided by the true value.

```python
# 7. Absolute Relative Deviation (ARD)
import numpy as np
y_true = np.array([3.0, 2.5, 4.0, 5.0, 3.5])
y_pred = np.array([2.8, 2.7, 4.2, 4.8, 3.4])
ard = np.abs((y_pred - y_true) / y_true) * 100  # per-sample deviation, in percent
# Print the ARD of each sample
print("ARD (absolute relative deviation per sample):")
for i, error in enumerate(ard):
    print(f"Sample {i+1}: {error:.2f}%")
# Sample 1: 6.67%
# Sample 2: 8.00%
# Sample 3: 5.00%
# Sample 4: 4.00%
# Sample 5: 2.86%
```
Average Absolute Relative Deviation (AARD)
The mean of the absolute relative deviations over all samples.

```python
# 8. Average Absolute Relative Deviation (AARD)
import numpy as np
y_true = np.array([3.0, 2.5, 4.0, 5.0, 3.5])
y_pred = np.array([2.8, 2.7, 4.2, 4.8, 3.4])
ard = np.abs((y_pred - y_true) / y_true)
aard = np.mean(ard) * 100  # expressed in percent
print(f"AARD: {aard:.2f}%")  # 5.30%
```
Classification Metrics
Accuracy
The fraction of all samples that the model predicts correctly.

```python
## Classification metrics
# 1. Accuracy
from sklearn.metrics import accuracy_score
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1, 1]
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy:.2f}")  # 0.88
```
Precision
Among the samples the model predicts as positive, the fraction that are actually positive.

```python
# 2. Precision
from sklearn.metrics import precision_score
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1, 1]
precision = precision_score(y_true, y_pred)
print(f"Precision: {precision:.2f}")  # 1.00
```
Recall
Among the samples that are actually positive, the fraction the model correctly predicts as positive.

```python
# 3. Recall
from sklearn.metrics import recall_score
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1, 1]
recall = recall_score(y_true, y_pred)
print(f"Recall: {recall:.2f}")  # 0.80
```
F1 Score
The harmonic mean of precision and recall; a single number that balances both.

```python
# 4. F1 score
from sklearn.metrics import f1_score
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1, 1]
f1 = f1_score(y_true, y_pred)
print(f"F1 Score: {f1:.2f}")  # 0.89
```
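To make the harmonic-mean relationship explicit, the sketch below recomputes F1 from precision and recall and checks it against `f1_score`:

```python
# F1 as the harmonic mean of precision and recall
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1, 1]

p = precision_score(y_true, y_pred)  # 1.00
r = recall_score(y_true, y_pred)     # 0.80
f1_manual = 2 * p * r / (p + r)      # harmonic mean

print(f"manual F1:  {f1_manual:.2f}")                  # 0.89
print(f"sklearn F1: {f1_score(y_true, y_pred):.2f}")   # 0.89
```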
ROC Curve and AUC
The ROC curve plots the true positive rate (TPR) on the y-axis against the false positive rate (FPR) on the x-axis as the decision threshold varies; the AUC is the area under this curve and summarizes the model's overall ranking performance (1.0 is perfect, 0.5 is random guessing).

```python
# 5. ROC curve and AUC
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression(random_state=42)
model.fit(X_train, y_train)
y_scores = model.predict_proba(X_test)[:, 1]
# Compute FPR and TPR
fpr, tpr, thresholds = roc_curve(y_test, y_scores)
# Compute AUC
roc_auc = auc(fpr, tpr)
print(f"AUC: {roc_auc:.4f}")  # 0.9216
# Plot the ROC curve
plt.figure()
plt.plot(fpr, tpr, color='darkorange', lw=2, label=f'ROC curve (area = {roc_auc:.2f})')
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate (FPR)')
plt.ylabel('True Positive Rate (TPR)')
plt.title('Receiver Operating Characteristic (ROC)')
plt.legend(loc="lower right")
plt.show()
```
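When only the scalar AUC is needed, `roc_auc_score` computes it in a single call without building the curve first. A sketch on the same synthetic data as above:

```python
# AUC in one call with roc_auc_score
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve, auc

X, y = make_classification(n_samples=1000, n_features=20, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression(random_state=42).fit(X_train, y_train)
y_scores = model.predict_proba(X_test)[:, 1]

auc_direct = roc_auc_score(y_test, y_scores)
fpr, tpr, _ = roc_curve(y_test, y_scores)
print(f"roc_auc_score: {auc_direct:.4f}")  # matches auc(fpr, tpr)
```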
Confusion Matrix
A table that cross-tabulates the model's predictions against the true labels, breaking the results into true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN).

```python
# 6. Confusion matrix
from sklearn.metrics import confusion_matrix
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1, 1]
cm = confusion_matrix(y_true, y_pred)
print("Confusion Matrix:\n", cm)
# [[3 0]
#  [1 4]]
```
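For binary labels [0, 1], `confusion_matrix(...).ravel()` unpacks the four cells in the order TN, FP, FN, TP, from which the earlier metrics can be rederived. A sketch:

```python
# Unpack the confusion matrix and rederive accuracy, precision, recall
from sklearn.metrics import confusion_matrix

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 3 0 1 4

print(f"Accuracy:  {(tp + tn) / (tp + tn + fp + fn):.2f}")  # 0.88
print(f"Precision: {tp / (tp + fp):.2f}")                   # 1.00
print(f"Recall:    {tp / (tp + fn):.2f}")                   # 0.80
```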
Clustering Metrics
Silhouette Coefficient
Measures how well each sample fits its assigned cluster; values close to 1 indicate good clustering, values near 0 indicate overlapping clusters, and negative values suggest misassignment. For each sample, a is the mean distance to the other points in its own cluster and b is the mean distance to the points of the nearest neighboring cluster; the silhouette is (b - a) / max(a, b).

```python
## Clustering metrics
# 1. Silhouette Coefficient
from sklearn.metrics import silhouette_score
from sklearn.cluster import KMeans
import numpy as np
# Example data
X = np.array([[1, 2], [1, 4], [1, 0],
              [4, 2], [4, 4], [4, 0]])
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
labels = kmeans.labels_
silhouette = silhouette_score(X, labels)
print(f"Silhouette Coefficient: {silhouette:.2f}")  # 0.29
```
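scikit-learn also exposes the per-sample values via `silhouette_samples`; their mean equals `silhouette_score`. A sketch on the same toy data:

```python
# Per-sample silhouette values; their mean is the overall score
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score

X = np.array([[1, 2], [1, 4], [1, 0],
              [4, 2], [4, 4], [4, 0]])
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit(X).labels_

per_sample = silhouette_samples(X, labels)
print(np.round(per_sample, 2))              # one value per point
print(f"mean: {per_sample.mean():.2f}")     # equals silhouette_score(X, labels)
```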
Adjusted Rand Index (ARI)
Measures the agreement between the clustering result and the ground-truth labels, corrected for chance; values close to 1 indicate high agreement, while values near 0 indicate random labeling.

```python
# 2. Adjusted Rand Index (ARI)
from sklearn.metrics import adjusted_rand_score
y_true = [0, 0, 1, 1]
y_pred = [0, 0, 1, 1]
ari = adjusted_rand_score(y_true, y_pred)
print(f"Adjusted Rand Index: {ari:.2f}")  # 1.00
```
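A useful property of ARI is that it only compares partitions, not the names of the cluster IDs: relabeling the clusters leaves the score unchanged, as this sketch shows:

```python
# ARI ignores how cluster IDs are named
from sklearn.metrics import adjusted_rand_score

y_true = [0, 0, 1, 1]
print(adjusted_rand_score(y_true, [0, 0, 1, 1]))  # 1.0
print(adjusted_rand_score(y_true, [1, 1, 0, 0]))  # 1.0 -- same partition, renamed
```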
Other Metrics
Log Loss
Measures how well a classifier's predicted probabilities match the true labels; lower values are better, and a perfectly confident, correct classifier achieves 0.

```python
## Other metrics
# 1. Log Loss
from sklearn.metrics import log_loss
y_true = [0, 1, 1, 0]
y_pred_proba = [[0.9, 0.1], [0.1, 0.9], [0.8, 0.2], [0.7, 0.3]]
log_loss_value = log_loss(y_true, y_pred_proba)
print(f"Log Loss: {log_loss_value:.4f}")  # 0.5442
```
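The value above can be reproduced directly from the definition, the mean negative log-probability assigned to the true class. A sketch with NumPy:

```python
# Log loss from its definition: mean negative log-probability of the true class
import numpy as np
from sklearn.metrics import log_loss

y_true = [0, 1, 1, 0]
proba = np.array([[0.9, 0.1], [0.1, 0.9], [0.8, 0.2], [0.7, 0.3]])

p_true = proba[np.arange(len(y_true)), y_true]  # probability of the true class
manual = -np.mean(np.log(p_true))

print(f"manual:  {manual:.4f}")                   # 0.5442
print(f"sklearn: {log_loss(y_true, proba):.4f}")  # 0.5442
```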
Jaccard Similarity
The size of the intersection of two sets divided by the size of their union; in classification it measures the overlap between the set of samples predicted positive and the set that is actually positive.

```python
# 2. Jaccard Similarity
from sklearn.metrics import jaccard_score
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1, 1]
jaccard = jaccard_score(y_true, y_pred)
print(f"Jaccard Similarity: {jaccard:.2f}")  # 0.80
```
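For binary labels, the Jaccard score and F1 are monotonically related: J = F1 / (2 - F1), since F1 = 2TP / (2TP + FP + FN) and J = TP / (TP + FP + FN). The sketch verifies this on the example above:

```python
# Jaccard and F1 are related by J = F1 / (2 - F1) for binary labels
from sklearn.metrics import jaccard_score, f1_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 1, 0, 0, 0, 1, 1]

f1 = f1_score(y_true, y_pred)
j_from_f1 = f1 / (2 - f1)

print(f"jaccard_score: {jaccard_score(y_true, y_pred):.2f}")  # 0.80
print(f"F1/(2-F1):     {j_from_f1:.2f}")                      # 0.80
```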