Evaluation Metrics for Regression and Classification

In cross_validate and cross_val_score, the scoring parameter selects the evaluation metric used for classification, clustering, and regression estimators.

3.4.3. The scoring parameter: defining model evaluation rules

For the most common use cases, you can designate a scorer object with the scoring parameter via a string name; the table below shows all possible values. All scorer objects follow the convention that higher return values are better than lower return values. Thus metrics which measure the distance between the model and the data, like metrics.mean_squared_error, are available as 'neg_mean_squared_error', which return the negated value of the metric.

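As a quick illustration of the string-name interface and this sign convention, here is a minimal sketch; the make_regression toy dataset and the Ridge model are assumptions chosen only for illustration. It passes 'neg_mean_squared_error' to cross_val_score and negates the result to recover the usual (positive) MSE.

```python
# Minimal sketch: selecting a scorer by string name and handling the negation
# convention. The dataset and estimator below are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# 'neg_mean_squared_error' returns the negated MSE, so "higher is better" still holds.
scores = cross_val_score(Ridge(), X, y, cv=5, scoring="neg_mean_squared_error")
print(scores)          # all values are <= 0
print(-scores.mean())  # negate to recover the usual (positive) MSE
```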

1. Classification

| String | Function | Formula |
| --- | --- | --- |
| accuracy | metrics.accuracy_score | $\text{accuracy}(y, \hat{y}) = \frac{1}{n}\sum_{i=0}^{n-1} 1(\hat{y}_i = y_i)$ |
| balanced_accuracy | metrics.balanced_accuracy_score | $\text{balanced accuracy} = \frac{1}{2}\left(\frac{TP}{TP+FN} + \frac{TN}{TN+FP}\right)$ |
| top_k_accuracy | metrics.top_k_accuracy_score | $\text{top-}k\ \text{accuracy}(y, \hat{f}) = \frac{1}{n}\sum_{i=0}^{n-1}\sum_{j=1}^{k} 1(\hat{f}_{i,j} = y_i)$ |
| average_precision | metrics.average_precision_score | $AP = \sum_{n} (R_n - R_{n-1}) P_n$ |
| neg_brier_score | metrics.brier_score_loss | $BS = \frac{1}{n}\sum_{i=0}^{n-1}(y_i - p_i)^2$, where $p_i = \text{predict\_proba}(y_i = 1)$ |
| f1 | metrics.f1_score | $F_1 = \frac{2TP}{2TP + FP + FN}$ (average parameter) |
| neg_log_loss | metrics.log_loss | binary: $L_{\log}(y, p) = -\log \Pr(y \mid p) = -\big(y \log p + (1-y)\log(1-p)\big)$; multiclass: $L_{\log}(Y, P) = -\log \Pr(Y \mid P) = -\frac{1}{N}\sum_{i=0}^{N-1}\sum_{k=0}^{K-1} y_{i,k} \log p_{i,k}$ |
| precision | metrics.precision_score | $P = \frac{TP}{TP + FP}$ |
| recall | metrics.recall_score | $R = \frac{TP}{TP + FN}$ |
| jaccard | metrics.jaccard_score | $J(y, \hat{y}) = \frac{\lvert y \cap \hat{y} \rvert}{\lvert y \cup \hat{y} \rvert}$ |
| roc_auc | metrics.roc_auc_score | Area under the Receiver Operating Characteristic curve (ROC AUC), computed from prediction scores |
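
To show how the string names in the table map onto actual evaluation calls, here is a minimal sketch that asks cross_validate for several classification scorers at once; the make_classification dataset and the LogisticRegression model are assumptions used only for illustration.

```python
# Minimal sketch: evaluating several scorers from the table above in one call.
# The dataset and estimator below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# String names map to the metrics.* functions listed above; 'roc_auc' and
# 'neg_log_loss' are computed from predicted scores/probabilities,
# the others from hard class predictions.
results = cross_validate(
    LogisticRegression(max_iter=1000), X, y, cv=5,
    scoring=["accuracy", "f1", "roc_auc", "neg_log_loss"],
)
for name in ["test_accuracy", "test_f1", "test_roc_auc", "test_neg_log_loss"]:
    print(name, results[name].mean())
```

Note that neg_log_loss, like neg_brier_score, is reported negated so that higher values remain better.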