Evaluation Metrics for Regression and Classification

In cross_validate and cross_val_score, the scoring parameter selects which evaluation metric is used; the available metrics cover classification, clustering, and regression algorithms.

3.4.3. The scoring parameter: defining model evaluation rules

For the most common use cases, you can designate a scorer object with the scoring parameter via a string name; the table below shows all possible values. All scorer objects follow the convention that higher return values are better than lower return values. Thus metrics which measure the distance between the model and the data, like metrics.mean_squared_error, are available as 'neg_mean_squared_error' which return the negated value of the metric.

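A minimal sketch of this convention, assuming a toy regression problem built with make_regression and a LinearRegression estimator (both arbitrary choices for illustration, not part of the original text):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Toy regression data; the dataset and estimator are placeholders.
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# 'neg_mean_squared_error' follows the "higher is better" convention,
# so each fold's score is the negated mean squared error.
scores = cross_val_score(LinearRegression(), X, y,
                         scoring="neg_mean_squared_error", cv=5)
print(scores)           # all values are <= 0
print(-scores.mean())   # recover the usual (positive) MSE
```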

1. Classification

| Scoring string | Function | Formula |
| --- | --- | --- |
| accuracy | metrics.accuracy_score | $\text{accuracy}(y, \hat{y}) = \frac{1}{n}\sum_{i=0}^{n-1} 1(\hat{y}_i = y_i)$ |
| balanced_accuracy | metrics.balanced_accuracy_score | $\text{balanced accuracy} = \frac{1}{2}\left(\frac{TP}{TP+FN} + \frac{TN}{TN+FP}\right)$ |
| top_k_accuracy | metrics.top_k_accuracy_score | $\text{top-}k\ \text{accuracy}(y, \hat{f}) = \frac{1}{n}\sum_{i=0}^{n-1}\sum_{j=1}^{k} 1(\hat{f}_{i,j} = y_i)$ |
| average_precision | metrics.average_precision_score | $AP = \sum_n (R_n - R_{n-1}) P_n$ |
| neg_brier_score | metrics.brier_score_loss | $BS = \frac{1}{n}\sum_{i=0}^{n-1}(y_i - p_i)^2$, where $p_i = \text{predict\_proba}(y_i = 1)$ |
| f1 | metrics.f1_score | $F_1 = \frac{2TP}{2TP + FP + FN}$ (see the average parameter) |
| neg_log_loss | metrics.log_loss | binary: $L_{\log}(y, p) = -\log \Pr(y \mid p) = -\bigl(y\log(p) + (1-y)\log(1-p)\bigr)$; multiclass: $L_{\log}(Y, P) = -\log \Pr(Y \mid P) = -\frac{1}{N}\sum_{i=0}^{N-1}\sum_{k=0}^{K-1} y_{i,k}\log p_{i,k}$ |
| precision | metrics.precision_score | $P = \frac{TP}{TP+FP}$ |
| recall | metrics.recall_score | $R = \frac{TP}{TP+FN}$ |
| jaccard | metrics.jaccard_score | $J(y, \hat{y}) = \frac{\lvert y \cap \hat{y} \rvert}{\lvert y \cup \hat{y} \rvert}$ |
| roc_auc | metrics.roc_auc_score | Area under the Receiver Operating Characteristic curve (ROC AUC), computed from prediction scores |
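A minimal sketch of passing several of these scoring strings to cross_validate, assuming a toy dataset from make_classification and a LogisticRegression classifier (both placeholders added for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

# Toy binary classification data; dataset and classifier are placeholders.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# scoring accepts a list of the string names from the table above.
results = cross_validate(
    LogisticRegression(max_iter=1000), X, y, cv=5,
    scoring=["accuracy", "balanced_accuracy", "f1", "roc_auc", "neg_log_loss"],
)

for name, values in results.items():
    if name.startswith("test_"):
        # 'neg_log_loss' is reported as a negative value (higher is better).
        print(name, values.mean())
```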