Machine Learning (6): Evaluating the Model

Evaluating the model

1 test set

  1. split the data into a training set and a test set
  2. the test set is used to evaluate the model

1. linear regression

compute test error

$$J_{test}(\vec{w}, b) = \frac{1}{2m_{test}} \sum_{i=1}^{m_{test}} \left[ \left( f\left(x_{test}^{(i)}\right) - y_{test}^{(i)} \right)^2 \right]$$
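
A minimal numpy sketch of this computation, assuming a linear model $f(x) = \vec{w} \cdot x + b$ whose parameters were already fit on the training set (the function name is just for illustration):

```python
import numpy as np

def linear_test_error(w, b, X_test, y_test):
    """Squared-error test cost J_test for a linear model f(x) = w·x + b."""
    m_test = X_test.shape[0]
    predictions = X_test @ w + b                  # f(x_test^(i)) for every test example
    return np.sum((predictions - y_test) ** 2) / (2 * m_test)
```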

2. logistic regression (classification)

compute test error

$$J_{test}(\vec{w}, b) = -\frac{1}{m_{test}} \sum_{i=1}^{m_{test}} \left[ y_{test}^{(i)} \log\left( f\left(x_{test}^{(i)}\right) \right) + \left( 1 - y_{test}^{(i)} \right) \log\left( 1 - f\left(x_{test}^{(i)}\right) \right) \right]$$
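
A corresponding sketch for the logistic case, assuming $f(x) = \mathrm{sigmoid}(\vec{w} \cdot x + b)$; the small `eps` clip is only there to keep the logarithms finite:

```python
import numpy as np

def logistic_test_error(w, b, X_test, y_test, eps=1e-15):
    """Cross-entropy test cost J_test for f(x) = sigmoid(w·x + b)."""
    m_test = X_test.shape[0]
    f = 1.0 / (1.0 + np.exp(-(X_test @ w + b)))   # predicted probabilities f(x_test^(i))
    f = np.clip(f, eps, 1 - eps)                  # avoid log(0)
    losses = y_test * np.log(f) + (1 - y_test) * np.log(1 - f)
    return -np.sum(losses) / m_test
```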

2 cross-validation set

  1. split the data into a training set, a cross-validation set, and a test set
  2. the cross-validation set is used to automatically choose the better model, and the test set is used only to evaluate the chosen model (a sketch follows this list)
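
One possible way to wire this up, sketched with scikit-learn; the 60/20/20 split, the placeholder data, and the polynomial-degree candidates are illustrative assumptions, not part of the notes:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# placeholder data; in practice X, y are your full data set
X = np.random.rand(200, 1)
y = np.sin(3 * X[:, 0]) + 0.1 * np.random.randn(200)

# 60% training, 20% cross-validation, 20% test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_cv, X_test, y_cv, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# the cross-validation set picks the model (here: the polynomial degree)
best = None
for degree in range(1, 6):
    poly = PolynomialFeatures(degree)
    model = LinearRegression().fit(poly.fit_transform(X_train), y_train)
    j_cv = mean_squared_error(y_cv, model.predict(poly.transform(X_cv)))
    if best is None or j_cv < best[1]:
        best = (degree, j_cv, poly, model)

# the test set is touched only once, to report the chosen model's error
degree, j_cv, poly, model = best
j_test = mean_squared_error(y_test, model.predict(poly.transform(X_test)))
print(f"chose degree {degree}: J_cv={j_cv:.4f}, J_test={j_test:.4f}")
```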

3 bias and variance

  1. high bias: $J_{train}$ and $J_{cv}$ are both high
  2. high variance: $J_{train}$ is low, but $J_{cv}$ is high
  3. if high bias: getting more training data does not help
  4. if high variance: getting more training data helps (a rough diagnostic sketch follows this list)
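
A rough helper along these lines; the `baseline` (e.g. human-level error) and `gap` thresholds are arbitrary placeholders, not values from the notes:

```python
def diagnose(j_train, j_cv, baseline=0.0, gap=0.1):
    """Rough bias/variance check; baseline and gap thresholds are placeholders."""
    high_bias = (j_train - baseline) > gap      # underfits even the training set
    high_variance = (j_cv - j_train) > gap      # fails to generalize to the cv set
    if high_bias and high_variance:
        return "high bias and high variance"
    if high_bias:
        return "high bias: more training data will not help"
    if high_variance:
        return "high variance: more training data should help"
    return "neither looks problematic"
```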

4 regularization

  1. if $\lambda$ is too small, it leads to overfitting (high variance)
  2. if $\lambda$ is too large, it leads to underfitting (high bias); $\lambda$ itself can be chosen with the cross-validation set, as sketched below
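
A sketch of picking $\lambda$ with the cross-validation set, using scikit-learn's `Ridge` (its `alpha` parameter plays the role of $\lambda$); the placeholder split and candidate values are illustrative:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

# placeholder split; in practice reuse X_train/X_cv from section 2
rng = np.random.default_rng(0)
X_train, y_train = rng.random((120, 3)), rng.random(120)
X_cv, y_cv = rng.random((40, 3)), rng.random(40)

lambdas = [0.001, 0.01, 0.1, 1, 10, 100]
cv_errors = [
    mean_squared_error(y_cv, Ridge(alpha=lam).fit(X_train, y_train).predict(X_cv))
    for lam in lambdas                               # Ridge's alpha plays the role of lambda
]
best_lambda = lambdas[int(np.argmin(cv_errors))]     # smallest cross-validation error wins
print(f"chose lambda = {best_lambda}")
```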

5 method

  1. fix high variance:
    • get more training data
    • try a smaller set of features
    • drop some of the higher-order polynomial terms
    • increase $\lambda$
  2. fix high bias:
    • get additional features
    • add polynomial features
    • decrease $\lambda$

6 neural networks and bias/variance

  1. a bigger network means a more complex model, so it helps with high bias
  2. more data helps with high variance
  3. it turns out that a bigger (possibly overfitting) but well-regularized neural network usually does at least as well as a small neural network (see the sketch below)
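
A sketch of that comparison with scikit-learn's `MLPClassifier`, whose `alpha` parameter is the L2 regularization strength; the toy data, layer sizes, and `alpha` values are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# toy data just to make the comparison runnable
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_cv, y_train, y_cv = train_test_split(X, y, test_size=0.25, random_state=0)

# a larger but well-regularized network vs. a small one (sizes/alpha not tuned)
big_regularized = MLPClassifier(hidden_layer_sizes=(128, 128), alpha=1e-2,
                                max_iter=2000, random_state=0).fit(X_train, y_train)
small_network = MLPClassifier(hidden_layer_sizes=(4,), alpha=1e-4,
                              max_iter=2000, random_state=0).fit(X_train, y_train)

print("big + regularized cv accuracy:", big_regularized.score(X_cv, y_cv))
print("small network     cv accuracy:", small_network.score(X_cv, y_cv))
```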