Machine Learning (6): Evaluating Models

1 test set

  1. split the available data into a training set and a test set
  2. use the test set to evaluate the trained model

1. linear regression

Compute the test error:

$$J_{test}(\vec w, b) = \frac{1}{2m_{test}} \sum_{i=1}^{m_{test}} \left[ \left( f(x_{test}^{(i)}) - y_{test}^{(i)} \right)^2 \right]$$
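The test error above can be sketched in NumPy. This is a minimal illustration, assuming a linear model $f(x) = \vec w \cdot x + b$; the function name `j_test_linear` is made up for this sketch.

```python
import numpy as np

def j_test_linear(w, b, X_test, y_test):
    """Squared test error with the 1/(2m_test) convention used above."""
    m = len(y_test)
    preds = X_test @ w + b          # f(x) for every test example
    return np.sum((preds - y_test) ** 2) / (2 * m)
```

A model that fits the test set perfectly gives a test error of exactly zero; any misfit makes the error strictly positive.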

2. logistic regression (classification)

Compute the test error:

$$J_{test}(\vec w, b) = -\frac{1}{m_{test}} \sum_{i=1}^{m_{test}} \left[ y_{test}^{(i)} \log\left(f(x_{test}^{(i)})\right) + \left(1 - y_{test}^{(i)}\right) \log\left(1 - f(x_{test}^{(i)})\right) \right]$$
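The logistic test error can be sketched the same way, assuming a sigmoid model $f(x) = \sigma(\vec w \cdot x + b)$; the function name `j_test_logistic` is invented for this sketch.

```python
import numpy as np

def j_test_logistic(w, b, X_test, y_test):
    """Average cross-entropy (log loss) on the test set, as in the formula above."""
    m = len(y_test)
    z = X_test @ w + b
    f = 1.0 / (1.0 + np.exp(-z))    # sigmoid prediction in (0, 1)
    return -np.sum(y_test * np.log(f) + (1 - y_test) * np.log(1 - f)) / m
```

With $w = 0$, $b = 0$ every prediction is 0.5, so the loss is $\log 2$ regardless of the labels, which is a handy sanity check.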

2 cross-validation set

  1. split the available data into a training set, a cross-validation set, and a test set
  2. use the cross-validation set to automatically choose the better model, and use the test set to estimate the generalization error of the chosen model
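The three-way split above can be sketched as follows. The 60/20/20 proportions and the function name `split_dataset` are assumptions for illustration, not something the notes prescribe.

```python
import numpy as np

def split_dataset(X, y, seed=0):
    """Shuffle, then split 60% train / 20% cross-validation / 20% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_train = 6 * len(y) // 10      # integer arithmetic avoids float rounding
    n_cv = 8 * len(y) // 10
    tr, cv, te = idx[:n_train], idx[n_train:n_cv], idx[n_cv:]
    return (X[tr], y[tr]), (X[cv], y[cv]), (X[te], y[te])
```

Shuffling before splitting matters: if the data is ordered (for example by class), a contiguous split would give unrepresentative subsets.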

3 bias and variance

  1. high bias (underfitting): $J_{train}$ and $J_{cv}$ are both high
  2. high variance (overfitting): $J_{train}$ is low, but $J_{cv}$ is high
  3. if the model has high bias, getting more training data will not help
  4. if the model has high variance, getting more training data is likely to help
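The diagnosis rules above can be turned into a small helper. The baseline (for example human-level error) and the threshold `tol` are hypothetical knobs introduced here; in practice you would judge the gaps by eye on a learning curve.

```python
def diagnose(j_train, j_cv, baseline, tol=0.1):
    """Classify a model's failure mode from its training and CV errors."""
    high_bias = (j_train - baseline) > tol      # training error far above baseline
    high_variance = (j_cv - j_train) > tol      # large train-to-CV gap
    if high_bias and high_variance:
        return "high bias and high variance"
    if high_bias:
        return "high bias"
    if high_variance:
        return "high variance"
    return "ok"
```

For example, a model with $J_{train} = 1.0$ and $J_{cv} = 1.05$ against a baseline of 0.1 is diagnosed as high bias: both errors are high but close together.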

4 regularization

  1. if $\lambda$ is too small, the model tends to overfit (high variance)
  2. if $\lambda$ is too large, the model tends to underfit (high bias)
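The effect of $\lambda$ can be seen directly in regularized linear regression. This is a sketch using the closed-form ridge solution $w = (X^\top X + \lambda I)^{-1} X^\top y$, which is one standard way to apply the regularization discussed above (the course itself uses gradient descent).

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form regularized least squares: larger lam shrinks the weights."""
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
```

With $\lambda = 0$ this recovers ordinary least squares; increasing $\lambda$ shrinks the weight vector toward zero, trading variance for bias exactly as the two bullets above describe.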

5 methods

  1. to fix high variance:
    • get more training data
    • try a smaller set of features
    • remove some of the higher-order polynomial terms
    • increase $\lambda$
  2. to fix high bias:
    • add more features
    • add polynomial features
    • decrease $\lambda$
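One of the high-bias remedies above, adding polynomial features, can be sketched in a few lines; the helper name `add_polynomial_features` is made up for this illustration.

```python
import numpy as np

def add_polynomial_features(X, degree):
    """Stack x, x^2, ..., x^degree column-wise to make the model more flexible."""
    return np.hstack([X ** d for d in range(1, degree + 1)])
```

The expanded matrix lets a linear model fit curved relationships; the flip side is that too high a degree pushes the model toward high variance, which is why the choice of degree is itself validated on the cross-validation set.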

6 neural networks and bias/variance

  1. a bigger network is a more flexible model, so it helps reduce high bias
  2. more data helps reduce high variance
  3. it turns out that a bigger (and therefore more prone to overfitting) but well-regularized neural network usually does at least as well as a smaller one