【Machine Learning】Generalization Theory

These notes cover the generalization-theory portion of the lecture notes for Tsinghua University's *Machine Learning* course; they are essentially a cheat sheet I put together in the last day or two before the exam. They cover a lot of ground without much detail, and are intended mainly as material for review and memorization.

No free lunch

  • For any algorithm $A'$, there exists a distribution $D$ over $C\times\{0,1\}$ and a labeling function $f$ that is perfect on $D$, i.e. $L_D(f)=0$, such that

    $$E_{S\sim D^m}[L_D(A'(S))]\ge \frac{1}{4}$$

  • Then $\Pr[L_D(A'(S))\ge \frac{1}{8}]\ge \frac{1}{7}$, by reverse Markov: for $X=L_D(A'(S))\in[0,1]$ with $E[X]\ge\frac{1}{4}$, we have $\Pr[X\ge\frac{1}{8}]\ge\frac{E[X]-1/8}{1-1/8}\ge\frac{1/4-1/8}{7/8}=\frac{1}{7}$.

  • Proof: Take $|C|=2m$, let $f_1,\dots,f_T$ be all $T=2^{2m}$ functions $C\to\{0,1\}$, let $D_j$ be uniform over $C$ with labels given by $f_j$, and let $S_1,\dots,S_k$ be the $k=(2m)^m$ possible training sequences (labeled by $f_j$ when the distribution is $D_j$). For a fixed $S_i$, let $v_1,\dots,v_p$ ($p\ge m$) be the points of $C$ not appearing in $S_i$. Then
    $$\begin{aligned}
    \max_{j}E_{S\sim D_j^m}[L_{D_j}(A'(S))]&=\max_{j}\frac{1}{k}\sum_{i=1}^k L_{D_j}(A'(S_i))\\
    &\ge \frac{1}{T}\sum_{j=1}^T\frac{1}{k}\sum_{i=1}^k L_{D_j}(A'(S_i))\\
    &=\frac{1}{k}\sum_{i=1}^k\frac{1}{T}\sum_{j=1}^T L_{D_j}(A'(S_i))\\
    &\ge \min_{i}\frac{1}{T}\sum_{j=1}^T L_{D_j}(A'(S_i))\\
    &\ge \min_{i}\frac{1}{T}\sum_{j=1}^T \frac{1}{2m}\sum_{r=1}^p 1_{A'(S_i)\text{ wrong at }v_r}\\
    &\ge \min_{i}\frac{1}{T}\sum_{j=1}^T \frac{1}{2p}\sum_{r=1}^p 1_{A'(S_i)\text{ wrong at }v_r}\\
    &\ge \frac{1}{2}\min_{i}\min_{r}\frac{1}{T}\sum_{j=1}^T 1_{A'(S_i)\text{ wrong at }v_r}\\
    &=\frac{1}{4}
    \end{aligned}$$

    • The last equality holds because the $T$ functions split into $T/2$ pairs $(f_j,f_{j'})$ that differ only at $v_r$. Since $v_r\notin S_i$, both functions induce the same labeled sample, so $A'$ outputs the same hypothesis, which errs at $v_r$ under exactly one of the two; hence $\frac{1}{T}\sum_{j=1}^T 1_{A'(S_i)\text{ wrong at }v_r}=\frac{1}{2}$.
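
A minimal sketch of this phenomenon (my own illustration, not from the lecture notes; the domain size, the uniform $D$, and the memorizing learner are assumptions): enumerate every labeling $f$ of a domain of $2m$ points and average the expected true error of a learner that memorizes its sample. The average stays above $\frac{1}{4}$, matching the bound.

```python
# A toy check of no-free-lunch (assumptions: uniform D over a domain of 2m
# points, a learner that memorizes its sample and defaults to 0 elsewhere).
import itertools
import random

m = 3
C = list(range(2 * m))                      # domain, |C| = 2m
trials = 200                                # Monte Carlo draws of S ~ D^m

def memorizer(train):
    # A': repeat memorized labels, predict 0 on unseen points.
    return lambda x: train.get(x, 0)

labelings = list(itertools.product([0, 1], repeat=len(C)))
total = 0.0
for f in labelings:                         # all 2^(2m) target functions
    err = 0.0
    for _ in range(trials):
        S = [random.choice(C) for _ in range(m)]
        h = memorizer({x: f[x] for x in S})
        err += sum(h(x) != f[x] for x in C) / len(C)   # L_D(h), D uniform
    total += err / trials
print("average expected error over all f:", total / len(labelings))  # >= 1/4
```

The default prediction 0 is immaterial: averaged over all labelings, any fixed rule is wrong on half of them at each unseen point.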

ERM

  • Under the realizability assumption, the hypothesis returned by ERM over a finite class is good with high probability once the sample is large enough (see the sketch after this list)

    • Consider the probability of a misleading sample: $L_S(h_S)=L_S(h^*)=0$ but $L_{D,f}(h_S)>\epsilon$. Such an $S$ lies in the union (apply the union bound) of the events $\{L_S(h)=0\}$ over the bad hypotheses $h\in H_B$, i.e. those with $L_{D,f}(h)>\epsilon$; each individual sample is consistent with a bad $h$ with probability $\le 1-\epsilon$, so the total probability is $\le |H_B|(1-\epsilon)^m\le |H|e^{-\epsilon m}$
  • PAC learnable: for sample size $m\ge m(\epsilon,\delta)$, w.p. $\ge 1-\delta$ we can find an $h$ such that $L_{D,f}(h)\le \epsilon$.

    • Agnostic PAC learnable: $L_D(h)\le L_D(h^*)+\epsilon$, where $h^*=\arg\min_{h'\in H}L_D(h')$
  • VC dimension: the size of the largest set of points that $H$ shatters; by the fundamental theorem of PAC learning, $H$ is (agnostic) PAC learnable iff its VC dimension is finite
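
A minimal sketch of the realizable finite-class bound (my own illustration; the threshold class on a grid over $[0,1]$ and the target $t^*=0.42$ are assumptions): setting $|H|(1-\epsilon)^m\le |H|e^{-\epsilon m}\le\delta$ gives $m(\epsilon,\delta)=\lceil \ln(|H|/\delta)/\epsilon\rceil$, and a simulation confirms ERM rarely exceeds error $\epsilon$ at that sample size.

```python
# Sample complexity for a finite class under realizability (assumptions:
# H = thresholds on a grid over [0,1], uniform D, target t* = 0.42).
import math
import random

def sample_complexity(H_size, eps, delta):
    # From |H| e^{-eps m} <= delta.
    return math.ceil(math.log(H_size / delta) / eps)

thresholds = [i / 100 for i in range(101)]  # h_t(x) = 1[x >= t]
t_star = 0.42                               # f = h_{t*} is in H: realizable
eps, delta = 0.1, 0.05
m = sample_complexity(len(thresholds), eps, delta)

trials, fails = 2000, 0
for _ in range(trials):
    S = [(x, int(x >= t_star)) for x in (random.random() for _ in range(m))]
    # ERM: first hypothesis with zero empirical error.
    t_erm = next(t for t in thresholds if all(int(x >= t) == y for x, y in S))
    # Under uniform D, L_D(h_t) = |t - t*| (mass between the two cutoffs).
    fails += abs(t_erm - t_star) > eps
print(f"m = {m}, empirical Pr[L_D > eps] = {fails / trials:.4f}, delta = {delta}")
```

The observed failure rate typically comes out far below $\delta$; the union bound is conservative.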

Rademacher

  • Generalization: w.p. $\ge 1-\delta$, for every $h\in H$,
    $$L_D(h)-L_S(h)\le 2E_{S'\sim D^m}R(\ell\circ H\circ S')+c\sqrt{\frac{2\ln\frac{2}{\delta}}{m}}$$

  • Massart Lemma: for a finite set $A=\{a_1,\dots,a_N\}\subset\mathbb{R}^m$ with mean $\bar{a}$,
    $$R(A)\le \max_{a\in A}\lVert a-\bar{a}\rVert_2\,\frac{\sqrt{2\log N}}{m}$$

  • Contraction Lemma: if $\phi$ is $\rho$-Lipschitz, then
    $$R(\phi\circ A)\le \rho R(A)$$
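
A minimal sketch tying these together (my own illustration; the set $A$ below is a synthetic batch of $N$ loss vectors in $[0,1]^m$, standing in for $\ell\circ H\circ S'$): estimate $R(A)=E_\sigma[\max_{a\in A}\frac{1}{m}\langle\sigma,a\rangle]$ by Monte Carlo and compare it against Massart's bound.

```python
# Monte Carlo estimate of empirical Rademacher complexity vs. Massart's bound
# (assumption: A is a synthetic set of N loss vectors in [0,1]^m).
import math
import random

m, N = 50, 20
A = [[random.random() for _ in range(m)] for _ in range(N)]

def rademacher_mc(A, m, trials=5000):
    # R(A) = E_sigma[ max_{a in A} (1/m) <sigma, a> ], sigma_i in {-1, +1}.
    total = 0.0
    for _ in range(trials):
        sigma = [random.choice((-1, 1)) for _ in range(m)]
        total += max(sum(s * ai for s, ai in zip(sigma, a)) for a in A) / m
    return total / trials

abar = [sum(a[i] for a in A) / N for i in range(m)]
radius = max(math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, abar))) for a in A)
massart = radius * math.sqrt(2 * math.log(N)) / m   # Massart's bound

print(f"Monte Carlo R(A) = {rademacher_mc(A, m):.4f}")
print(f"Massart bound    = {massart:.4f}")
```

Centering by $\bar{a}$ costs nothing in expectation: $\langle\sigma,\bar{a}\rangle$ is common to all $a\in A$ and has zero mean, which is why the bound may use the centered radius.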
