Introduction to Deep Learning (9) - Reinforcement Learning

Reinforcement Learning

An agent performs actions in an environment and receives rewards.

Goal: learn how to take actions that maximize reward.

Stochasticity: Rewards and state transitions may be random

Credit assignment: reward $r_t$ may not directly depend on action $a_t$

Nondifferentiable: Can't backprop through the world

Nonstationary: What the agent experiences depends on how it acts

Markov Decision Process (MDP)

Mathematical formalization of the RL problem: a tuple $(S, A, R, P, \gamma)$

$S$: set of possible states

$A$: set of possible actions

$R$: distribution of reward given a (state, action) pair

$P$: transition probability: distribution over the next state given a (state, action) pair

$\gamma$: discount factor (trade-off between future and present rewards)

Markov Property: The current state completely characterizes the state of the world. Rewards and next states depend only on the current state, not on the history.

The agent executes a policy $\pi$ giving a distribution over actions conditioned on states.

Goal: find the best policy that maximizes the cumulative discounted reward $\sum_t \gamma^t r_t$

Since rewards and transitions are random, we maximize the expected sum of rewards.
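To make the tuple concrete, here is a minimal sketch of a toy MDP in NumPy; the two states, transition probabilities, and reward values are all invented for illustration:

```python
import numpy as np

# A hypothetical two-state, two-action MDP (all numbers invented).
S = [0, 1]                     # states
A = [0, 1]                     # actions: 0 = stay, 1 = move
gamma = 0.9                    # discount factor

# P[s, a] is a distribution over next states; R[s, a] is the reward.
P = np.array([[[1.0, 0.0], [0.1, 0.9]],    # from state 0
              [[0.0, 1.0], [0.9, 0.1]]])   # from state 1
R = np.array([[0.0, 0.0],                  # state 0 pays nothing
              [1.0, 0.0]])                 # staying in state 1 pays 1

rng = np.random.default_rng(0)

def step(s, a):
    """Sample one transition (r, s') from the MDP."""
    s_next = rng.choice(len(S), p=P[s, a])
    return R[s, a], s_next

r, s_next = step(0, 1)         # take action "move" in state 0
```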

Value function $V^{\pi}(s)$: expected cumulative reward from following policy $\pi$ starting from state $s$

Q function $Q^{\pi}(s, a)$: expected cumulative reward from taking action $a$ in state $s$ and then following policy $\pi$
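Written out (standard definitions, restated here for reference):

$$V^{\pi}(s) = \mathbb{E}\Big[\sum_{t \ge 0} \gamma^t r_t \,\Big|\, s_0 = s,\ \pi\Big], \qquad Q^{\pi}(s, a) = \mathbb{E}\Big[\sum_{t \ge 0} \gamma^t r_t \,\Big|\, s_0 = s,\ a_0 = a,\ \pi\Big]$$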

Bellman Equation

After taking action $a$ in state $s$, we receive reward $r$ and move to a new state $s'$. After that, the maximum possible reward we can get is $\max_{a'} Q^*(s', a')$. This gives the Bellman equation:

$$Q^*(s, a) = \mathbb{E}_{r,\, s'}\Big[\, r + \gamma \max_{a'} Q^*(s', a') \,\Big]$$

Idea: if we find a function that satisfies the Bellman equation, then it must be the optimal $Q^*$.

Start with a random Q, and repeatedly apply the Bellman equation as an update rule.
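A minimal sketch of this tabular update loop in NumPy, on the same invented two-state MDP as above:

```python
import numpy as np

# Same invented two-state MDP as above.
P = np.array([[[1.0, 0.0], [0.1, 0.9]],
              [[0.0, 1.0], [0.9, 0.1]]])   # P[s, a, s']
R = np.array([[0.0, 0.0],
              [1.0, 0.0]])                 # R[s, a]
gamma = 0.9

Q = np.zeros((2, 2))                       # arbitrary initialization
for _ in range(200):
    # Bellman backup for every (s, a) at once:
    # Q(s, a) <- E_{s'}[ R(s, a) + gamma * max_{a'} Q(s', a') ]
    Q = R + gamma * (P @ Q.max(axis=1))

print(Q)                                   # approximately Q*
```

Each sweep applies the Bellman backup to every (state, action) pair at once; Q converges to $Q^*$ because the backup is a $\gamma$-contraction.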

But if the state space is large or infinite, we cannot iterate over all states.

Approximate $Q(s, a)$ with a neural network, and use the Bellman equation to define the loss function.

-> Deep Q-Learning
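A minimal sketch of that loss in PyTorch; the network architecture, dimensions, and variable names are assumptions for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, n_actions = 4, 2                # assumed dimensions

# Q-network: maps a batch of states to one Q-value per action.
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))

def dqn_loss(s, a, r, s_next, done, gamma=0.99):
    """Squared Bellman error on a batch of transitions (s, a, r, s', done)."""
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a)
    with torch.no_grad():                                  # target treated as a constant
        q_next = q_net(s_next).max(dim=1).values           # max_a' Q(s', a')
        target = r + gamma * (1.0 - done) * q_next
    return F.mse_loss(q_sa, target)
```

In practice, DQN also uses a replay buffer and a separate, periodically updated target network to compute the target, which stabilizes training.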

Policy Gradients

Train a network $\pi_{\theta}(a \mid s)$ that takes the state as input and gives a distribution over which action to take

Objective function: expected future rewards when following policy $\pi_{\theta}$

Use gradient ascent -> since we can't backprop through the environment, play some tricks (the log-derivative / REINFORCE trick) to estimate the gradient, as sketched below.
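A minimal sketch of the standard REINFORCE estimator in PyTorch, with assumed dimensions and variable names:

```python
import torch
import torch.nn as nn

state_dim, n_actions = 4, 2                # assumed dimensions

# Policy network: maps a state to logits over actions.
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                       nn.Linear(64, n_actions))

def reinforce_loss(states, actions, returns):
    """Surrogate loss whose gradient is the policy gradient:
    grad J(theta) = E[ grad log pi_theta(a|s) * return ]."""
    dist = torch.distributions.Categorical(logits=policy(states))
    log_probs = dist.log_prob(actions)                     # log pi_theta(a|s)
    return -(log_probs * returns).mean()                   # minimizing this ascends J
```

Here `returns` are the sampled discounted rewards-to-go; subtracting a learned baseline such as $V(s)$ reduces variance, which is the idea behind Actor-Critic.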

Other approaches:

Actor-Critic

Model-Based

Imitation Learning

Inverse Reinforcement Learning

Adversarial Learning

...

Stochastic computation graphs
