Introduction to Deep Learning (9) - Reinforcement Learning

Reinforcement Learning

An agent performs actions in an environment and receives rewards.

Goal: learn how to take actions that maximize reward.

Stochasticity: Rewards and state transitions may be random

Credit assignment: reward $r_t$ may not directly depend on action $a_t$

Nondifferentiable: Can't backprop through the world

Nonstationary: What the agent experiences depends on how it acts

Markov Decision Process (MDP)

Mathematical formalization of the RL problem: a tuple $(S, A, R, P, \gamma)$

$S$: set of possible states

$A$: set of possible actions

$R$: distribution of reward given a (state, action) pair

$P$: transition probability: distribution over the next state given a (state, action) pair

$\gamma$: discount factor (trade-off between future and present rewards)

Markov Property: the current state completely characterizes the state of the world. Rewards and next states depend only on the current state, not on the history.

The agent executes a policy $\pi$ giving a distribution over actions conditioned on states.

Goal: find the optimal policy $\pi^*$ that maximizes the cumulative discounted reward $\sum_t \gamma^t r_t$

Since rewards and state transitions are random, we instead maximize the expected sum of rewards: $\pi^* = \arg\max_{\pi} \mathbb{E}\big[\sum_t \gamma^t r_t \mid \pi\big]$.

Value function $V^{\pi}(s)$: expected cumulative reward from following policy $\pi$ starting from state $s$

Q function $Q^{\pi}(s,a)$: expected cumulative reward from taking action $a$ in state $s$ and then following policy $\pi$
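
Written out, these are the standard definitions, with the expectation taken over the randomness of both the policy and the environment:

$V^{\pi}(s) = \mathbb{E}\big[\sum_{t \ge 0} \gamma^{t} r_{t} \mid s_{0}=s,\ \pi\big]$

$Q^{\pi}(s,a) = \mathbb{E}\big[\sum_{t \ge 0} \gamma^{t} r_{t} \mid s_{0}=s,\ a_{0}=a,\ \pi\big]$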

Bellman Equation

After taking action $a$ in state $s$, we get reward $r$ and move to a new state $s'$. After that, the max possible future reward is $\max_{a'} Q^*(s',a')$. This gives the Bellman equation: $Q^*(s,a) = \mathbb{E}_{r,s'}\big[r + \gamma \max_{a'} Q^*(s',a')\big]$

Idea: if we find a function that satisfies the Bellman equation, it must be the optimal $Q^*$.

Value iteration: start with a random $Q$, and use the Bellman equation as an update rule, $Q_{i+1}(s,a) = \mathbb{E}\big[r + \gamma \max_{a'} Q_i(s',a')\big]$; $Q_i$ converges to $Q^*$, as in the toy sketch below.
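
A minimal sketch of this tabular update on a toy MDP, assuming the transition table `P` and reward table `R` are known; all sizes and values here are made up for illustration:

```python
import numpy as np

# Toy MDP with made-up sizes; P and R are random tables just for illustration.
n_states, n_actions, gamma = 5, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s'] = Pr(s' | s, a)
R = rng.normal(size=(n_states, n_actions))                        # expected reward for (s, a)

# Fixed-point iteration on the Bellman equation:
#   Q(s, a) <- R(s, a) + gamma * sum_s' P(s' | s, a) * max_a' Q(s', a')
Q = np.zeros((n_states, n_actions))
for _ in range(500):
    Q = R + gamma * P @ Q.max(axis=1)

print("greedy policy:", Q.argmax(axis=1))  # best action in each state
```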

But if the state space is large or infinite (e.g. every possible screen image in a video game), we cannot iterate over all states.

Solution: approximate $Q(s,a)$ with a neural network and use the Bellman equation as the loss function, training the network so that $Q(s,a) \approx r + \gamma \max_{a'} Q(s',a')$; see the sketch below.

-> Deep Q-Learning
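
A minimal sketch of that loss in PyTorch, assuming a small MLP Q-network and a batch of transitions (s, a, r, s', done); the network shape and names are illustrative, not a reference DQN implementation:

```python
import torch
import torch.nn as nn

# Small MLP Q-network; state_dim and layer sizes are arbitrary for the sketch.
state_dim, n_actions, gamma = 4, 2, 0.99
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))

def dqn_loss(s, a, r, s_next, done):
    """Bellman-error loss for a batch of transitions (s, a, r, s', done)."""
    # Predicted Q(s, a) for the actions that were actually taken.
    q_pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    # Bellman target r + gamma * max_a' Q(s', a'); no gradient flows through it.
    with torch.no_grad():
        q_target = r + gamma * (1.0 - done) * q_net(s_next).max(dim=1).values
    return nn.functional.mse_loss(q_pred, q_target)
```

Real DQN implementations additionally use an experience replay buffer and a separate, slowly updated target network for the $Q(s',a')$ term to stabilize training.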

Policy Gradients

Train a network $\pi_{\theta}(a \mid s)$ that takes the state as input and gives a distribution over which action to take.

Objective function: expected future reward when following policy $\pi_{\theta}$, i.e. $J(\theta) = \mathbb{E}\big[\sum_t \gamma^t r_t \mid \pi_{\theta}\big]$

Use gradient ascent on $J(\theta)$. The environment itself is not differentiable, so we need tricks to get a usable gradient, e.g. the score-function (log-derivative) trick, which rewrites the gradient as $\nabla_{\theta} J(\theta) = \mathbb{E}\big[\big(\sum_t \nabla_{\theta}\log \pi_{\theta}(a_t \mid s_t)\big)\big(\sum_t \gamma^t r_t\big)\big]$, an expectation we can estimate by sampling trajectories; see the sketch below.
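
This estimator is what REINFORCE implements. A minimal single-episode sketch in PyTorch, with illustrative sizes and names:

```python
import torch
import torch.nn as nn

# Policy network pi_theta(a | s); sizes are arbitrary for the sketch.
state_dim, n_actions, gamma = 4, 2, 0.99
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_update(states, actions, rewards):
    """One policy-gradient step from a single episode.

    states: float tensor (T, state_dim); actions: long tensor (T,);
    rewards: list of T floats.
    """
    # Discounted return G_t at every timestep, computed backwards.
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.append(G)
    returns = torch.tensor(list(reversed(returns)), dtype=torch.float32)

    # Score-function estimator: grad J = E[ grad log pi(a_t | s_t) * G_t ]
    log_probs = torch.distributions.Categorical(logits=policy(states)).log_prob(actions)
    loss = -(log_probs * returns).mean()  # minimizing -J == gradient ascent on J
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```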

Other approaches:

Actor-Critic

Model-Based

Imitation Learning

Inverse Reinforcement Learning

Adversarial Learning

...

Stochastic computation graphs
