The Evolution of GPT: From GPT to ChatGPT

Transformer

Paper

Attention Is All You Need: https://arxiv.org/abs/1706.03762

The Illustrated Transformer

The Illustrated Transformer: https://jalammar.github.io/illustrated-transformer/

The Annotated Transformer

The Annotated Transformer (harvard.edu)

GPT Series

GPT-1: Improving Language Understanding by Generative Pre-Training

Pre-training + fine-tuning

Abstract: We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task.

1. Unsupervised pre-training

2. Supervised fine-tuning

(left) Transformer architecture and training objectives used in this work. (right) Input transformations for fine-tuning on different tasks. We convert all structured inputs into token sequences to be processed by our pre-trained model, followed by a linear+softmax layer.
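The two-stage recipe can be sketched numerically. This is a toy model, not the paper's implementation: the uniform next-token distribution and the function names are illustrative, and `lam` plays the role of the auxiliary-loss weight λ that GPT-1 uses to mix the language-modeling objective into fine-tuning.

```python
import math

def lm_log_likelihood(tokens, cond_prob):
    """Unsupervised objective L1: sum of log P(u_i | preceding context),
    the standard left-to-right language-modeling likelihood."""
    return sum(math.log(cond_prob(tokens[:i], tokens[i]))
               for i in range(1, len(tokens)))

def combined_objective(l2, l1, lam=0.5):
    """Fine-tuning objective L3 = L2 + lam * L1: the supervised task
    likelihood L2 plus the LM likelihood L1 as an auxiliary objective."""
    return l2 + lam * l1

# Toy model: uniform next-token distribution over a 4-token vocabulary.
uniform = lambda context, token: 0.25
seq = ["the", "cat", "sat", "down"]
l1 = lm_log_likelihood(seq, uniform)  # 3 predicted tokens, each log(0.25)
```

Keeping the LM term during fine-tuning is what lets the same pre-trained weights serve every downstream task with only a small added linear+softmax head.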

GPT-2: Language Models are Unsupervised Multitask Learners

We demonstrate language models can perform down-stream tasks in a zero-shot setting -- without any parameter or architecture modification.

Main change: a new training set, WebText

GPT-3: Language Models are Few-Shot Learners

Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting.

Zero-shot, one-shot and few-shot, contrasted with traditional fine-tuning. The panels above show four methods for performing a task with a language model -- fine-tuning is the traditional method, whereas zero-, one-, and few-shot, which we study in this work, require the model to perform the task with only forward passes at test time. We typically present the model with a few dozen examples in the few-shot setting.
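The contrast can be illustrated with prompt construction. A minimal sketch (the task wording and the `=>` separator are illustrative choices, not GPT-3's exact format): the only difference between the three settings is how many solved examples appear in the context, and no gradient update ever happens.

```python
def build_prompt(task_description, examples, query):
    """Build an in-context prompt: a task description, k solved examples,
    then the query. k = 0 is zero-shot, k = 1 is one-shot, k > 1 is
    few-shot. The model only conditions on this text at inference time."""
    lines = [task_description]
    for source, target in examples:
        lines.append(f"{source} => {target}")
    lines.append(f"{query} =>")
    return "\n".join(lines)

demos = [("sea otter", "loutre de mer"), ("cheese", "fromage")]
few_shot = build_prompt("Translate English to French:", demos, "mint")
zero_shot = build_prompt("Translate English to French:", [], "mint")
```

Fine-tuning, by contrast, would update the model's parameters on thousands of such pairs before it could be used for the task.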

Training set:

The evolution of transfer learning in NLP

  1. word2vec (embedding): word vectors were learned and used as inputs to task-specific architectures
  2. the contextual representations of recurrent networks were transferred (still applied to task-specific architectures)
  3. pre-trained recurrent or transformer language models have been directly fine-tuned, entirely removing the need for task-specific architectures

Limitation of the pre-training + fine-tuning approach: to perform well on a given task, the model still has to be fine-tuned on a task-specific dataset of thousands to hundreds of thousands of examples.

ChatGPT

Reinforcement Learning from Human Feedback (RLHF)

Reference: The "unsung hero" behind ChatGPT: a deep dive into RLHF (huggingface.co)

The idea behind RLHF: use reinforcement learning to optimize a language model based on human feedback. RLHF allows a language model trained on a general text corpus to be aligned with complex human values.

RLHF is a complex technique involving multiple models and several training stages; here we break it down into three steps:

  1. Pre-train a language model (LM);

  2. Gather question-answer data and train a reward model (RM);

    Training the RM is where RLHF departs from earlier paradigms. This model takes in a sequence of text and returns a scalar reward that numerically represents human preference. It can be modeled end-to-end with an LM, or as a modular system (e.g., ranking the outputs and then converting the rankings into rewards). Producing a scalar reward is crucial for plugging seamlessly into existing RL algorithms later on.

  3. Fine-tune the LM with reinforcement learning (RL).
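Steps 2 and 3 above can be sketched with toy scalar functions (an illustrative sketch, not any library's implementation; the function names, `beta`, and the single-sample KL estimate are assumptions): the RM is commonly trained with a pairwise ranking loss on human preference pairs, and the RL stage shapes the reward with a KL penalty against the frozen pre-trained model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def rm_pairwise_loss(r_chosen, r_rejected):
    """Step 2: train the reward model on human preference pairs.
    The ranking loss -log sigmoid(r_chosen - r_rejected) is minimized
    when the human-preferred response gets the higher scalar reward."""
    return -math.log(sigmoid(r_chosen - r_rejected))

def rlhf_reward(rm_score, logprob_policy, logprob_ref, beta=0.02):
    """Step 3: the reward optimized by RL (e.g. PPO) is the RM score
    minus a KL penalty that keeps the fine-tuned policy close to the
    frozen pre-trained LM. The log-probability difference here is a
    simple single-sample estimate of that KL term."""
    kl_estimate = logprob_policy - logprob_ref
    return rm_score - beta * kl_estimate

# The loss is small when the chosen response already outscores the rejected one:
assert rm_pairwise_loss(2.0, 0.0) < rm_pairwise_loss(0.0, 2.0)
# If the policy has not drifted from the reference, the reward is just the RM score:
assert rlhf_reward(1.0, -2.0, -2.0) == 1.0
```

The KL term is what prevents the policy from collapsing into text that games the reward model while drifting far from fluent language.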
