GPT Evolution: From GPT to ChatGPT

Transformer

Paper

Attention Is All You Need (the paper that introduced the Transformer encoder-decoder architecture): https://arxiv.org/abs/1706.03762

The Illustrated Transformer

The Illustrated Transformer: https://jalammar.github.io/illustrated-transformer/

The Annotated Transformer

The Annotated Transformer (harvard.edu)

GPT Series

GPT-1: Improving Language Understanding by Generative Pre-Training

Pre-training + fine-tuning

Abstract: We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task.

1. Unsupervised pre-training

2. Supervised fine-tuning

Figure caption from the GPT-1 paper: (left) Transformer architecture and training objectives used in this work. (right) Input transformations for fine-tuning on different tasks. We convert all structured inputs into token sequences to be processed by our pre-trained model, followed by a linear+softmax layer.
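The two stages above correspond to two training objectives. Below is a minimal PyTorch-style sketch (not the paper's code): it assumes a hypothetical decoder-only `model` that returns next-token logits plus the final hidden state, and a small classification head `clf_head`. The fine-tuning loss follows the paper's auxiliary formulation L3 = L2 + lambda * L1.

```python
import torch.nn.functional as F

def pretraining_loss(model, tokens):
    """Stage 1 (L1): next-token prediction on unlabeled text."""
    logits, _ = model(tokens[:, :-1])            # predict token t from tokens < t
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),     # (batch * seq, vocab)
        tokens[:, 1:].reshape(-1),               # shifted targets
    )

def finetuning_loss(model, clf_head, tokens, labels, lam=0.5):
    """Stage 2 (L3 = L2 + lam * L1): task loss plus the auxiliary LM loss."""
    logits, last_hidden = model(tokens[:, :-1])
    lm_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        tokens[:, 1:].reshape(-1),
    )
    task_logits = clf_head(last_hidden[:, -1])   # linear head on the final state
    task_loss = F.cross_entropy(task_logits, labels)
    return task_loss + lam * lm_loss
```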

GPT-2: Language Models are Unsupervised Multitask Learners

We demonstrate language models can perform down-stream tasks in a zero-shot setting -- without any parameter or architecture modification.

Main change: the training set, WebText

GPT-3: Language Models are Few-Shot Learners

Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting.

Figure caption from the GPT-3 paper: Zero-shot, one-shot and few-shot, contrasted with traditional fine-tuning. The panels show four methods for performing a task with a language model -- fine-tuning is the traditional method, whereas zero-, one-, and few-shot, which we study in this work, require the model to perform the task with only forward passes at test time. We typically present the model with a few dozen examples in the few-shot setting.
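The practical difference is in how the input is assembled: the k in-context examples are simply concatenated into the prompt, and the model only runs forward passes with frozen weights. A minimal sketch, using the translation example from the paper's figure; `complete` stands in for any text-completion call and is an assumption here, not a specific API.

```python
def build_prompt(task_description, examples, query):
    """Concatenate a task description, k in-context examples, and the query."""
    lines = [task_description]
    for src, tgt in examples:                    # k = 0 (zero-shot), 1, or a few dozen
        lines.append(f"{src} => {tgt}")
    lines.append(f"{query} =>")
    return "\n".join(lines)

zero_shot = build_prompt("Translate English to French.", [], "cheese")
few_shot = build_prompt(
    "Translate English to French.",
    [("sea otter", "loutre de mer"), ("peppermint", "menthe poivrée")],
    "cheese",
)
# answer = complete(few_shot)   # forward pass only; no parameters are updated
```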

Training data:

The evolution of transfer learning in NLP

  1. word2vec (embedding): word vectors were learned and used as inputs to task-specific architectures
  2. the contextual representations of recurrent networks were transferred (still applied to task-specific architectures)
  3. pre-trained recurrent or transformer language models have been directly fine-tuned, entirely removing the need for task-specific architectures

Limitation of the pre-training + fine-tuning approach: to do well on a given task, the model still has to be fine-tuned on a task-specific dataset with thousands to hundreds of thousands of examples.

ChatGPT

Reinforcement Learning from Human Feedback (RLHF)

Reference: Illustrating Reinforcement Learning from Human Feedback (RLHF) (huggingface.co)

The idea behind RLHF: use reinforcement learning to optimize a language model based on human feedback, so that a model trained on a general text corpus can be aligned with complex human values.

RLHF is a complex procedure involving multiple models and several training stages; here we break it down into three steps:

  1. Pre-train a language model (LM);

  2. Gather question-answering data and train a reward model (Reward Model, RM);

    Training the RM is where RLHF begins to depart from the older paradigm. The RM takes a sequence of text and returns a scalar reward that numerically represents human preference. It can be built end-to-end as another LM, or as a modular system (for example, ranking candidate outputs and converting the rankings into rewards). A scalar reward is essential for plugging seamlessly into existing RL algorithms later; see the sketch after this list.

  3. Fine-tune the LM with reinforcement learning (RL).
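As a rough illustration of step 2, below is a minimal PyTorch sketch of the pairwise preference loss commonly used to train the reward model. `rm` is a hypothetical model that maps a batch of token-id sequences to one scalar per sequence; actual systems may use a different formulation.

```python
import torch.nn.functional as F

def reward_model_loss(rm, chosen_ids, rejected_ids):
    """Pairwise loss: the human-preferred response should score higher."""
    r_chosen = rm(chosen_ids)        # (batch,) scalar reward for preferred responses
    r_rejected = rm(rejected_ids)    # (batch,) scalar reward for rejected responses
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

In step 3, the RM's scalar output becomes the reward signal for RL fine-tuning (typically PPO), usually combined with a KL penalty that keeps the fine-tuned policy close to the original pre-trained LM.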
