LLM Survey Paper Notes 6-15

Table of Contents

  • Keywords
  • [Background for LLMs](#Background for LLMs)
    • [Technical Evolution of GPT-series Models](#Technical Evolution of GPT-series Models)
      • [Research of OpenAI on LLMs can be roughly divided into the following stages](#Research of OpenAI on LLMs can be roughly divided into the following stages)
        • [Early Explorations](#Early Explorations)
        • [Capacity Leap](#Capacity Leap)
        • [Capacity Enhancement](#Capacity Enhancement)
        • [The Milestones of Language Models](#The Milestones of Language Models)
  • Resources
  • Pre-training
    • [Data Collection](#Data Collection)
    • [Data Preprocessing](#Data Preprocessing)

Keywords

GPT: Generative Pre-Training

Background for LLMs

Technical Evolution of GPT-series Models

Two key points to GPT's success are (I) training decoder-only Transformer language models that can accurately predict the next word, and (II) scaling up the size of language models.
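
To make point (I) concrete, here is a minimal causal language-modeling sketch in PyTorch; the tiny dimensions and the single encoder layer with a causal mask (standing in for a full decoder-only stack) are illustrative assumptions, not OpenAI's implementation:

```python
import torch
import torch.nn.functional as F

vocab_size, d_model, seq_len = 100, 32, 16

# Toy stand-ins for a real decoder-only Transformer.
embed = torch.nn.Embedding(vocab_size, d_model)
block = torch.nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
lm_head = torch.nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, seq_len))  # fake token ids
causal_mask = torch.nn.Transformer.generate_square_subsequent_mask(seq_len)

h = block(embed(tokens), src_mask=causal_mask)       # causal self-attention
logits = lm_head(h)

# Shift by one: the representation at position t predicts token t+1.
loss = F.cross_entropy(logits[:, :-1].reshape(-1, vocab_size),
                       tokens[:, 1:].reshape(-1))
print(loss.item())
```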

Research of OpenAI on LLMs can be roughly divided into the following stages

Early Explorations

Capacity Leap

ICL (in-context learning): GPT-3 formally introduced in-context learning, which solves tasks from a few demonstrations placed in the prompt without updating model weights.
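
A minimal illustration of ICL as prompt construction (the task and labels are made up for the example):

```python
# Demonstrations are placed directly in the prompt; the model is asked to
# continue the pattern, with no gradient update involved.
demos = [("great movie!", "positive"), ("terrible plot.", "negative")]
query = "what a wonderful film."

prompt = "\n".join(f"Review: {x}\nSentiment: {y}" for x, y in demos)
prompt += f"\nReview: {query}\nSentiment:"
print(prompt)  # an LLM completes this with the predicted label
```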

Capacity Enhancement

1. Training on code data

Codex: a GPT model fine-tuned on a large corpus of GitHub code.
2. Alignment with human preference

reinforcement learning from human feedback (RLHF) algorithm

Note that the wording "instruction tuning" seems to have seldom been used in OpenAI's papers and documentation; it is substituted by supervised fine-tuning on human demonstrations (i.e., the first step of the RLHF algorithm).
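
As a rough illustration of one RLHF ingredient, the sketch below shows the pairwise preference loss commonly used to train the reward model (Bradley-Terry style). The `reward_model` and the random features are hypothetical stand-ins, not OpenAI's code:

```python
import torch
import torch.nn.functional as F

d = 64
reward_model = torch.nn.Linear(d, 1)  # hypothetical scalar reward head

# Fake features for a human-preferred (chosen) and a rejected response.
chosen, rejected = torch.randn(8, d), torch.randn(8, d)

r_chosen = reward_model(chosen).squeeze(-1)
r_rejected = reward_model(rejected).squeeze(-1)

# Push the chosen response's reward above the rejected one's:
# loss = -log sigmoid(r_chosen - r_rejected).
loss = -F.logsigmoid(r_chosen - r_rejected).mean()
loss.backward()
```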

The Milestones of Language Models

ChatGPT (based on GPT-3.5 and GPT-4) and GPT-4 (multimodal)

Resources

Stanford Alpaca is the first open instruction-following model fine-tuned based on LLaMA (7B).

Alpaca LoRA (a reproduction of Stanford Alpaca using LoRA)
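
Since Alpaca LoRA relies on low-rank adaptation, a minimal sketch of a LoRA layer may help; the dimensions, rank, and scaling here are illustrative assumptions:

```python
import torch

class LoRALinear(torch.nn.Module):
    """Frozen base weight W plus a trainable low-rank update (alpha/r) * B @ A."""
    def __init__(self, in_dim, out_dim, r=8, alpha=16):
        super().__init__()
        self.base = torch.nn.Linear(in_dim, out_dim)
        for p in self.base.parameters():      # freeze the pretrained weights
            p.requires_grad_(False)
        self.A = torch.nn.Parameter(torch.randn(r, in_dim) * 0.01)
        self.B = torch.nn.Parameter(torch.zeros(out_dim, r))  # zero-init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(128, 128)
out = layer(torch.randn(4, 128))  # only A and B receive gradients
```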

Models, data, and libraries.

Pre-training

Data Collection

General Text Data: webpages, books, and conversational text

Specialized Text Data: multilingual text, scientific text, code

Data Preprocessing

Quality Filtering

  1. Classifier-based approaches train a selection classifier on high-quality texts and leverage it to identify and filter out low-quality data.
  2. Heuristic-based approaches eliminate low-quality texts through a set of well-designed rules: language-based filtering, metric-based filtering, statistic-based filtering, keyword-based filtering (see the sketch after this list).
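
A minimal sketch of such heuristic rules; the thresholds and keyword list are illustrative assumptions, not values from the survey:

```python
BAD_KEYWORDS = {"lorem ipsum", "click here"}  # keyword-based filtering

def keep(text: str) -> bool:
    words = text.split()
    if len(words) < 10:            # metric-based: implausibly short document
        return False
    symbol_ratio = sum(not c.isalnum() and not c.isspace() for c in text) / len(text)
    if symbol_ratio > 0.3:         # statistic-based: symbol-heavy text
        return False
    if any(k in text.lower() for k in BAD_KEYWORDS):
        return False
    return True                    # language-based filtering would additionally need a language-ID model

corpus = ["A normal, well-formed paragraph of English text about language models and data.",
          "click here!!! $$$"]
print([t for t in corpus if keep(t)])  # keeps only the first document
```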

De-duplication

Existing work has found that duplicate data in a corpus would reduce the diversity of language models, which may cause the training process to become unstable and thus affect the model performance.
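
Below is a minimal sketch of document-level de-duplication combining exact hashing with an n-gram Jaccard check; the threshold is an assumption, and production pipelines typically use MinHash/LSH to scale:

```python
import hashlib

def ngrams(text, n=3):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def dedup(docs, threshold=0.8):
    seen_hashes, kept, kept_ngrams = set(), [], []
    for doc in docs:
        h = hashlib.md5(doc.encode()).hexdigest()
        if h in seen_hashes:                  # exact duplicate
            continue
        grams = ngrams(doc)
        if any(len(grams & g) / max(1, len(grams | g)) > threshold
               for g in kept_ngrams):
            continue                          # near duplicate
        seen_hashes.add(h)
        kept.append(doc)
        kept_ngrams.append(grams)
    return kept
```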

  1. Privacy Redaction: remove personally identifiable information (PII) from pre-training text; a regex-based sketch follows below.
  2. Tokenization: segment raw text into sequences of individual tokens, which are subsequently used as the inputs of LLMs. Common schemes: Byte-Pair Encoding (BPE) tokenization, WordPiece tokenization, Unigram tokenization (a BPE sketch also follows).
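
A minimal sketch of rule-based privacy redaction; the regex patterns are illustrative and far from exhaustive:

```python
import re

PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),            # emails
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "<PHONE>"),  # US-style phone numbers
]

def redact(text: str) -> str:
    for pattern, tag in PII_PATTERNS:
        text = pattern.sub(tag, text)
    return text

print(redact("Contact jane.doe@example.com or 555-123-4567."))
# -> "Contact <EMAIL> or <PHONE>."
```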
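
And a minimal sketch of the core BPE training loop: repeatedly merge the most frequent adjacent symbol pair. Real tokenizers operate on bytes and train on far larger corpora:

```python
from collections import Counter

def bpe_train(words, num_merges=10):
    seqs = [list(w) for w in words]          # each word starts as characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for seq in seqs:
            for a, b in zip(seq, seq[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]  # most frequent adjacent pair
        merges.append((a, b))
        for seq in seqs:                     # apply the merge everywhere
            i = 0
            while i < len(seq) - 1:
                if seq[i] == a and seq[i + 1] == b:
                    seq[i:i + 2] = [a + b]
                else:
                    i += 1
    return merges

print(bpe_train(["lower", "lowest", "newer", "wider"], num_merges=5))
```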