LLM Survey Paper Notes (6-15)

Contents

  • Keywords
  • [Background for LLMs](#background-for-llms)
    • [Technical Evolution of GPT-series Models](#technical-evolution-of-gpt-series-models)
      • [Research of OpenAI on LLMs can be roughly divided into the following stages](#research-of-openai-on-llms-can-be-roughly-divided-into-the-following-stages)
        • [Early Explorations](#early-explorations)
        • [Capacity Leap](#capacity-leap)
        • [Capacity Enhancement](#capacity-enhancement)
        • [The Milestones of Language Models](#the-milestones-of-language-models)
  • Resources
  • Pre-training
    • [Data Collection](#data-collection)
    • [Data Preprocessing](#data-preprocessing)

Keywords

GPT: Generative Pre-Training

Background for LLMs

Technical Evolution of GPT-series Models

Two key points to GPT's success are (I) training decoder-only Transformer language models that can accurately predict the next word and (II) scaling up the size of language models.
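Point (I) can be sketched as a training objective: at each position the model conditions only on earlier tokens (the causal, decoder-only setup) and is scored by the negative log-likelihood of the true next token. A minimal sketch with made-up token IDs and probabilities, purely for illustration:

```python
import math

def next_token_loss(token_ids, probs):
    """Average negative log-likelihood of predicting each next token.

    probs[t] is the model's distribution over the vocabulary after seeing
    token_ids[0..t]; position t never sees tokens to its right (causal).
    """
    nll = 0.0
    for t in range(len(token_ids) - 1):
        target = token_ids[t + 1]          # the word to be predicted next
        nll += -math.log(probs[t][target])
    return nll / (len(token_ids) - 1)

# Toy vocabulary of 3 tokens; a model fairly sure of each next token.
tokens = [0, 2, 1]
predicted = [
    [0.1, 0.1, 0.8],  # after token 0, predicts token 2 with p=0.8
    [0.1, 0.8, 0.1],  # after tokens 0,2, predicts token 1 with p=0.8
]
print(round(next_token_loss(tokens, predicted), 4))  # -ln(0.8) ≈ 0.2231
```

Scaling up (point II) keeps this same objective while growing the model and data, which is why the loss is such a convenient single training signal.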

Research of OpenAI on LLMs can be roughly divided into the following stages

Early Explorations

Capacity Leap

ICL (in-context learning)

Capacity Enhancement

1. Training on code data

Codex: a GPT model fine-tuned on a large corpus of GitHub code.

2. Alignment with human preference

reinforcement learning from human feedback (RLHF) algorithm
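The reward-modeling step of RLHF learns from human preference pairs; a common formulation (a Bradley-Terry-style pairwise loss) pushes the human-chosen response to score above the rejected one. A toy sketch assuming scalar rewards are already computed; the function name is made up for illustration:

```python
import math

def preference_loss(r_chosen, r_rejected):
    """Pairwise loss for training a reward model from human comparisons:
    -log(sigmoid(r_chosen - r_rejected)). Small when the reward model
    already ranks the human-preferred response higher."""
    margin = r_chosen - r_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Reward model agrees with the human label -> low loss.
print(round(preference_loss(2.0, -1.0), 4))   # ≈ 0.0486
# Reward model ranks the pair backwards -> high loss.
print(round(preference_loss(-1.0, 2.0), 4))   # ≈ 3.0486
```

The trained reward model then scores sampled responses during the reinforcement-learning stage.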

Note that the wording "instruction tuning" has seldom been used in OpenAI's papers and documentation; it is substituted by supervised fine-tuning on human demonstrations (i.e., the first step of the RLHF algorithm).

The Milestones of Language Models

ChatGPT (based on GPT-3.5 and GPT-4) and GPT-4 (multimodal)

Resources

Stanford Alpaca is the first open instruction-following model fine-tuned based on LLaMA (7B).

Alpaca LoRA (a reproduction of Stanford Alpaca using LoRA)
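LoRA, as used in Alpaca LoRA, freezes the pretrained weight and learns only a low-rank update, so the effective weight is W + (alpha/r)·B·A. A pure-Python sketch with tiny matrices and hypothetical names, just to show the shape of the idea:

```python
def matmul(X, Y):
    """Plain nested-list matrix multiply (no external libraries)."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_weight(W, A, B, alpha, r):
    """Effective weight W + (alpha / r) * B @ A.

    W (d x k) stays frozen; only the low-rank factors B (d x r) and
    A (r x k) are trained, so with r << d, k the trainable parameter
    count drops dramatically."""
    scale = alpha / r
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

# d = k = 2, rank r = 1: here the savings are trivial, but in a real
# 4096 x 4096 layer, r = 8 trains ~65K values instead of ~16.8M.
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen pretrained weight
B = [[1.0], [2.0]]             # d x r, trainable
A = [[0.5, 0.5]]               # r x k, trainable
print(lora_weight(W, A, B, alpha=1.0, r=1))  # [[1.5, 0.5], [1.0, 2.0]]
```

This is why a LoRA reproduction can fine-tune a 7B model on modest hardware: gradients are needed only for A and B.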

Models, data, libraries

Pre-training

Data Collection

General Text Data: webpages, books, and conversational text

Specialized Text Data: multilingual text, scientific text, code

Data Preprocessing

Quality Filtering

  1. Classifier-based approaches train a selection classifier on high-quality texts and use it to identify and filter out low-quality data.
  2. Heuristic-based approaches eliminate low-quality texts through a set of well-designed rules: language-based filtering, metric-based filtering, statistic-based filtering, keyword-based filtering.
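The heuristic rule families above can be sketched as one combined filter. The thresholds and blocked keywords below are illustrative assumptions, not values from the survey:

```python
# Keyword-based rules: reject documents containing boilerplate phrases.
BLOCKED = {"lorem ipsum", "click here to subscribe"}

def passes_quality_filter(text, min_words=5, min_alpha_ratio=0.6):
    """Heuristic quality filter combining statistic-based (length),
    metric-based (symbol ratio), and keyword-based checks."""
    words = text.split()
    if len(words) < min_words:                        # statistic: too short
        return False
    alpha = sum(c.isalpha() for c in text)
    if alpha / max(len(text), 1) < min_alpha_ratio:   # metric: mostly symbols
        return False
    lowered = text.lower()
    if any(k in lowered for k in BLOCKED):            # keyword: boilerplate
        return False
    return True

print(passes_quality_filter("The model predicts the next token from context."))  # True
print(passes_quality_filter("$$$ !!! ### @@@ %%% ^^^"))                          # False
```

A real pipeline would also add language identification (e.g. keep only target languages) as the language-based rule.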

De-duplication

Existing work has found that duplicate data in a corpus reduces the diversity of language models, which may make the training process unstable and thus hurt model performance.
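A minimal exact de-duplication pass at the document level, hashing whitespace-normalized text; real pipelines also apply fuzzy matching (e.g. MinHash) and de-duplicate at sentence, document, and dataset granularities. Names and sample texts are illustrative:

```python
import hashlib

def deduplicate(docs):
    """Keep the first occurrence of each document, comparing by a hash of
    the lowercased, whitespace-normalized text (exact de-duplication)."""
    seen, kept = set(), []
    for doc in docs:
        normalized = " ".join(doc.split()).lower()
        key = hashlib.sha256(normalized.encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept

corpus = ["Scaling laws for LLMs.", "scaling   laws for llms.", "Tokenization basics."]
print(deduplicate(corpus))  # the near-verbatim second entry is dropped
```

Hashing keeps memory proportional to the number of unique documents rather than their total size.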

Privacy Redaction

Removal of PII (personally identifiable information) from the corpus.

Tokenization

It aims to segment raw text into sequences of individual tokens, which are subsequently used as the inputs of LLMs. Common methods: Byte-Pair Encoding (BPE) tokenization, WordPiece tokenization, and Unigram tokenization.
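A toy version of the BPE idea: repeatedly merge the most frequent adjacent symbol pair in the corpus. This is a simplified sketch, not a production tokenizer (real BPE implementations work on bytes and handle pre-tokenization):

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """Learn BPE merge rules: each round, count adjacent symbol pairs
    across the corpus and merge the most frequent pair into one symbol."""
    vocab = {tuple(w): c for w, c in Counter(words).items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for sym, c in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += c
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        new_vocab = {}
        for sym, c in vocab.items():          # apply the merge everywhere
            out, i = [], 0
            while i < len(sym):
                if i + 1 < len(sym) and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1])
                    i += 2
                else:
                    out.append(sym[i])
                    i += 1
            new_vocab[tuple(out)] = c
        vocab = new_vocab
    return merges

print(bpe_merges(["low", "low", "lower", "lowest"], 2))  # [('l', 'o'), ('lo', 'w')]
```

WordPiece differs mainly in the merge-scoring criterion, and Unigram works top-down by pruning a large candidate vocabulary instead of merging upward.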