Large Language Model Survey Paper Notes 6-15

Table of Contents

  • [Keywords](#keywords)
  • [Background for LLMs](#background-for-llms)
    • [Technical Evolution of GPT-series Models](#technical-evolution-of-gpt-series-models)
      • [Research of OpenAI on LLMs can be roughly divided into the following stages](#research-of-openai-on-llms-can-be-roughly-divided-into-the-following-stages)
        • [Early Explorations](#early-explorations)
        • [Capacity Leap](#capacity-leap)
        • [Capacity Enhancement](#capacity-enhancement)
        • [The Milestones of Language Models](#the-milestones-of-language-models)
  • [Resources](#resources)
  • [Pre-training](#pre-training)
    • [Data Collection](#data-collection)
    • [Data Preprocessing](#data-preprocessing)

Keywords

GPT: Generative Pre-Training

Background for LLMs

Technical Evolution of GPT-series Models

Two key points to GPT's success are (I) training decoder-only Transformer language models that can accurately predict the next word, and (II) scaling up the size of language models.
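A minimal sketch of the next-word prediction (causal LM) objective, assuming PyTorch. The embedding + linear head below is a toy stand-in for a deep decoder-only Transformer; the loss computation is the same.

```python
import torch
import torch.nn.functional as F

vocab_size, d_model = 1000, 64
embed = torch.nn.Embedding(vocab_size, d_model)
lm_head = torch.nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (2, 16))   # (batch, seq_len)
hidden = embed(tokens)                           # stand-in for Transformer layers
logits = lm_head(hidden)                         # (batch, seq_len, vocab_size)

# Shift by one: the prediction at position t is scored against token t+1.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),      # predictions for steps 0..T-2
    tokens[:, 1:].reshape(-1),                   # targets: the next tokens
)
print(loss.item())
```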

Research of OpenAI on LLMs can be roughly divided into the following stages

Early Explorations

Capacity Leap

ICL (in-context learning), formally introduced by GPT-3
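An illustrative ICL prompt in the style of the GPT-3 paper: a task description plus a few demonstrations, then a query; the model completes the answer with no parameter updates.

```python
# The task and demonstrations are illustrative examples, not from the survey.
prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "peppermint => menthe poivrée\n"
    "cheese =>"
)
# Fed to a GPT-3-class model, the expected completion is " fromage".
```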

Capacity Enhancement

1. training on code data

Codex: a GPT model fine-tuned on a large corpus of GitHub code.
2. alignment with human preferences

reinforcement learning from human feedback (RLHF) algorithm

Note that the wording "instruction tuning" has seldom been used in OpenAI's papers and documentation; it is substituted by supervised fine-tuning on human demonstrations (i.e., the first step of the RLHF algorithm).
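A structural sketch of the three RLHF stages described in InstructGPT. All function names here are hypothetical placeholders, not a real library API.

```python
def supervised_fine_tune(base_model, demonstrations):
    """Step 1: supervised fine-tuning (SFT) on human-written demonstrations."""
    ...

def train_reward_model(sft_model, ranked_comparisons):
    """Step 2: fit a reward model on human rankings of candidate outputs."""
    ...

def ppo_optimize(sft_model, reward_model, prompts):
    """Step 3: optimize the policy with PPO against the reward model,
    with a KL penalty keeping it close to the SFT model."""
    ...
```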

The Milestones of Language Models

ChatGPT (based on GPT-3.5 and GPT-4) and GPT-4 (multimodal)

Resources

Stanford Alpaca is the first open instruction-following model fine-tuned based on LLaMA (7B).

Alpaca LoRA (a reproduction of Stanford Alpaca using LoRA)
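A minimal sketch of the LoRA idea, assuming PyTorch: the frozen pretrained weight W is augmented with a trainable low-rank update B @ A, so only r*(d_in + d_out) parameters are trained instead of d_in*d_out. Dimensions and hyperparameters (r, alpha) below are illustrative.

```python
import torch

class LoRALinear(torch.nn.Module):
    def __init__(self, d_in, d_out, r=8, alpha=16):
        super().__init__()
        self.weight = torch.nn.Parameter(
            torch.randn(d_out, d_in), requires_grad=False)     # frozen pretrained W
        self.A = torch.nn.Parameter(torch.randn(r, d_in) * 0.01)  # trainable
        self.B = torch.nn.Parameter(torch.zeros(d_out, r))        # trainable, zero init
        self.scale = alpha / r

    def forward(self, x):
        # y = x W^T + scale * (x A^T) B^T  -- the low-rank adaptation path
        return x @ self.weight.T + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(512, 512)
print(layer(torch.randn(1, 512)).shape)  # torch.Size([1, 512])
```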

models, data, libraries

Pre-training

Data Collection

General Text Data: webpages, books, and conversational text

Specialized Text Data: multilingual text, scientific text, code

Data Preprocessing

Quality Filtering

  1. Classifier-based approaches train a selection classifier on high-quality texts and use it to identify and filter out low-quality data.
  2. Heuristic-based approaches eliminate low-quality texts through a set of well-designed rules: language-based filtering, metric-based filtering, statistic-based filtering, and keyword-based filtering (see the sketch after this list).
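Illustrative heuristic quality filters in the spirit of the rules above; the thresholds and keyword list are invented for this sketch, not taken from the survey.

```python
def keep_document(text: str) -> bool:
    words = text.split()
    if len(words) < 50:                               # statistic-based: too short
        return False
    alpha_ratio = sum(c.isalpha() for c in text) / max(len(text), 1)
    if alpha_ratio < 0.6:                             # metric-based: mostly symbols
        return False
    banned = ("lorem ipsum", "click here")            # keyword-based filtering
    if any(k in text.lower() for k in banned):
        return False
    return True

print(keep_document("word " * 100))   # True
```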

De-duplication

Existing work has found that duplicate data in a corpus would reduce the diversity of language models, which may cause the training process to become unstable and thus affect the model performance.
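A simple sketch of exact de-duplication at the document level via content hashing. Production pipelines also apply fuzzy matching (e.g., MinHash) and de-duplicate at the sentence and dataset level, which this sketch does not cover.

```python
import hashlib

def deduplicate(docs):
    seen, unique = set(), []
    for doc in docs:
        # Hash a normalized form so trivially different copies collide.
        h = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if h not in seen:
            seen.add(h)
            unique.append(doc)
    return unique

print(deduplicate(["Hello world.", "hello world.", "Other text."]))
# ['Hello world.', 'Other text.']
```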

Privacy Redaction

Remove personally identifiable information (PII), e.g., names, addresses, and phone numbers, from the corpus.

Tokenization

Tokenization aims to segment raw text into sequences of individual tokens, which are subsequently used as the inputs of LLMs. Representative methods: Byte-Pair Encoding (BPE) tokenization; WordPiece tokenization; Unigram tokenization.
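A toy illustration of a single BPE merge step: count adjacent symbol pairs over the corpus and merge the most frequent pair into one token. Real BPE tokenizers learn tens of thousands of merges from large corpora; the corpus here is made up.

```python
from collections import Counter

corpus = [list("lower"), list("lowest"), list("low")]

# Count frequencies of adjacent symbol pairs across all words.
pairs = Counter()
for word in corpus:
    for a, b in zip(word, word[1:]):
        pairs[(a, b)] += 1
best = pairs.most_common(1)[0][0]   # most frequent adjacent pair

def merge(word, pair):
    out, i = [], 0
    while i < len(word):
        if i + 1 < len(word) and (word[i], word[i + 1]) == pair:
            out.append(word[i] + word[i + 1])   # fuse the pair into one symbol
            i += 2
        else:
            out.append(word[i])
            i += 1
    return out

corpus = [merge(w, best) for w in corpus]
print(best, corpus)   # ('l', 'o') -> [['lo','w','e','r'], ...]
```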