LLM Survey Paper Notes 6-15

Contents

  • Keywords
  • [Background for LLMs](#Background for LLMs)
    • [Technical Evolution of GPT-series Models](#Technical Evolution of GPT-series Models)
      • [Research of OpenAI on LLMs can be roughly divided into the following stages](#Research of OpenAI on LLMs can be roughly divided into the following stages)
        • [Early Explorations](#Early Explorations)
        • [Capacity Leap](#Capacity Leap)
        • [Capacity Enhancement](#Capacity Enhancement)
        • [The Milestones of Language Models](#The Milestones of Language Models)
  • Resources
  • Pre-training
    • [Data Collection](#Data Collection)
    • [Data Preprocessing](#Data Preprocessing)

Keywords

GPT: Generative Pre-Training

Background for LLMs

Technical Evolution of GPT-series Models

Two key points to GPT's success are (I) training decoder-only Transformer language models that can accurately predict the next word, and (II) scaling up the size of language models.
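As a minimal illustration of point (I), the sketch below spells out the next-word prediction objective: the model is trained so that each token gets high probability given the tokens before it. The `logprob_fn` stand-in is hypothetical; a real GPT computes these log-probabilities with a decoder-only Transformer.

```python
import math

# Toy next-token prediction loss: average negative log-likelihood
# of each token given its prefix.
def next_token_loss(token_ids, logprob_fn):
    total = 0.0
    for t in range(1, len(token_ids)):
        prefix, target = token_ids[:t], token_ids[t]
        total += -logprob_fn(prefix, target)  # -log p(target | prefix)
    return total / (len(token_ids) - 1)

# Hypothetical "model": uniform over a 100-token vocabulary, just to run the loop.
uniform = lambda prefix, target: math.log(1.0 / 100)
print(next_token_loss([5, 17, 42, 7], uniform))  # ≈ log(100) ≈ 4.605
```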

Research of OpenAI on LLMs can be roughly divided into the following stages

Early Explorations

Capacity Leap

ICL (in-context learning), formally introduced by GPT-3: it utilizes LLMs in a few-shot or zero-shot way.

Capacity Enhancement

1. Training on code data

Codex: a GPT model fine-tuned on a large corpus of GitHub code

2. Alignment with human preferences

reinforcement learning from human feedback (RLHF) algorithm

Note that the wording "instruction tuning" seems to have seldom been used in OpenAI's papers and documentation; it is substituted by supervised fine-tuning on human demonstrations (i.e., the first step of the RLHF algorithm).
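To make the RLHF pipeline more concrete, here is a minimal sketch of the pairwise loss used in its reward-modeling stage, assuming the standard Bradley-Terry formulation: the reward assigned to the human-preferred response should exceed that of the rejected one. The scalar scores are stand-ins for a reward model's outputs.

```python
import math

# Pairwise reward-model loss: -log sigmoid(r_chosen - r_rejected).
# Minimizing it pushes the preferred answer's reward above the rejected one's.
def reward_pair_loss(r_chosen, r_rejected):
    return math.log(1.0 + math.exp(-(r_chosen - r_rejected)))

print(reward_pair_loss(2.0, 0.5))  # small: ranking already matches the preference
print(reward_pair_loss(0.5, 2.0))  # large: model ranks the pair the wrong way
```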

The Milestones of Language Models

ChatGPT (based on GPT-3.5 and GPT-4) and GPT-4 (multimodal)

Resources

Stanford Alpaca is the first open instruction-following model fine-tuned based on LLaMA (7B).

Alpaca LoRA (a reproduction of Stanford Alpaca using LoRA)
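Since Alpaca LoRA relies on LoRA (low-rank adaptation), a minimal sketch may help, assuming the standard formulation from the LoRA paper: the frozen pretrained weight W is augmented with a trainable low-rank update (alpha / r) * B @ A, with B zero-initialized so the adapted model starts out identical to the base model. Dimensions below are toy values.

```python
import numpy as np

d, r, alpha = 8, 2, 16                 # toy dimensions and scaling factor
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))            # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01     # trainable down-projection
B = np.zeros((d, r))                   # trainable up-projection, zero init

def lora_forward(x):
    # h = W x + (alpha / r) * B A x ; only A and B are updated during fine-tuning
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d)
assert np.allclose(lora_forward(x), W @ x)  # identical to the base model at init
```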

models, data, libraries

Pre-training

Data Collection

General Text Data: webpages, books, and conversational text

Specialized Text Data: multilingual text, scientific text, and code

Data Preprocessing

Quality Filtering

  1. Classifier-based filtering: train a selection classifier on high-quality texts and use it to identify and filter out low-quality data.
  2. Heuristic-based filtering: eliminate low-quality texts through a set of well-designed rules: language-based filtering, metric-based filtering, statistic-based filtering, and keyword-based filtering (see the sketch after this list).
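A toy sketch of heuristic-based filtering, combining the metric-, statistic-, and keyword-based rule families; all thresholds and keywords are invented examples, not values from the survey. Language-based filtering would additionally need a language detector.

```python
# Keep a document only if it passes all heuristic rules.
def keep_document(text, min_words=50, max_symbol_ratio=0.1,
                  banned_keywords=("lorem ipsum", "click here")):
    words = text.split()
    if len(words) < min_words:                          # metric-based rule
        return False
    symbols = sum(not c.isalnum() and not c.isspace() for c in text)
    if symbols / max(len(text), 1) > max_symbol_ratio:  # statistic-based rule
        return False
    lowered = text.lower()
    if any(k in lowered for k in banned_keywords):      # keyword-based rule
        return False
    return True

print(keep_document("too short"))  # False: fails the length rule
```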

De-duplication

Existing work has found that duplicate data in a corpus reduces the diversity of language models, which may cause the training process to become unstable and thus hurt model performance.
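A minimal exact-match de-duplication sketch: hash whitespace- and case-normalized documents and keep the first occurrence. Real pipelines also remove near-duplicates (e.g., with MinHash) at sentence, document, and dataset levels.

```python
import hashlib

# Keep only the first occurrence of each normalized document.
def dedup(docs):
    seen, unique = set(), []
    for doc in docs:
        key = hashlib.sha256(" ".join(doc.split()).lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

print(dedup(["Hello  world", "hello world", "other text"]))  # 2 docs survive
```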

Privacy Redaction

Remove personally identifiable information (PII) from the pre-training corpus, typically with rule-based methods such as keyword spotting.

Tokenization

It aims to segment raw text into sequences of individual tokens, which are subsequently used as the inputs of LLMs. Representative methods: Byte-Pair Encoding (BPE) tokenization, WordPiece tokenization, and Unigram tokenization. A toy sketch of both steps follows.
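Below is a minimal sketch of both steps: a toy rule-based PII scrubber (the regex pattern is an invented example) and a classic BPE trainer that repeatedly merges the most frequent adjacent symbol pair. Production tokenizers, such as the byte-level BPE used by GPT models, are considerably more elaborate.

```python
import re
from collections import Counter

# Privacy redaction: a toy rule-based scrubber (pattern is an example only).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact_pii(text):
    return EMAIL.sub("[EMAIL]", text)

# BPE training: repeatedly merge the most frequent adjacent symbol pair.
def train_bpe(words, num_merges):
    vocab = Counter(tuple(w) for w in words)   # words as tuples of characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            for pair in zip(word, word[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)       # most frequent adjacent pair
        merges.append(best)
        new_vocab = Counter()
        for word, freq in vocab.items():       # apply the merge everywhere
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(word[i] + word[i + 1]); i += 2
                else:
                    out.append(word[i]); i += 1
            new_vocab[tuple(out)] += freq
        vocab = new_vocab
    return merges

print(redact_pii("contact: alice@example.com"))        # contact: [EMAIL]
print(train_bpe(["low", "low", "lower", "lowest"], 3)) # [('l','o'), ('lo','w'), ('low','e')]
```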