Build a Large Language Model
Background
Chapter 1: Understanding Large Language Models
See the earlier article in this series: [AI Series] Learning Large Models (GPT) from Scratch (1) - Build a Large Language Model (From Scratch)
Chapter 2: Working with Text Data
See the earlier article in this series: [AI Series] Learning Large Models (GPT) from Scratch (1) - Build a Large Language Model (From Scratch)
Chapter 3: Coding Attention Mechanisms
See the earlier article in this series: [AI Series] Learning Large Models (GPT) from Scratch (1) - Build a Large Language Model (From Scratch)
Chapter 4: Implementing a GPT Model from Scratch
Core Architecture
Multi-Head Attention Module
```python
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super().__init__()
        assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
        self.d_model = d_model
        self.num_heads = num_heads
        self.head_dim = d_model // num_heads
        self.wq = nn.Linear(d_model, d_model)
        self.wk = nn.Linear(d_model, d_model)
        self.wv = nn.Linear(d_model, d_model)
        self.dense = nn.Linear(d_model, d_model)

    def forward(self, q, k, v, mask=None):
        batch_size = q.size(0)
        # Linear projections, then split into heads: (batch, heads, seq, head_dim)
        q = self.wq(q).view(batch_size, -1, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.wk(k).view(batch_size, -1, self.num_heads, self.head_dim).transpose(1, 2)
        v = self.wv(v).view(batch_size, -1, self.num_heads, self.head_dim).transpose(1, 2)
        # Scaled dot-product attention
        scores = torch.matmul(q, k.transpose(-2, -1)) / self.head_dim ** 0.5
        if mask is not None:
            # Convention: mask == 1 marks positions to block
            scores = scores + mask * -1e9
        attention = torch.softmax(scores, dim=-1)
        output = torch.matmul(attention, v)
        # Merge the heads back into (batch, seq, d_model)
        output = output.transpose(1, 2).contiguous().view(batch_size, -1, self.d_model)
        return self.dense(output)
```
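The reshape dance in `forward` is the part that most often trips people up. The standalone sketch below (with made-up toy sizes, independent of the class above) shows the split into heads and the merge back, confirming the round trip preserves the tensor exactly:

```python
import torch

batch, seq, d_model, num_heads = 2, 5, 8, 4
head_dim = d_model // num_heads

x = torch.randn(batch, seq, d_model)

# Split: (batch, seq, d_model) -> (batch, heads, seq, head_dim)
heads = x.view(batch, seq, num_heads, head_dim).transpose(1, 2)
print(heads.shape)  # torch.Size([2, 4, 5, 2])

# Merge: inverse of the split, back to (batch, seq, d_model)
merged = heads.transpose(1, 2).contiguous().view(batch, seq, d_model)
print(torch.equal(merged, x))  # True
```

The `.contiguous()` call is needed because `transpose` returns a non-contiguous view, and `view` requires contiguous memory.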
Feed-Forward Network Module
```python
class FeedForward(nn.Module):
    def __init__(self, d_model, d_ff):
        super().__init__()
        self.linear1 = nn.Linear(d_model, d_ff)
        self.linear2 = nn.Linear(d_ff, d_model)
        self.relu = nn.ReLU()

    def forward(self, x):
        # Expand to d_ff, apply the non-linearity, project back to d_model
        return self.linear2(self.relu(self.linear1(x)))
```
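The FFN is applied position-wise: `nn.Linear` acts on the last dimension, so the same two-layer MLP runs independently at every sequence position and the output keeps the input's shape. A standalone sketch with made-up sizes (using an equivalent `nn.Sequential` so it runs without the class above):

```python
import torch
import torch.nn as nn

d_model, d_ff = 8, 32
ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))

x = torch.randn(2, 5, d_model)   # (batch, seq_len, d_model)
out = ffn(x)
print(out.shape)  # torch.Size([2, 5, 8])

# Position-wise: feeding one position alone matches that position in the batched output
single = ffn(x[:, 0])
print(torch.allclose(single, out[:, 0], atol=1e-6))
```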
Transformer Decoder Layer
```python
class DecoderLayer(nn.Module):
    def __init__(self, d_model, num_heads, d_ff, dropout):
        super().__init__()
        self.mha = MultiHeadAttention(d_model, num_heads)
        self.ffn = FeedForward(d_model, d_ff)
        self.layernorm1 = nn.LayerNorm(d_model)
        self.layernorm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, mask):
        # Masked self-attention sub-layer with residual connection and layer norm
        attn_output = self.mha(x, x, x, mask)
        x = self.layernorm1(x + self.dropout(attn_output))
        # Feed-forward sub-layer with residual connection and layer norm
        ffn_output = self.ffn(x)
        return self.layernorm2(x + self.dropout(ffn_output))
```
Positional Encoding
```python
import math

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_len=512):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
        pe = torch.zeros(max_len, d_model)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        # Registered as a buffer: saved with the model, but not a trainable parameter
        self.register_buffer('pe', pe)

    def forward(self, x):
        # x: (batch, seq_len, d_model); pe[:seq_len] broadcasts over the batch dim
        return x + self.pe[:x.size(1)]
```
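As a sanity check on the sinusoidal table: at position 0, the even (sin) columns should all be 0 and the odd (cos) columns all 1, and adding the encoding should preserve the input's shape. A standalone sketch repeating the same formula (toy sizes, independent of the class above):

```python
import math
import torch

d_model, max_len = 8, 16
position = torch.arange(max_len).unsqueeze(1)
div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
pe = torch.zeros(max_len, d_model)
pe[:, 0::2] = torch.sin(position * div_term)   # sin(0) = 0 in row 0
pe[:, 1::2] = torch.cos(position * div_term)   # cos(0) = 1 in row 0

print(pe[0])  # alternating 0 and 1

x = torch.randn(2, 5, d_model)   # (batch, seq_len, d_model)
out = x + pe[:x.size(1)]         # (seq_len, d_model) broadcasts over batch
print(out.shape)  # torch.Size([2, 5, 8])
```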
The code above shows the implementation logic of GPT's core components. A complete implementation additionally needs:
- a token embedding layer
- a stack of multiple decoder layers
- autoregressive (causal) mask generation
- an output projection layer

Complete open-source implementations are available in Hugging Face's transformers library and OpenAI's officially released code.
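Of these missing pieces, the causal mask is the easiest to sketch. Under the convention used in MultiHeadAttention above (mask == 1 marks positions to block), the causal mask is the strict upper triangle, so each position can attend only to itself and earlier positions. A minimal standalone sketch:

```python
import torch

def causal_mask(seq_len):
    # 1 above the diagonal = future positions that must be blocked
    return torch.triu(torch.ones(seq_len, seq_len), diagonal=1)

mask = causal_mask(4)
print(mask)
# tensor([[0., 1., 1., 1.],
#         [0., 0., 1., 1.],
#         [0., 0., 0., 1.],
#         [0., 0., 0., 0.]])

# Applied to attention scores, softmax puts ~zero weight on future positions:
scores = torch.zeros(4, 4)
weights = torch.softmax(scores + mask * -1e9, dim=-1)
print(weights[1])  # row 1 attends only to positions 0 and 1: ~[0.5, 0.5, 0, 0]
```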
Code Verification
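One quick verification is to stack a couple of decoder layers under a causal mask and check that shapes are preserved end to end and the output stays finite. The sketch below repeats condensed copies of the classes above so it runs on its own; the sizes are toy values chosen for the check, not recommendations:

```python
import torch
import torch.nn as nn

# Condensed copy of MultiHeadAttention from above, for a self-contained check
class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super().__init__()
        self.d_model, self.num_heads = d_model, num_heads
        self.head_dim = d_model // num_heads
        self.wq, self.wk = nn.Linear(d_model, d_model), nn.Linear(d_model, d_model)
        self.wv, self.dense = nn.Linear(d_model, d_model), nn.Linear(d_model, d_model)

    def forward(self, q, k, v, mask=None):
        b = q.size(0)
        q = self.wq(q).view(b, -1, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.wk(k).view(b, -1, self.num_heads, self.head_dim).transpose(1, 2)
        v = self.wv(v).view(b, -1, self.num_heads, self.head_dim).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        if mask is not None:
            scores = scores + mask * -1e9
        out = torch.softmax(scores, dim=-1) @ v
        return self.dense(out.transpose(1, 2).contiguous().view(b, -1, self.d_model))

# Condensed decoder layer (FFN inlined as nn.Sequential)
class DecoderLayer(nn.Module):
    def __init__(self, d_model, num_heads, d_ff, dropout):
        super().__init__()
        self.mha = MultiHeadAttention(d_model, num_heads)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.ln1, self.ln2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x, mask):
        x = self.ln1(x + self.drop(self.mha(x, x, x, mask)))
        return self.ln2(x + self.drop(self.ffn(x)))

# Toy hyperparameters, for the check only
d_model, num_heads, d_ff, seq_len, batch = 16, 4, 32, 6, 2
layers = nn.ModuleList(DecoderLayer(d_model, num_heads, d_ff, 0.1) for _ in range(2))
mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1)  # causal mask

x = torch.randn(batch, seq_len, d_model)
for layer in layers:
    x = layer(x, mask)
print(x.shape)  # torch.Size([2, 6, 16])
```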
Chapter 5: Pretraining on Unlabeled Data
Follow me for updates.
Chapter 6: Fine-Tuning for Text Classification
Follow me for updates.
Chapter 7: Fine-Tuning to Follow Instructions
Follow me for updates.
References
Code collection: GitHub