[Step-by-Step Tutorial] Implementing a Modular Transformer Dialogue Generation Model from Scratch (Complete PyTorch Code)

Introduction

The Transformer is a landmark model in NLP: its self-attention mechanism fundamentally changed how sequences are modeled. Using a complete dialogue-generation example, this article dissects a modular Transformer implementation from scratch, covering the Embedding layer, positional encoding, multi-head attention, and encoder/decoder layers, and builds the full PyTorch pipeline from data processing and tokenizer training to model training.

1. Overall Code Architecture

The code implements an end-to-end dialogue generation model, organized into six parts:

  1. Configuration: every hyperparameter for the model, training, and data
  2. Core Transformer modules: a modular Transformer built entirely by hand
  3. Tokenizer training: a WordPiece tokenizer trained with HuggingFace Tokenizers
  4. Dialogue dataset: turns question-answer pairs into the model's input format
  5. Helper functions: utilities such as text cleaning
  6. Training pipeline: the complete training, validation, and inference logic

2. Module-by-Module Walkthrough

2.1 Configuration Parameters

This part is the project's "control panel": every tunable parameter lives in one place, which makes later tuning and maintenance easier.

python
# ===================== 1. Configuration =====================
DATA_DIR = "./dialogue_data"          # data directory
MODEL_SAVE_DIR = "./dialogue_model_save"  # model save directory

# Core model parameters
VOCAB_SIZE = 10000                    # vocabulary size
# Special-token IDs as assigned by the tokenizer trained below:
# [UNK]=0, [PAD]=1, [CLS]=2, [SEP]=3, [MASK]=4
PADDING_ID = 1                        # ID of the [PAD] token
CLS_ID = 2                            # ID of the [CLS] token
SEP_ID = 3                            # ID of the [SEP] token
MAX_INPUT_LEN = 50                    # maximum input length
MAX_OUTPUT_LEN = 50                   # maximum output length

# Architecture parameters
D_MODEL = 256                         # hidden dimension (must be divisible by NHEAD)
NHEAD = 8                             # number of attention heads (256 / 8 = 32 dims per head)
NUM_ENCODER_LAYERS = 3                # number of encoder layers
NUM_DECODER_LAYERS = 3                # number of decoder layers
DIM_FEEDFORWARD = 512                 # feed-forward dimension
DROPOUT = 0.1                         # dropout probability
MAX_POS_LEN = 512                     # maximum positional-encoding length

# Training parameters
BATCH_SIZE = 32                       # batch size
EPOCHS = 200                          # number of epochs
LEARNING_RATE = 1e-4                  # learning rate

DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # device selection

Key notes

  • D_MODEL must be divisible by NHEAD, because the hidden dimension is split evenly across the attention heads
  • PADDING_ID/CLS_ID/SEP_ID must match the IDs the tokenizer actually assigns to its special tokens; with the training order ["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"] used below, that means [PAD]=1, [CLS]=2, [SEP]=3
  • MAX_POS_LEN sets the maximum positional-encoding length and must not be smaller than the maximum input/output length
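
A cheap way to catch mismatched IDs early is a sanity check against the trained tokenizer. This is a hedged sketch: it assumes train_tokenizer() from Section 2.3 has already written dialogue_tokenizer.json and that the config constants above are in scope.

python
# Sanity-check sketch: verify the config constants against the trained tokenizer.
from tokenizers import Tokenizer

tok = Tokenizer.from_file("./dialogue_model_save/dialogue_tokenizer.json")
for name, expected in [("[PAD]", PADDING_ID), ("[CLS]", CLS_ID), ("[SEP]", SEP_ID)]:
    actual = tok.token_to_id(name)
    assert actual == expected, f"{name}: config says {expected}, tokenizer assigned {actual}"
assert D_MODEL % NHEAD == 0, "D_MODEL must be divisible by NHEAD"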

2.2 Core Modular Transformer Implementation

This is the heart of the code: every core Transformer component is implemented by hand instead of reaching for nn.Transformer.

2.2.1 Word Embedding (WordEmbedding)
python
class WordEmbedding(nn.Module):
    def __init__(self, vocab_size, embed_dim, padding_idx):
        super(WordEmbedding, self).__init__()
        # The embedding vector at padding_idx is zeroed and excluded from training
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=padding_idx)
        self.scale = math.sqrt(embed_dim)  # scaling factor, stabilizes training

    def forward(self, x):
        # x: [batch_size, seq_len]
        # returns: [batch_size, seq_len, embed_dim]
        return self.embedding(x) * self.scale

What it does

  • Maps discrete token IDs to continuous vector representations
  • Multiplying by √d_model follows the original Transformer paper and balances the scale of the embeddings
  • padding_idx keeps the PAD embedding fixed at zero, so it receives no gradient updates
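
A quick shape check with toy sizes (a sketch, assuming the class above and torch are in scope; not part of the training script):

python
# Usage sketch: toy dimensions, just to show the shapes and the zeroed PAD row.
import torch

emb = WordEmbedding(vocab_size=10, embed_dim=4, padding_idx=1)
ids = torch.tensor([[2, 5, 1, 1]])  # one sequence; the last two tokens are PAD
out = emb(ids)
print(out.shape)                    # torch.Size([1, 4, 4])
print(out[0, 2])                    # the PAD row stays all zeros even after scaling
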
2.2.2 Positional Encoding (PositionalEncoding)

The Transformer itself carries no notion of order, so positional information has to be injected into the sequence through positional encodings.

python
class PositionalEncoding(nn.Module):
    def __init__(self, dim, max_len=5000, dropout=0.1):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        
        # Positional-encoding matrix [max_len, dim]
        pe = torch.zeros(max_len, dim)
        # Position indices [max_len, 1]
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        # Denominator term: 1 / 10000^(2i / d_model)
        div_term = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
        
        # sin at even indices, cos at odd indices
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        
        # Add a batch dimension [1, max_len, dim] for broadcasting
        pe = pe.unsqueeze(0)
        # Register as a buffer: saved with the model but never updated
        self.register_buffer('pe', pe)

    def forward(self, x):
        # x: [batch, seq_len, dim]
        # Slice the encodings to the sequence length and add them to the embeddings
        x = x + self.pe[:, :x.size(1), :]
        return self.dropout(x)

How it works

  • Sine/cosine functions encode positions deterministically, with no learned parameters, for any sequence length up to max_len
  • register_buffer keeps the pe matrix out of the optimizer's parameter updates
  • The Dropout layer adds regularization noise to reduce overfitting
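
A small behavior check (a sketch; with dropout set to 0 the module reduces to a pure addition):

python
# Usage sketch: with dropout=0.0 the output is exactly x + pe[:, :seq_len].
import torch

pos = PositionalEncoding(dim=8, max_len=16, dropout=0.0)
x = torch.zeros(2, 5, 8)                    # [batch, seq_len, dim]
y = pos(x)
print(y.shape)                              # torch.Size([2, 5, 8])
print(torch.allclose(y[0], pos.pe[0, :5]))  # True: zero input returns the raw encodings
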
2.2.3 Multi-Head Attention (MultiHeadAttention)

This is the core component of the Transformer: several attention computations run in parallel over different representation subspaces.

python
class MultiHeadAttention(nn.Module):
    def __init__(self, dim, n_head, dropout=0.1):
        super(MultiHeadAttention, self).__init__()
        # The model dimension must split evenly across the heads
        assert dim % n_head == 0, "dim must be divisible by n_head"
        self.n_head = n_head      # number of attention heads
        self.dim = dim            # total dimension
        self.d_k = dim // n_head  # dimension per head
        
        # Linear projections for Q/K/V
        self.wq = nn.Linear(dim, dim)
        self.wk = nn.Linear(dim, dim)
        self.wv = nn.Linear(dim, dim)
        self.fc_out = nn.Linear(dim, dim)  # output projection
        
        self.dropout = nn.Dropout(dropout)
        self.softmax = nn.Softmax(dim=-1)

    def split_heads(self, x):
        """Split the model dimension into multiple attention heads.
        x: [batch_size, seq_len, dim]
        returns: [batch_size, n_head, seq_len, d_k]
        """
        batch_size, seq_len, _ = x.size()
        return x.view(batch_size, seq_len, self.n_head, self.d_k).transpose(1, 2)

    def forward(self, q, k, v, mask=None):
        batch_size = q.size(0)
        
        # 1. Project and split into heads
        q = self.split_heads(self.wq(q))  # [batch, n_head, seq_len_q, d_k]
        k = self.split_heads(self.wk(k))  # [batch, n_head, seq_len_k, d_k]
        v = self.split_heads(self.wv(v))  # [batch, n_head, seq_len_v, d_k]
        
        # 2. Attention scores: Q * K^T / √d_k
        # scores: [batch, n_head, seq_len_q, seq_len_k]
        scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.d_k)
        
        # 3. Apply the mask (padding mask and/or causal mask)
        if mask is not None:
            scores = scores.masked_fill(mask == 1, -1e9)
        
        # 4. Attention weights
        attn = self.softmax(scores)
        attn = self.dropout(attn)
        
        # 5. Weighted sum over V gives the context vectors
        # context: [batch, n_head, seq_len_q, d_k]
        context = torch.matmul(attn, v)
        
        # 6. Merge the heads and project
        context = context.transpose(1, 2).contiguous()    # [batch, seq_len_q, n_head, d_k]
        context = context.view(batch_size, -1, self.dim)  # [batch, seq_len_q, dim]
        
        return self.fc_out(context)  # [batch, seq_len_q, dim]

Processing steps

  1. Project: map Q/K/V through separate linear layers of the same dimension
  2. Split: divide the dimension into heads so attention runs over parallel subspaces
  3. Score: compute similarity scores between Q and K
  4. Mask: block out padding positions or future positions
  5. Weight: aggregate V with the attention weights
  6. Merge: concatenate the heads and apply the output projection
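
A toy forward pass (a sketch; positions where the boolean mask is True are hidden, matching the mask == 1 convention in the code above):

python
# Usage sketch: mask out the last key position as if it were padding.
import torch

mha = MultiHeadAttention(dim=16, n_head=4, dropout=0.0)
x = torch.randn(2, 5, 16)
mask = torch.zeros(2, 1, 5, 5, dtype=torch.bool)  # broadcasts over the head dimension
mask[:, :, :, -1] = True                          # hide the last key position
out = mha(x, x, x, mask=mask)
print(out.shape)                                  # torch.Size([2, 5, 16])
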
2.2.4 Position-wise Feed-Forward Network (PositionwiseFeedForward)

Each position's vector passes independently through the same two-layer linear transformation, which adds non-linear capacity to the model.

python
class PositionwiseFeedForward(nn.Module):
    def __init__(self, dim, d_ff, dropout=0.1):
        super(PositionwiseFeedForward, self).__init__()
        self.linear1 = nn.Linear(dim, d_ff)  # expand
        self.linear2 = nn.Linear(d_ff, dim)  # project back
        self.dropout = nn.Dropout(dropout)
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: [batch, seq_len, dim]
        # expand -> ReLU -> Dropout -> project back
        return self.linear2(self.dropout(self.relu(self.linear1(x))))

Design notes

  • Expanding first (dim→d_ff) and projecting back (d_ff→dim) increases model capacity
  • Each position's vector is processed independently, hence "position-wise"
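
The "position-wise" property can be checked directly (a sketch): permuting positions before the network gives the same result as permuting them afterwards.

python
# Sketch: the FFN commutes with position permutations.
import torch

ffn = PositionwiseFeedForward(dim=8, d_ff=32, dropout=0.0)
x = torch.randn(2, 5, 8)
perm = torch.randperm(5)
print(ffn(x).shape)                                      # torch.Size([2, 5, 8])
print(torch.allclose(ffn(x[:, perm]), ffn(x)[:, perm]))  # True
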
2.2.5 Encoder Layer (EncoderLayer)

A single encoder layer consists of "multi-head self-attention + residual connection + layer normalization + feed-forward network + residual connection + layer normalization".

python
class EncoderLayer(nn.Module):
    def __init__(self, dim, n_head, d_ff, dropout):
        super(EncoderLayer, self).__init__()
        self.self_attn = MultiHeadAttention(dim, n_head, dropout)
        self.feed_forward = PositionwiseFeedForward(dim, d_ff, dropout)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, mask):
        # Self-attention + residual connection + layer norm
        attn_out = self.self_attn(x, x, x, mask)  # Q = K = V = x
        x = self.norm1(x + self.dropout(attn_out))
        
        # Feed-forward network + residual connection + layer norm
        ff_out = self.feed_forward(x)
        return self.norm2(x + self.dropout(ff_out))

Key points

  • This is the post-norm arrangement from the original paper: LayerNorm is applied after each residual addition (a pre-norm variant would normalize before the sublayer instead, and is often more stable in deep stacks)
  • Residual connections counteract vanishing gradients, and layer normalization speeds up training
  • In self-attention Q = K = V, because the layer encodes the sequence against itself
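
A toy shape check (a sketch; the boolean padding mask has the [batch, 1, 1, seq_len] shape that create_masks builds in Section 2.2.7):

python
# Usage sketch with toy sizes and an all-False (nothing masked) padding mask.
import torch

enc = EncoderLayer(dim=16, n_head=4, d_ff=32, dropout=0.0)
x = torch.randn(2, 6, 16)
pad_mask = torch.zeros(2, 1, 1, 6, dtype=torch.bool)
print(enc(x, pad_mask).shape)  # torch.Size([2, 6, 16])
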
2.2.6 Decoder Layer (DecoderLayer)

Compared with an encoder layer, the decoder layer adds one extra cross-attention sublayer that attends to the encoder output.

python
class DecoderLayer(nn.Module):
    def __init__(self, dim, n_head, d_ff, dropout):
        super(DecoderLayer, self).__init__()
        self.self_attn = MultiHeadAttention(dim, n_head, dropout)   # masked self-attention
        self.cross_attn = MultiHeadAttention(dim, n_head, dropout)  # cross-attention
        self.feed_forward = PositionwiseFeedForward(dim, d_ff, dropout)
        
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, enc_out, tgt_mask, src_mask):
        # 1. Masked self-attention (blocks future positions)
        attn1 = self.self_attn(x, x, x, mask=tgt_mask)
        x = self.norm1(x + self.dropout(attn1))
        
        # 2. Cross-attention (attends to the encoder output)
        # Q = x, K = enc_out, V = enc_out
        attn2 = self.cross_attn(x, enc_out, enc_out, mask=src_mask)
        x = self.norm2(x + self.dropout(attn2))
        
        # 3. Feed-forward network
        ff_out = self.feed_forward(x)
        return self.norm3(x + self.dropout(ff_out))

How it differs

  • The first sublayer is masked self-attention: it stops the decoder from attending to future tokens
  • The second sublayer is cross-attention: Q comes from the decoder, K/V come from the encoder, letting the decoder attend to the input sequence
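
A toy run through one decoder layer (a sketch; the [4, 4] causal mask broadcasts across the batch and head dimensions):

python
# Usage sketch: a length-4 decoder state attending over a length-6 encoder output.
import torch

dec = DecoderLayer(dim=16, n_head=4, d_ff=32, dropout=0.0)
x = torch.randn(2, 4, 16)                                   # decoder states
enc_out = torch.randn(2, 6, 16)                             # encoder output
tgt_mask = torch.triu(torch.ones(4, 4), diagonal=1).bool()  # causal mask
src_mask = torch.zeros(2, 1, 4, 6, dtype=torch.bool)        # no source padding
print(dec(x, enc_out, tgt_mask, src_mask).shape)            # torch.Size([2, 4, 16])
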
2.2.7 The Complete Transformer (ModularTransformer)

All the components are assembled into the full encoder-decoder architecture.

python
class ModularTransformer(nn.Module):
    def __init__(self, vocab_size, max_len, dim, n_head, d_ff, 
                 n_enc_layers, n_dec_layers, dropout, padding_idx):
        super(ModularTransformer, self).__init__()
        
        # Embedding and positional encoding
        self.word_embedding = WordEmbedding(vocab_size, dim, padding_idx)
        self.pos_encoding = PositionalEncoding(dim, max_len, dropout)
        
        # Encoder stack: several encoder layers
        self.encoder_stack = nn.ModuleList([
            EncoderLayer(dim, n_head, d_ff, dropout)
            for _ in range(n_enc_layers)
        ])
        
        # Decoder stack: several decoder layers
        self.decoder_stack = nn.ModuleList([
            DecoderLayer(dim, n_head, d_ff, dropout)
            for _ in range(n_dec_layers)
        ])
        
        # Output projection: hidden dimension -> vocabulary size
        self.fc_out = nn.Linear(dim, vocab_size)
        self._init_weights()  # parameter initialization

    def _init_weights(self):
        """Xavier uniform initialization for all weight matrices."""
        for p in self.parameters():
            if p.dim() > 1:
                nn.init.xavier_uniform_(p)

    def generate_square_subsequent_mask(self, sz):
        """Upper-triangular mask for the decoder's masked self-attention."""
        mask = torch.triu(torch.ones(sz, sz), diagonal=1).bool()
        return mask

    def create_masks(self, src, trg, padding_idx):
        """Build every mask the model needs:
        - src_pad_mask: padding mask over the source sequence
        - tgt_mask: combined target mask (padding + causal)
        - src_key_mask: source mask for cross-attention
        """
        device = src.device
        # Source padding mask: [batch, 1, 1, src_len]
        src_pad_mask = (src == padding_idx).unsqueeze(1).unsqueeze(2)
        
        # Target padding mask: [batch, 1, 1, tgt_len]
        tgt_pad_mask = (trg == padding_idx).unsqueeze(1).unsqueeze(2)
        
        # Causal mask: [tgt_len, tgt_len]
        tgt_len = trg.size(1)
        causal_mask = self.generate_square_subsequent_mask(tgt_len).to(device)
        causal_mask = causal_mask.unsqueeze(0).unsqueeze(0)
        
        # Combined: [batch, 1, tgt_len, tgt_len]
        tgt_mask = causal_mask | tgt_pad_mask.expand(-1, -1, tgt_len, -1)
        
        # Source key mask for cross-attention: [batch, 1, tgt_len, src_len]
        src_key_mask = src_pad_mask.expand(-1, -1, tgt_len, -1)
        
        return src_pad_mask, tgt_mask, src_key_mask

    def forward(self, src_ids, trg_ids):
        """Forward pass: the full encode-decode pipeline."""
        # 1. Build all masks
        src_mask, tgt_mask, src_key_mask = self.create_masks(src_ids, trg_ids, PADDING_ID)
        
        # 2. Encode the source sequence
        src_emb = self.word_embedding(src_ids)  # word embedding
        src_emb = self.pos_encoding(src_emb)    # positional encoding
        enc_out = src_emb
        for layer in self.encoder_stack:        # layer by layer
            enc_out = layer(enc_out, src_mask)
            
        # 3. Decode the target sequence
        tgt_emb = self.word_embedding(trg_ids)  # word embedding
        tgt_emb = self.pos_encoding(tgt_emb)    # positional encoding
        dec_out = tgt_emb
        for layer in self.decoder_stack:        # layer by layer
            dec_out = layer(dec_out, enc_out, tgt_mask, src_key_mask)
            
        # 4. Project to the vocabulary
        return self.fc_out(dec_out)
    
    @torch.no_grad()
    def generate(self, src, max_len=50):
        """Inference: autoregressive generation of the target sequence."""
        self.eval()
        device = src.device
        batch_size = src.shape[0]
        
        # Initialize the target: [CLS] followed by PAD up to max_len
        tgt = torch.ones((batch_size, max_len), dtype=torch.long, device=device) * PADDING_ID
        tgt[:, 0] = CLS_ID  # the first token is CLS
        
        # Autoregressive loop: predict one token at a time
        for i in range(1, max_len):
            # Feed only the part generated so far [:i]
            output = self.forward(src, tgt[:, :i])
            # Take the prediction at the last position
            next_token = torch.argmax(output[:, -1, :], dim=-1)
            tgt[:, i] = next_token
            
            # Stop early once every sample has emitted SEP
            if (next_token == SEP_ID).all():
                break
        
        self.train()
        return tgt

End-to-end flow

  1. Encoding: the source sequence goes through embedding → positional encoding → the encoder stack, producing contextual representations
  2. Decoding: the target sequence goes through embedding → positional encoding → the decoder stack, which also attends to the encoder output
  3. Generation: tokens are produced autoregressively until a SEP token appears or the maximum length is reached
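
A smoke test with a tiny configuration (a sketch; the config constants must already be defined, since forward() reads PADDING_ID and generate() reads CLS_ID/SEP_ID as globals):

python
# Smoke-test sketch: random token IDs through the full model.
import torch

model = ModularTransformer(vocab_size=100, max_len=64, dim=32, n_head=4, d_ff=64,
                           n_enc_layers=1, n_dec_layers=1, dropout=0.0,
                           padding_idx=PADDING_ID)
src = torch.randint(5, 100, (2, 7))           # [batch, src_len], avoiding the special IDs
tgt = torch.randint(5, 100, (2, 9))           # [batch, tgt_len]
print(model(src, tgt).shape)                  # torch.Size([2, 9, 100]): logits per target position
print(model.generate(src, max_len=12).shape)  # torch.Size([2, 12])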

2.3 Tokenizer Training

A WordPiece tokenizer is trained with the HuggingFace Tokenizers library, which is far faster than a pure-Python implementation.

python
def train_tokenizer():
    os.makedirs(MODEL_SAVE_DIR, exist_ok=True)
    # Initialize a WordPiece tokenizer
    tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
    
    # Text normalization: NFD normalization -> lowercase -> strip whitespace
    tokenizer.normalizer = normalizers.Sequence([
        normalizers.NFD(), normalizers.Lowercase(), normalizers.Strip()
    ])
    
    # Pre-tokenization: split on Unicode script boundaries -> split on whitespace
    tokenizer.pre_tokenizer = pre_tokenizers.Sequence([
        pre_tokenizers.UnicodeScripts(), pre_tokenizers.WhitespaceSplit()
    ])
    
    # Special tokens (assigned IDs 0-4 in this order)
    special_tokens = ["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"]
    # Trainer configuration
    trainer = trainers.WordPieceTrainer(vocab_size=VOCAB_SIZE, special_tokens=special_tokens, min_frequency=2)
    
    # Collect the data files
    data_files = []
    if os.path.exists(DATA_DIR):
        for root, _, files in os.walk(DATA_DIR):
            for file in files:
                if file.endswith(".txt"):
                    data_files.append(os.path.join(root, file))
    
    # If no data files exist, create dummy data
    if not data_files:
        print(f"Warning: no .txt files found in {DATA_DIR}. Creating dummy data.")
        os.makedirs(DATA_DIR, exist_ok=True)
        dummy_path = os.path.join(DATA_DIR, "dummy_train.txt")
        with open(dummy_path, "w", encoding="utf-8") as f:
            for i in range(200):  # a bit more data volume
                f.write(f"你好\t你好,这是第{i}条回复\n")
                f.write(f"今天天气怎么样\t今天天气不错,适合出去玩\n")
                f.write(f"你叫什么名字\t我是一个人工智能助手\n")
                f.write(f"再见\t再见,祝你愉快\n")
        data_files = [dummy_path]

    print(f"Training tokenizer on {len(data_files)} data file(s)")
    tokenizer.train(data_files, trainer=trainer)
    
    # Save the tokenizer
    tokenizer_path = os.path.join(MODEL_SAVE_DIR, "dialogue_tokenizer.json")
    tokenizer.save(tokenizer_path)
    print(f"Tokenizer saved to: {tokenizer_path}")
    return tokenizer

Steps

  1. Initialize a WordPiece model with an unknown-token placeholder
  2. Configure the normalization and pre-tokenization rules
  3. Prepare the training data (dummy data is generated automatically if none exists)
  4. Train the tokenizer and save it
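
A quick round trip with the trained tokenizer (a sketch; the expected IDs follow from the special-token order above):

python
# Usage sketch: encode a sentence and inspect the special-token IDs.
from tokenizers import Tokenizer

tok = Tokenizer.from_file("./dialogue_model_save/dialogue_tokenizer.json")
enc = tok.encode("今天天气怎么样")
print(enc.tokens)  # the WordPiece pieces
print(enc.ids)     # their integer IDs
print(tok.token_to_id("[PAD]"), tok.token_to_id("[CLS]"), tok.token_to_id("[SEP]"))  # expected: 1 2 3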

2.4 Dialogue Dataset

A custom Dataset class reads the question-answer pairs and converts them into the format the model expects.

python
class DialogueDataset(Dataset):
    def __init__(self, data_path, tokenizer_path, max_input_len=MAX_INPUT_LEN, max_output_len=MAX_OUTPUT_LEN):
        self.max_input_len = max_input_len
        self.max_output_len = max_output_len
        self.tokenizer = Tokenizer.from_file(tokenizer_path)  # load the tokenizer
        self.pad_id = PADDING_ID
        
        self.data = []
        if not os.path.exists(data_path):
            raise FileNotFoundError(f"Data file not found: {data_path}")
            
        with open(data_path, "r", encoding="utf-8") as f:
            lines = f.readlines()
        
        print(f"\n===== Loading dialogue data =====")
        for line in tqdm(lines, desc="Processing"):
            line = line.strip()
            if not line or "\t" not in line:
                continue
            # Split the user input from the bot reply
            user_text, bot_text = line.split("\t", maxsplit=1)
            if len(user_text) == 0 or len(bot_text) == 0:
                continue
            
            # Encode to token IDs
            user_ids = self.tokenizer.encode(user_text).ids
            bot_ids = self.tokenizer.encode(bot_text).ids
            
            # Truncate to the maximum length
            input_ids = user_ids[:max_input_len]
            # Target format: [CLS] + reply + [SEP]
            target_ids = [CLS_ID] + bot_ids[:max_output_len-2] + [SEP_ID]
            
            self.data.append((input_ids, target_ids))
        
        print(f"Valid dialogue samples: {len(self.data)}")

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        input_ids, target_ids = self.data[index]
        # Pad to the fixed lengths
        input_ids = self._pad_sequence(input_ids, self.max_input_len)
        target_ids = self._pad_sequence(target_ids, self.max_output_len)
        return np.array(input_ids, dtype=np.int64), np.array(target_ids, dtype=np.int64)
    
    def _pad_sequence(self, seq, max_len):
        """Pad with PAD if too short; truncate if too long."""
        # Build a new list rather than `seq += ...`, which would mutate the list cached in self.data
        pad_len = max_len - len(seq)
        if pad_len > 0:
            seq = seq + [self.pad_id] * pad_len
        else:
            seq = seq[:max_len]
        return seq

Data format

  • One question-answer pair per line, separated by a tab: user input\tbot reply
  • The target sequence is wrapped with [CLS] at the start and [SEP] at the end, the input format the model expects
  • All sequences are padded or truncated to a fixed length so they can be batched
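
Tracing a single line through the same steps by hand makes the format concrete (a sketch, assuming the tokenizer file from Section 2.3 and the config constants):

python
# Walk-through sketch: what DialogueDataset.__init__ does to one line.
from tokenizers import Tokenizer

tok = Tokenizer.from_file("./dialogue_model_save/dialogue_tokenizer.json")
line = "你好\t你好呀,有什么可以帮助你的?"
user_text, bot_text = line.split("\t", maxsplit=1)
input_ids = tok.encode(user_text).ids[:MAX_INPUT_LEN]
target_ids = [CLS_ID] + tok.encode(bot_text).ids[:MAX_OUTPUT_LEN - 2] + [SEP_ID]
print(input_ids)   # source IDs, later padded to MAX_INPUT_LEN
print(target_ids)  # [CLS] ... [SEP], later padded to MAX_OUTPUT_LEN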

2.5 Helper Functions

A simple text-cleaning function that strips special tokens from decoded text.

python
def clean_text(text):
    """Strip special tokens from generated text."""
    special_tokens = ["[PAD]", "[CLS]", "[SEP]", "[MASK]", "[UNK]"]
    for token in special_tokens:
        text = text.replace(token, "").strip()
    return text
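
For example (a quick check of the function above):

python
print(clean_text("[CLS] 你好 , 这是回复 [SEP] [PAD] [PAD]"))
# -> 你好 , 这是回复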

2.6 Training Pipeline

The complete training loop: data loading, model construction, training, monitoring, and checkpointing.

python
def train_model():
    os.makedirs(MODEL_SAVE_DIR, exist_ok=True)
    
    print("\n===== Model configuration (Modular Transformer) =====")
    print(f"dim: {D_MODEL}, heads: {NHEAD}, layers: Enc={NUM_ENCODER_LAYERS}, Dec={NUM_DECODER_LAYERS}")
    print(f"FFN dim: {DIM_FEEDFORWARD}, Dropout: {DROPOUT}")
    
    tokenizer_path = os.path.join(MODEL_SAVE_DIR, "dialogue_tokenizer.json")
    
    # Locate the training data
    train_data_path = os.path.join(DATA_DIR, "train.txt")
    if not os.path.exists(train_data_path):
        files = [f for f in os.listdir(DATA_DIR) if f.endswith('.txt')]
        if files:
            train_data_path = os.path.join(DATA_DIR, files[0])
        else:
            raise FileNotFoundError("No training data files found")
    print(f"Using data file: {train_data_path}")

    # Dataset and data loader
    dataset = DialogueDataset(train_data_path, tokenizer_path)
    dataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=0)
    
    print(f"\n===== Training device =====")
    print(f"Device: {DEVICE}")
    
    # Build the model
    model = ModularTransformer(
        vocab_size=VOCAB_SIZE,
        max_len=MAX_POS_LEN,
        dim=D_MODEL,
        n_head=NHEAD,
        d_ff=DIM_FEEDFORWARD,
        n_enc_layers=NUM_ENCODER_LAYERS,
        n_dec_layers=NUM_DECODER_LAYERS,
        dropout=DROPOUT,
        padding_idx=PADDING_ID
    ).to(DEVICE)
    
    # Parameter count
    total_params = sum(p.numel() for p in model.parameters())
    print(f"Total parameters: {total_params:,} ({total_params/1e6:.2f} M)")

    # Optimizer and loss
    optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
    criterion = torch.nn.CrossEntropyLoss(ignore_index=PADDING_ID)  # ignore PAD tokens
    
    global_step = 0
    # Adapt the epoch count to the data volume: small datasets get fewer epochs to limit overfitting
    run_epochs = min(EPOCHS, 10) if len(dataset) < 100 else EPOCHS 
    print(f"Running {run_epochs} epoch(s) (configured maximum {EPOCHS})")

    # Training loop
    for epoch in range(run_epochs):
        model.train()
        pbar = tqdm(dataloader, desc=f"Epoch {epoch+1}/{run_epochs}")
        total_loss = 0.0
        
        for batch_idx, (src, tgt) in enumerate(pbar):
            global_step += 1
            src = src.to(DEVICE)
            tgt = tgt.to(DEVICE)
            
            optimizer.zero_grad()
            # Input:  tgt[:, :-1] (drop the last token)
            # Target: tgt[:, 1:]  (drop the first token)
            # The model learns to predict token i+1 from the first i tokens
            output = model(src, tgt[:, :-1])
            
            # Flatten for the loss
            output_flat = output.reshape(-1, VOCAB_SIZE)
            tgt_flat = tgt[:, 1:].reshape(-1)
            loss = criterion(output_flat, tgt_flat)
            
            # Backward pass and optimization
            loss.backward()
            # Gradient clipping guards against exploding gradients
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
            optimizer.step()
            
            # Loss bookkeeping
            total_loss += loss.item()
            avg_loss = total_loss / (batch_idx + 1)
            
            pbar.set_postfix({"loss": f"{loss.item():.4f}", "avg": f"{avg_loss:.4f}"})
            
            # Every 20 steps, print a generated sample to monitor progress
            if global_step % 20 == 0 or (epoch == 0 and batch_idx == 0):
                print(f"\n--- Step {global_step} (Epoch {epoch+1}) ---")
                # Generate a reply with the current weights
                pred_ids = model.generate(src, max_len=MAX_OUTPUT_LEN)
                
                sample_idx = 0
                # Decode and clean the texts
                src_text = clean_text(dataset.tokenizer.decode(src[sample_idx].cpu().numpy()))
                pred_raw = pred_ids[sample_idx].cpu().numpy()
                pred_text = clean_text(dataset.tokenizer.decode(pred_raw))
                
                real_raw = tgt[sample_idx, 1:].cpu().numpy()
                real_text = clean_text(dataset.tokenizer.decode(real_raw))
                
                print(f"Input:      {src_text}")
                print(f"Prediction: {pred_text}")
                print(f"Reference:  {real_text}")
                print("-" * 30)
            
            # Save a checkpoint every 500 steps
            if global_step % 500 == 0:
                torch.save(model.state_dict(), os.path.join(MODEL_SAVE_DIR, f"model_step_{global_step}.pt"))
        
        # Save a checkpoint at the end of every epoch
        epoch_avg_loss = total_loss / len(dataloader)
        torch.save(model.state_dict(), os.path.join(MODEL_SAVE_DIR, f"model_epoch_{epoch+1}.pt"))
        print(f"\nEpoch {epoch+1} done, average loss: {epoch_avg_loss:.4f}")

    # Save the final model
    torch.save(model.state_dict(), os.path.join(MODEL_SAVE_DIR, "model_final.pt"))
    print("\n===== Training complete =====")

# Entry point
if __name__ == "__main__":
    # 1. Train the tokenizer
    train_tokenizer()
    # 2. Train the model
    train_model()

Key training points

  1. Input/target shifting: the decoder input is tgt[:, :-1] and the target is tgt[:, 1:], which gives autoregressive (teacher-forced) training
  2. Loss: CrossEntropyLoss with ignore_index so PAD positions contribute nothing
  3. Gradient clipping prevents exploding gradients and keeps training stable
  4. Checkpointing: the model is saved by step count and by epoch, making it easy to resume training
  5. Live monitoring: a generated sample is printed every 20 steps so progress stays visible
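
Once training has finished, a checkpoint can be reloaded for standalone inference. This is a hedged sketch rather than part of the original script; it assumes the classes and config constants above are in scope and that model_final.pt exists.

python
# Inference sketch: answer one question with the saved final checkpoint.
import torch
from tokenizers import Tokenizer

tok = Tokenizer.from_file("./dialogue_model_save/dialogue_tokenizer.json")
model = ModularTransformer(
    vocab_size=VOCAB_SIZE, max_len=MAX_POS_LEN, dim=D_MODEL, n_head=NHEAD,
    d_ff=DIM_FEEDFORWARD, n_enc_layers=NUM_ENCODER_LAYERS,
    n_dec_layers=NUM_DECODER_LAYERS, dropout=DROPOUT, padding_idx=PADDING_ID,
).to(DEVICE)
model.load_state_dict(torch.load("./dialogue_model_save/model_final.pt", map_location=DEVICE))

question = "今天天气怎么样"
ids = tok.encode(question).ids[:MAX_INPUT_LEN]
ids = ids + [PADDING_ID] * (MAX_INPUT_LEN - len(ids))  # pad the same way the dataset does
src = torch.tensor([ids], dtype=torch.long, device=DEVICE)
pred = model.generate(src, max_len=MAX_OUTPUT_LEN)     # generate() switches to eval mode itself
print(clean_text(tok.decode(pred[0].cpu().tolist())))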

3. Complete Code

python
# -*- coding: utf-8 -*-
"""
Dialogue generation model training script (modular Transformer, integrated version).
Contains standalone Embedding, PosEnc, MHA, FFN, and Encoder/Decoder layer modules.
"""
import os
import random
import numpy as np
import torch
import torch.nn as nn
import math
from tqdm import tqdm
from torch.utils.data import Dataset, DataLoader
import torch.optim as optim
from tokenizers import (
    models, normalizers, pre_tokenizers, trainers, Tokenizer
)

# ===================== 1. Configuration =====================
DATA_DIR = "./dialogue_data"
MODEL_SAVE_DIR = "./dialogue_model_save"

# Core model parameters
VOCAB_SIZE = 10000
# Special-token IDs as assigned by train_tokenizer() below:
# [UNK]=0, [PAD]=1, [CLS]=2, [SEP]=3, [MASK]=4
PADDING_ID = 1
CLS_ID = 2
SEP_ID = 3
MAX_INPUT_LEN = 50
MAX_OUTPUT_LEN = 50

# Architecture parameters
D_MODEL = 256             # hidden dimension (must be divisible by NHEAD)
NHEAD = 8                 # number of attention heads (256 / 8 = 32 dims per head)
NUM_ENCODER_LAYERS = 3
NUM_DECODER_LAYERS = 3
DIM_FEEDFORWARD = 512     # feed-forward dimension
DROPOUT = 0.1
MAX_POS_LEN = 512         # maximum positional-encoding length

# Training parameters
BATCH_SIZE = 32
EPOCHS = 200
LEARNING_RATE = 1e-4

DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# ===================== 2. Modular Transformer core =====================

class WordEmbedding(nn.Module):
    def __init__(self, vocab_size, embed_dim, padding_idx):
        super(WordEmbedding, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=padding_idx)
        self.scale = math.sqrt(embed_dim)

    def forward(self, x):
        return self.embedding(x) * self.scale

class PositionalEncoding(nn.Module):
    def __init__(self, dim, max_len=5000, dropout=0.1):
        super(PositionalEncoding, self).__init__()
        self.dropout = nn.Dropout(p=dropout)
        pe = torch.zeros(max_len, dim)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, dim, 2).float() * (-math.log(10000.0) / dim))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        pe = pe.unsqueeze(0)
        self.register_buffer('pe', pe)

    def forward(self, x):
        # x: [batch, seq_len, dim]
        x = x + self.pe[:, :x.size(1), :]
        return self.dropout(x)

class MultiHeadAttention(nn.Module):
    def __init__(self, dim, n_head, dropout=0.1):
        super(MultiHeadAttention, self).__init__()
        assert dim % n_head == 0, "dim must be divisible by n_head"
        self.n_head = n_head
        self.dim = dim
        self.d_k = dim // n_head
        
        self.wq = nn.Linear(dim, dim)
        self.wk = nn.Linear(dim, dim)
        self.wv = nn.Linear(dim, dim)
        self.fc_out = nn.Linear(dim, dim)
        
        self.dropout = nn.Dropout(dropout)
        self.softmax = nn.Softmax(dim=-1)

    def split_heads(self, x):
        batch_size, seq_len, _ = x.size()
        return x.view(batch_size, seq_len, self.n_head, self.d_k).transpose(1, 2)

    def forward(self, q, k, v, mask=None):
        batch_size = q.size(0)
        q = self.split_heads(self.wq(q))
        k = self.split_heads(self.wk(k))
        v = self.split_heads(self.wv(v))
        
        scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.d_k)
        
        if mask is not None:
            scores = scores.masked_fill(mask == 1, -1e9)
        
        attn = self.softmax(scores)
        attn = self.dropout(attn)
        context = torch.matmul(attn, v)
        
        context = context.transpose(1, 2).contiguous().view(batch_size, -1, self.dim)
        return self.fc_out(context)

class PositionwiseFeedForward(nn.Module):
    def __init__(self, dim, d_ff, dropout=0.1):
        super(PositionwiseFeedForward, self).__init__()
        self.linear1 = nn.Linear(dim, d_ff)
        self.linear2 = nn.Linear(d_ff, dim)
        self.dropout = nn.Dropout(dropout)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.linear2(self.dropout(self.relu(self.linear1(x))))

class EncoderLayer(nn.Module):
    def __init__(self, dim, n_head, d_ff, dropout):
        super(EncoderLayer, self).__init__()
        self.self_attn = MultiHeadAttention(dim, n_head, dropout)
        self.feed_forward = PositionwiseFeedForward(dim, d_ff, dropout)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, mask):
        attn_out = self.self_attn(x, x, x, mask)
        x = self.norm1(x + self.dropout(attn_out))
        ff_out = self.feed_forward(x)
        return self.norm2(x + self.dropout(ff_out))

class DecoderLayer(nn.Module):
    def __init__(self, dim, n_head, d_ff, dropout):
        super(DecoderLayer, self).__init__()
        self.self_attn = MultiHeadAttention(dim, n_head, dropout)
        self.cross_attn = MultiHeadAttention(dim, n_head, dropout)
        self.feed_forward = PositionwiseFeedForward(dim, d_ff, dropout)
        
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, enc_out, tgt_mask, src_mask):
        # Self Attention
        attn1 = self.self_attn(x, x, x, mask=tgt_mask)
        x = self.norm1(x + self.dropout(attn1))
        
        # Cross Attention
        attn2 = self.cross_attn(x, enc_out, enc_out, mask=src_mask)
        x = self.norm2(x + self.dropout(attn2))
        
        # FFN
        ff_out = self.feed_forward(x)
        return self.norm3(x + self.dropout(ff_out))

class ModularTransformer(nn.Module):
    def __init__(self, vocab_size, max_len, dim, n_head, d_ff, 
                 n_enc_layers, n_dec_layers, dropout, padding_idx):
        super(ModularTransformer, self).__init__()
        
        self.word_embedding = WordEmbedding(vocab_size, dim, padding_idx)
        self.pos_encoding = PositionalEncoding(dim, max_len, dropout)
        
        self.encoder_stack = nn.ModuleList([
            EncoderLayer(dim, n_head, d_ff, dropout)
            for _ in range(n_enc_layers)
        ])
        
        self.decoder_stack = nn.ModuleList([
            DecoderLayer(dim, n_head, d_ff, dropout)
            for _ in range(n_dec_layers)
        ])
        
        self.fc_out = nn.Linear(dim, vocab_size)
        self._init_weights()

    def _init_weights(self):
        for p in self.parameters():
            if p.dim() > 1:
                nn.init.xavier_uniform_(p)

    def generate_square_subsequent_mask(self, sz):
        mask = torch.triu(torch.ones(sz, sz), diagonal=1).bool()
        return mask

    def create_masks(self, src, trg, padding_idx):
        device = src.device
        # Source Padding Mask: [batch, 1, 1, src_len]
        src_pad_mask = (src == padding_idx).unsqueeze(1).unsqueeze(2)
        
        # Target Padding Mask: [batch, 1, 1, tgt_len]
        tgt_pad_mask = (trg == padding_idx).unsqueeze(1).unsqueeze(2)
        
        # Causal Mask: [tgt_len, tgt_len]
        tgt_len = trg.size(1)
        causal_mask = self.generate_square_subsequent_mask(tgt_len).to(device)
        causal_mask = causal_mask.unsqueeze(0).unsqueeze(0)
        
        # Combine: [batch, 1, tgt_len, tgt_len]
        tgt_mask = causal_mask | tgt_pad_mask.expand(-1, -1, tgt_len, -1)
        
        # Source Key Mask for Cross Attn: [batch, 1, tgt_len, src_len]
        src_key_mask = src_pad_mask.expand(-1, -1, tgt_len, -1)
        
        return src_pad_mask, tgt_mask, src_key_mask

    def forward(self, src_ids, trg_ids):
        src_mask, tgt_mask, src_key_mask = self.create_masks(src_ids, trg_ids, PADDING_ID)
        
        # Encoder
        src_emb = self.word_embedding(src_ids)
        src_emb = self.pos_encoding(src_emb)
        enc_out = src_emb
        for layer in self.encoder_stack:
            enc_out = layer(enc_out, src_mask)
            
        # Decoder
        tgt_emb = self.word_embedding(trg_ids)
        tgt_emb = self.pos_encoding(tgt_emb)
        dec_out = tgt_emb
        for layer in self.decoder_stack:
            dec_out = layer(dec_out, enc_out, tgt_mask, src_key_mask)
            
        return self.fc_out(dec_out)
    
    @torch.no_grad()
    def generate(self, src, max_len=50):
        """推理模式生成"""
        self.eval()
        device = src.device
        batch_size = src.shape[0]
        
        # 初始化: [CLS] + [PAD]...
        tgt = torch.ones((batch_size, max_len), dtype=torch.long, device=device) * PADDING_ID
        tgt[:, 0] = CLS_ID
        
        for i in range(1, max_len):
            # 只传入已生成的部分 [:i]
            output = self.forward(src, tgt[:, :i])
            next_token = torch.argmax(output[:, -1, :], dim=-1)
            tgt[:, i] = next_token
            
            if (next_token == SEP_ID).all():
                break
        
        self.train()
        return tgt

# ===================== 3. Tokenizer training =====================
def train_tokenizer():
    os.makedirs(MODEL_SAVE_DIR, exist_ok=True)
    tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
    tokenizer.normalizer = normalizers.Sequence([
        normalizers.NFD(), normalizers.Lowercase(), normalizers.Strip()
    ])
    tokenizer.pre_tokenizer = pre_tokenizers.Sequence([
        pre_tokenizers.UnicodeScripts(), pre_tokenizers.WhitespaceSplit()
    ])
    
    special_tokens = ["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"]
    trainer = trainers.WordPieceTrainer(vocab_size=VOCAB_SIZE, special_tokens=special_tokens, min_frequency=2)
    
    data_files = []
    if os.path.exists(DATA_DIR):
        for root, _, files in os.walk(DATA_DIR):
            for file in files:
                if file.endswith(".txt"):
                    data_files.append(os.path.join(root, file))
    
    if not data_files:
        print(f"警告:在 {DATA_DIR} 未找到任何 .txt 文件。将创建模拟数据。")
        os.makedirs(DATA_DIR, exist_ok=True)
        dummy_path = os.path.join(DATA_DIR, "dummy_train.txt")
        with open(dummy_path, "w", encoding="utf-8") as f:
            for i in range(200):  # a bit more data volume
                f.write(f"你好\t你好,这是第{i}条回复\n")
                f.write(f"今天天气怎么样\t今天天气不错,适合出去玩\n")
                f.write(f"你叫什么名字\t我是一个人工智能助手\n")
                f.write(f"再见\t再见,祝你愉快\n")
        data_files = [dummy_path]

    print(f"开始训练分词器,数据文件数:{len(data_files)}")
    tokenizer.train(data_files, trainer=trainer)
    
    tokenizer_path = os.path.join(MODEL_SAVE_DIR, "dialogue_tokenizer.json")
    tokenizer.save(tokenizer_path)
    print(f"分词器已保存至:{tokenizer_path}")
    return tokenizer

# ===================== 4. Dialogue dataset =====================
class DialogueDataset(Dataset):
    def __init__(self, data_path, tokenizer_path, max_input_len=MAX_INPUT_LEN, max_output_len=MAX_OUTPUT_LEN):
        self.max_input_len = max_input_len
        self.max_output_len = max_output_len
        self.tokenizer = Tokenizer.from_file(tokenizer_path)
        self.pad_id = PADDING_ID
        
        self.data = []
        if not os.path.exists(data_path):
            raise FileNotFoundError(f"Data file not found: {data_path}")
            
        with open(data_path, "r", encoding="utf-8") as f:
            lines = f.readlines()
        
        print(f"\n===== 加载对话数据 =====")
        for line in tqdm(lines, desc="处理数据"):
            line = line.strip()
            if not line or "\t" not in line:
                continue
            user_text, bot_text = line.split("\t", maxsplit=1)
            if len(user_text) == 0 or len(bot_text) == 0:
                continue
            
            user_ids = self.tokenizer.encode(user_text).ids
            bot_ids = self.tokenizer.encode(bot_text).ids
            
            input_ids = user_ids[:max_input_len]
            # Target: [CLS] + bot_text + [SEP]
            target_ids = [CLS_ID] + bot_ids[:max_output_len-2] + [SEP_ID]
            
            self.data.append((input_ids, target_ids))
        
        print(f"有效对话样本数:{len(self.data)}")

    def __len__(self):
        return len(self.data)

    def __getitem__(self, index):
        input_ids, target_ids = self.data[index]
        input_ids = self._pad_sequence(input_ids, self.max_input_len)
        target_ids = self._pad_sequence(target_ids, self.max_output_len)
        return np.array(input_ids, dtype=np.int64), np.array(target_ids, dtype=np.int64)
    
    def _pad_sequence(self, seq, max_len):
        # Pad with PAD if too short; truncate if too long.
        # Build a new list rather than `seq += ...`, which would mutate the list cached in self.data.
        pad_len = max_len - len(seq)
        if pad_len > 0:
            seq = seq + [self.pad_id] * pad_len
        else:
            seq = seq[:max_len]
        return seq

# ===================== 5. Helper functions =====================
def clean_text(text):
    special_tokens = ["[PAD]", "[CLS]", "[SEP]", "[MASK]", "[UNK]"]
    for token in special_tokens:
        text = text.replace(token, "").strip()
    return text

# ===================== 6. Model training =====================
def train_model():
    os.makedirs(MODEL_SAVE_DIR, exist_ok=True)
    
    print("\n===== Model configuration (Modular Transformer) =====")
    print(f"dim: {D_MODEL}, heads: {NHEAD}, layers: Enc={NUM_ENCODER_LAYERS}, Dec={NUM_DECODER_LAYERS}")
    print(f"FFN dim: {DIM_FEEDFORWARD}, Dropout: {DROPOUT}")
    
    tokenizer_path = os.path.join(MODEL_SAVE_DIR, "dialogue_tokenizer.json")
    
    # Locate the training data
    train_data_path = os.path.join(DATA_DIR, "train.txt")
    if not os.path.exists(train_data_path):
        files = [f for f in os.listdir(DATA_DIR) if f.endswith('.txt')]
        if files:
            train_data_path = os.path.join(DATA_DIR, files[0])
        else:
            raise FileNotFoundError("No training data files found")
    print(f"Using data file: {train_data_path}")

    dataset = DialogueDataset(train_data_path, tokenizer_path)
    dataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True, num_workers=0)
    
    print(f"\n===== Training device =====")
    print(f"Device: {DEVICE}")
    
    # === Build the model ===
    model = ModularTransformer(
        vocab_size=VOCAB_SIZE,
        max_len=MAX_POS_LEN,
        dim=D_MODEL,
        n_head=NHEAD,
        d_ff=DIM_FEEDFORWARD,
        n_enc_layers=NUM_ENCODER_LAYERS,
        n_dec_layers=NUM_DECODER_LAYERS,
        dropout=DROPOUT,
        padding_idx=PADDING_ID
    ).to(DEVICE)
    
    # Parameter count
    total_params = sum(p.numel() for p in model.parameters())
    print(f"Total parameters: {total_params:,} ({total_params/1e6:.2f} M)")

    optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
    criterion = torch.nn.CrossEntropyLoss(ignore_index=PADDING_ID)
    
    global_step = 0
    # If the dataset is small, cap the epochs so the demo doesn't overfit for ages; otherwise run the full EPOCHS
    run_epochs = min(EPOCHS, 10) if len(dataset) < 100 else EPOCHS 
    print(f"Running {run_epochs} epoch(s) (configured maximum {EPOCHS})")

    for epoch in range(run_epochs):
        model.train()
        pbar = tqdm(dataloader, desc=f"Epoch {epoch+1}/{run_epochs}")
        total_loss = 0.0
        
        for batch_idx, (src, tgt) in enumerate(pbar):
            global_step += 1
            src = src.to(DEVICE)
            tgt = tgt.to(DEVICE)
            
            optimizer.zero_grad()
            # Input: tgt[:, :-1]; target: tgt[:, 1:]
            output = model(src, tgt[:, :-1])
            
            output_flat = output.reshape(-1, VOCAB_SIZE)
            tgt_flat = tgt[:, 1:].reshape(-1)
            loss = criterion(output_flat, tgt_flat)
            
            loss.backward()
            # Gradient clipping (optional, guards against exploding gradients)
            torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
            optimizer.step()
            
            total_loss += loss.item()
            avg_loss = total_loss / (batch_idx + 1)
            
            pbar.set_postfix({"loss": f"{loss.item():.4f}", "avg": f"{avg_loss:.4f}"})
            
            # Print a generated sample every 20 steps (avoids flooding the console)
            if global_step % 20 == 0 or (epoch == 0 and batch_idx == 0):
                print(f"\n--- Step {global_step} (Epoch {epoch+1}) ---")
                # Run inference with the current weights
                pred_ids = model.generate(src, max_len=MAX_OUTPUT_LEN)
                
                sample_idx = 0
                src_text = clean_text(dataset.tokenizer.decode(src[sample_idx].cpu().numpy()))
                # clean_text strips the special tokens from the decoded prediction
                pred_raw = pred_ids[sample_idx].cpu().numpy()
                pred_text = clean_text(dataset.tokenizer.decode(pred_raw))
                
                real_raw = tgt[sample_idx, 1:].cpu().numpy()  # drop the leading CLS
                real_text = clean_text(dataset.tokenizer.decode(real_raw))
                
                print(f"Input:      {src_text}")
                print(f"Prediction: {pred_text}")
                print(f"Reference:  {real_text}")
                print("-" * 30)
            
            if global_step % 500 == 0:
                torch.save(model.state_dict(), os.path.join(MODEL_SAVE_DIR, f"model_step_{global_step}.pt"))
        
        epoch_avg_loss = total_loss / len(dataloader)
        torch.save(model.state_dict(), os.path.join(MODEL_SAVE_DIR, f"model_epoch_{epoch+1}.pt"))
        print(f"\nEpoch {epoch+1} 完成,平均损失:{epoch_avg_loss:.4f}")

    torch.save(model.state_dict(), os.path.join(MODEL_SAVE_DIR, "model_final.pt"))
    print("\n===== 训练完成 =====")

if __name__ == "__main__":
    # 1. Train the tokenizer
    train_tokenizer()
    # 2. Train the model
    train_model()

4. Running the Code

4.1 Dependencies

bash
pip install torch numpy tqdm tokenizers

4.2 Data Preparation

  • Create a train.txt file under ./dialogue_data
  • One pair per line: user input\tbot reply (for example: 你好\t你好呀,有什么可以帮助你的?)
  • If no data is present, the code generates dummy data automatically for testing

4.3 Run

bash
python dialogue_transformer.py

4.4 Outputs

  • The tokenizer is saved to ./dialogue_model_save/dialogue_tokenizer.json
  • Model checkpoints are saved by step and by epoch under ./dialogue_model_save/
  • Loss values and generated samples are printed live during training

5. Summary

Key takeaways

  1. Modular Transformer implementation: Embedding, positional encoding, multi-head attention, and encoder/decoder layers built by hand, exposing how the Transformer works underneath
  2. Autoregressive training: the decoder input is the first n-1 target tokens and the target is the last n-1, so the model learns token-by-token prediction
  3. Masking: padding masks hide filler positions, and the causal mask stops the decoder from seeing future tokens
  4. Full training pipeline: tokenizer training, data processing, model training, and inference cover the whole development flow of an NLP model

Practical details

  1. Post-norm residual blocks: layer normalization is applied after each residual addition, as in the original Transformer paper
  2. Gradient clipping: prevents exploding gradients and keeps training stable
  3. Dynamic epoch count: the number of epochs adapts to the data volume to limit overfitting
  4. Autoregressive generation: inference produces one token at a time until SEP appears or the maximum length is reached

Working through this example gives you not just the core Transformer implementation but also the complete development flow of a dialogue generation model, from data processing through training to inference.
