Which large language models use Pre-Norm, and which use Post-Norm?

I have compiled 1000+ LLM interview questions; see

大模型面试题总结-CSDN博客 (LLM interview question summary on CSDN)

or

https://gitee.com/lilitom/ai_interview_questions/blob/master/README.md

It is best to copy the URL into your browser and open it there; the link may not open directly otherwise.


Now let's work through today's question:

Which large language models use Pre-Norm, and which use Post-Norm?

All of the code below is excerpted from the Hugging Face transformers library.

  • LLaMA

    The code is below; pay attention to the key comments.

    class LlamaDecoderLayer(nn.Module):
        def __init__(self, config: LlamaConfig, layer_idx: int):
            super().__init__()
            self.hidden_size = config.hidden_size

            self.self_attn = ...  # attention module, elided in this excerpt

            self.mlp = LlamaMLP(config)
            self.input_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)
            self.post_attention_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps)

        def forward(
            self,
            hidden_states: torch.Tensor,
            # ... other arguments elided ...
        ):
            residual = hidden_states
            # normalize the input first
            hidden_states = self.input_layernorm(hidden_states)
            # then compute attention
            hidden_states, self_attn_weights, present_key_value = self.self_attn(...)
            # add back the un-normalized residual
            hidden_states = residual + hidden_states

            # same pattern for the MLP: norm first, then MLP, then add the residual
            residual = hidden_states
            hidden_states = self.post_attention_layernorm(hidden_states)
            hidden_states = self.mlp(hidden_states)
            hidden_states = residual + hidden_states

            outputs = (hidden_states,)

            if output_attentions:
                outputs += (self_attn_weights,)

            if use_cache:
                outputs += (present_key_value,)

            return outputs

LLaMA is therefore Pre-Norm: each sub-layer normalizes its input before the attention/MLP computation, and the un-normalized residual is added back afterwards. The pattern is distilled in the sketch below.
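The decoder layer above boils down to a generic Pre-Norm residual block. This is a minimal sketch for illustration, not the transformers implementation; `sublayer` is a hypothetical stand-in for either the attention or the MLP module, and plain LayerNorm is used where LLaMA actually uses RMSNorm.

    import torch
    import torch.nn as nn

    class PreNormBlock(nn.Module):
        """Minimal Pre-Norm residual block: x + sublayer(norm(x))."""
        def __init__(self, hidden_size: int, sublayer: nn.Module):
            super().__init__()
            self.norm = nn.LayerNorm(hidden_size)  # LLaMA uses RMSNorm here
            self.sublayer = sublayer               # attention or MLP

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # normalize first, run the sub-layer, then add the untouched residual
            return x + self.sublayer(self.norm(x))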

  • Qwen

    The code is below; pay attention to the key comments.

    class Qwen2DecoderLayer(nn.Module):
        def __init__(self, config: Qwen2Config, layer_idx: int):
            super().__init__()
            self.hidden_size = config.hidden_size

            self.self_attn = QWEN2_ATTENTION_CLASSES[config._attn_implementation](config, layer_idx)

            self.mlp = Qwen2MLP(config)
            self.input_layernorm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
            self.post_attention_layernorm = Qwen2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)

        def forward(
            self,
            hidden_states: torch.Tensor,
            # ... other arguments elided ...
        ):
            residual = hidden_states
            # normalize the input first
            hidden_states = self.input_layernorm(hidden_states)

            # Self Attention
            hidden_states, self_attn_weights, present_key_value = self.self_attn(...)
            hidden_states = residual + hidden_states

            # Fully Connected: norm first, then MLP, then add the residual
            residual = hidden_states
            hidden_states = self.post_attention_layernorm(hidden_states)
            hidden_states = self.mlp(hidden_states)
            hidden_states = residual + hidden_states

            outputs = (hidden_states,)

            if output_attentions:
                outputs += (self_attn_weights,)

            if use_cache:
                outputs += (present_key_value,)

            return outputs

Same structure as LLaMA: Qwen is Pre-Norm.
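Note also that both LLaMA and Qwen normalize with RMSNorm (LlamaRMSNorm / Qwen2RMSNorm) rather than standard LayerNorm. Here is a minimal sketch of RMSNorm under the standard formulation (scale by the root mean square, with no mean subtraction and no bias); the class name is hypothetical and this is not the library code.

    import torch
    import torch.nn as nn

    class SimpleRMSNorm(nn.Module):
        """Sketch of RMSNorm: x * weight / rms(x), no mean-centering, no bias."""
        def __init__(self, hidden_size: int, eps: float = 1e-6):
            super().__init__()
            self.weight = nn.Parameter(torch.ones(hidden_size))
            self.eps = eps

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # mean of squares over the hidden dimension, then rescale
            variance = x.pow(2).mean(dim=-1, keepdim=True)
            x = x * torch.rsqrt(variance + self.eps)
            return self.weight * x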

  • BERT

    The code is below; pay attention to the key comments.

    class BertSelfOutput(nn.Module):
        def __init__(self, config):
            super().__init__()
            self.dense = nn.Linear(config.hidden_size, config.hidden_size)
            self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
            self.dropout = nn.Dropout(config.hidden_dropout_prob)

        def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor:
            hidden_states = self.dense(hidden_states)
            hidden_states = self.dropout(hidden_states)
            # the residual is added first, then LayerNorm is applied to the sum
            hidden_states = self.LayerNorm(hidden_states + input_tensor)
            return hidden_states

    class BertOutput(nn.Module):
        def __init__(self, config):
            super().__init__()
            self.dense = nn.Linear(config.intermediate_size, config.hidden_size)
            self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps)
            self.dropout = nn.Dropout(config.hidden_dropout_prob)

        def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor) -> torch.Tensor:
            hidden_states = self.dense(hidden_states)
            hidden_states = self.dropout(hidden_states)
            # same pattern: add the residual, then normalize
            hidden_states = self.LayerNorm(hidden_states + input_tensor)
            return hidden_states

BertSelfOutput is applied right after the attention computation and BertOutput right after the feed-forward layer: the sub-layer output is added to its input (the residual) first, and only then passed through LayerNorm. BERT is therefore Post-Norm. A distilled sketch of this pattern follows below.
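For contrast with the Pre-Norm block sketched earlier, here is a minimal Post-Norm residual block distilled from the BERT pattern; again a sketch for illustration, not the transformers code, with `sublayer` as a hypothetical stand-in for attention or the feed-forward module.

    import torch
    import torch.nn as nn

    class PostNormBlock(nn.Module):
        """Minimal Post-Norm residual block: norm(x + sublayer(x))."""
        def __init__(self, hidden_size: int, sublayer: nn.Module):
            super().__init__()
            self.norm = nn.LayerNorm(hidden_size)
            self.sublayer = sublayer  # attention or feed-forward

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # run the sub-layer on the raw input, add the residual, then normalize the sum
            return self.norm(x + self.sublayer(x))

The only difference from the Pre-Norm block is whether the normalization is applied to the sub-layer's input or to the sum after the residual addition.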
