MindSpore Open Course - GPT-2

GPT-2 Masked Self-Attention

GPT-2 Self-attention: 1- Creating queries, keys, and values
python
import numpy as np
import mindspore
from mindspore import Tensor, ops

batch_size = 1
seq_len = 10
embed_dim = 768

# Dummy hidden states: (batch, seq_len, embed_dim)
x = Tensor(np.random.randn(batch_size, seq_len, embed_dim), mindspore.float32)

from mindnlp._legacy.functional import split
from mindnlp.models.utils.utils import Conv1D

# A single Conv1D maps embed_dim -> 3 * embed_dim; the result is split
# along the last axis into query, key and value.
c_attn = Conv1D(3 * embed_dim, embed_dim)
query, key, value = split(c_attn(x), embed_dim, axis=2)
query.shape, key.shape, value.shape  # each (1, 10, 768)

def split_heads(tensor, num_heads, attn_head_size):
    """
    Splits hidden_size dim into attn_head_size and num_heads
    """
    new_shape = tensor.shape[:-1] + (num_heads, attn_head_size)
    tensor = tensor.view(new_shape)
    return ops.transpose(tensor, (0, 2, 1, 3))  # (batch, head, seq_length, head_features)

num_heads = 12
head_dim = embed_dim // num_heads  # 768 // 12 = 64

query = split_heads(query, num_heads, head_dim)
key = split_heads(key, num_heads, head_dim)
value = split_heads(value, num_heads, head_dim)

query.shape, key.shape, value.shape  # each (1, 12, 10, 64)
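
To make the reshape explicit, here is a small NumPy-only sketch (an illustration under the same sizes, not part of the course code) that mirrors split_heads and checks the resulting shape:

python
import numpy as np

def split_heads_np(tensor, num_heads, attn_head_size):
    # (batch, seq_len, hidden) -> (batch, num_heads, seq_len, attn_head_size)
    new_shape = tensor.shape[:-1] + (num_heads, attn_head_size)
    return tensor.reshape(new_shape).transpose(0, 2, 1, 3)

x_np = np.random.randn(1, 10, 768).astype(np.float32)
print(split_heads_np(x_np, 12, 64).shape)  # (1, 12, 10, 64)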
GPT-2 Self-attention: 2- Scoring
python
# Raw attention scores: (batch, head, seq_len, seq_len)
attn_weights = ops.matmul(query, key.swapaxes(-1, -2))

attn_weights.shape

max_positions = seq_len

# Lower-triangular causal mask: token i may only attend to tokens 0..i
bias = Tensor(np.tril(np.ones((max_positions, max_positions))).reshape(
              (1, 1, max_positions, max_positions)), mindspore.bool_)
bias
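
For intuition, the same np.tril construction on a tiny max_positions makes the causal pattern easy to read (illustration only, not course code):

python
import numpy as np

# Row i has ones up to column i: token i may attend to tokens 0..i only
print(np.tril(np.ones((4, 4), dtype=bool)))
# [[ True False False False]
#  [ True  True False False]
#  [ True  True  True False]
#  [ True  True  True  True]]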
python
from mindnlp._legacy.functional import where, softmax

# Scale by sqrt(head_dim), then replace scores at masked (future) positions
# with the most negative float32 value so they vanish after softmax.
attn_weights = attn_weights / ops.sqrt(ops.scalar_to_tensor(value.shape[-1]))
query_length, key_length = query.shape[-2], key.shape[-2]
causal_mask = bias[:, :, key_length - query_length: key_length, :key_length].bool()
mask_value = Tensor(np.finfo(np.float32).min, dtype=attn_weights.dtype)
attn_weights = where(causal_mask, attn_weights, mask_value)

np.finfo(np.float32).min

attn_weights[0, 0]


# Softmax over the key dimension turns scores into attention probabilities
attn_weights = softmax(attn_weights, axis=-1)
attn_weights.shape

attn_weights[0, 0]

# Weighted sum of the values: (batch, head, seq_len, head_dim)
attn_output = ops.matmul(attn_weights, value)

attn_output.shape
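
As a sanity check, a NumPy sketch of the same scale-mask-softmax recipe (toy data, not the course code) shows that each row sums to 1 and no probability mass lands on future positions:

python
import numpy as np

scores = np.random.randn(10, 10) / np.sqrt(64)             # toy scores for one head
mask = np.tril(np.ones((10, 10), dtype=bool))
scores = np.where(mask, scores, np.finfo(np.float32).min)   # mask out the future
probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)

print(np.allclose(probs.sum(axis=-1), 1.0))  # True
print(np.allclose(probs[~mask], 0.0))        # True: no attention to the future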
GPT-2 Self-attention: 3.5- Merge attention heads
python
def merge_heads(tensor, num_heads, attn_head_size):
    """
    Merges attn_head_size dim and num_attn_heads dim into hidden_size
    """
    tensor = ops.transpose(tensor, (0, 2, 1, 3))
    new_shape = tensor.shape[:-2] + (num_heads * attn_head_size,)
    return tensor.view(new_shape)

attn_output = merge_heads(attn_output, num_heads, head_dim)

attn_output.shape  # (1, 10, 768)
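
A quick NumPy round trip (illustrative only) confirms that merging the heads is the exact inverse of the earlier split:

python
import numpy as np

x_np = np.random.randn(1, 10, 768).astype(np.float32)
heads = x_np.reshape(1, 10, 12, 64).transpose(0, 2, 1, 3)  # split: (1, 12, 10, 64)
merged = heads.transpose(0, 2, 1, 3).reshape(1, 10, 768)   # merge: (1, 10, 768)
print(np.array_equal(merged, x_np))  # True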
GPT-2 Self-attention: 4- Projecting
python
# Final output projection back to embed_dim
c_proj = Conv1D(embed_dim, embed_dim)
attn_output = c_proj(attn_output)
attn_output.shape  # (1, 10, 768)
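
Despite the name, GPT-2's Conv1D behaves like a position-wise linear layer. The NumPy sketch below (with a made-up weight and bias, not the mindnlp implementation) shows the equivalent computation for this output projection, assuming the usual (in_features, out_features) weight layout:

python
import numpy as np

hidden = np.random.randn(1, 10, 768).astype(np.float32)  # merged attention output
W = 0.02 * np.random.randn(768, 768).astype(np.float32)  # hypothetical projection weight
b = np.zeros(768, dtype=np.float32)                      # hypothetical projection bias

projected = hidden @ W + b
print(projected.shape)  # (1, 10, 768): same shape in, same shape out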