MindSpore Open Course - GPT-2

GPT-2 Masked Self-Attention

GPT-2 Self-attention: 1- Creating queries, keys, and values
python
import numpy as np
import mindspore
from mindspore import Tensor, ops
from mindnlp._legacy.functional import split
from mindnlp.models.utils.utils import Conv1D

batch_size = 1
seq_len = 10
embed_dim = 768

# random hidden states standing in for the token embeddings of one sequence
x = Tensor(np.random.randn(batch_size, seq_len, embed_dim), mindspore.float32)

# a single Conv1D projection produces query, key and value concatenated
# along the last axis; split() separates them again
c_attn = Conv1D(3 * embed_dim, embed_dim)
query, key, value = split(c_attn(x), embed_dim, axis=2)
query.shape, key.shape, value.shape

def split_heads(tensor, num_heads, attn_head_size):
    """
    Splits hidden_size dim into attn_head_size and num_heads
    """
    new_shape = tensor.shape[:-1] + (num_heads, attn_head_size)
    tensor = tensor.view(new_shape)
    return ops.transpose(tensor, (0, 2, 1, 3))  # (batch, head, seq_length, head_features)

num_heads = 12
head_dim = embed_dim // num_heads

query = split_heads(query, num_heads, head_dim)
key = split_heads(key, num_heads, head_dim)
value = split_heads(value, num_heads, head_dim)

query.shape, key.shape, value.shape
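
At this point query, key and value each have shape (batch, head, seq_length, head_features) = (1, 12, 10, 64): the 768 hidden units are split across 12 heads of 64 features each. A quick sanity check (a minimal sketch, assuming the variables defined in the cell above):

python
# continues from the cell above; all three tensors share the per-head layout
assert query.shape == (batch_size, num_heads, seq_len, head_dim)
assert key.shape == (batch_size, num_heads, seq_len, head_dim)
assert value.shape == (batch_size, num_heads, seq_len, head_dim)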
GPT-2 Self-attention: 2- Scoring
python
# raw attention scores: dot product of every query with every key
attn_weights = ops.matmul(query, key.swapaxes(-1, -2))

attn_weights.shape

max_positions = seq_len

# lower-triangular matrix of ones, reshaped to (1, 1, seq, seq) and cast to bool
bias = Tensor(np.tril(np.ones((max_positions, max_positions))).reshape(
              (1, 1, max_positions, max_positions)), mindspore.bool_)
bias
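
With max_positions = 10, bias is a 1 × 1 × 10 × 10 lower-triangular boolean matrix: entry (i, j) is True only when j ≤ i, so every position can attend to itself and to earlier positions but never to later ones. This is the causal mask that makes GPT-2's self-attention "masked".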
python
from mindnlp._legacy.functional import where, softmax

# scale the scores by sqrt(head_dim) to keep the softmax well-behaved
attn_weights = attn_weights / ops.sqrt(ops.scalar_to_tensor(value.shape[-1]))
query_length, key_length = query.shape[-2], key.shape[-2]
causal_mask = bias[:, :, key_length - query_length: key_length, :key_length].bool()
# replace scores at masked (future) positions with the most negative float32,
# so they become (effectively) zero after the softmax
mask_value = Tensor(np.finfo(np.float32).min, dtype=attn_weights.dtype)
attn_weights = where(causal_mask, attn_weights, mask_value)

np.finfo(np.float32).min

attn_weights[0, 0]

# normalize each query position's scores into attention probabilities
attn_weights = softmax(attn_weights, axis=-1)
attn_weights.shape

attn_weights[0, 0]

# weighted sum of the value vectors
attn_output = ops.matmul(attn_weights, value)

attn_output.shape
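
Taken together, the scoring, masking and weighting steps compute the standard masked scaled dot-product attention for every head; in the usual additive-mask notation (equivalent to the where() replacement used above):

$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}} + M\right)V
$$

where $d_k$ is the per-head dimension (64 here) and $M$ is 0 at allowed positions and a very large negative number at masked (future) positions.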
GPT-2 Self-attention: 3.5- Merge attention heads
python
def merge_heads(tensor, num_heads, attn_head_size):
    """
    Merges attn_head_size dim and num_attn_heads dim into hidden_size
    """
    tensor = ops.transpose(tensor, (0, 2, 1, 3))
    new_shape = tensor.shape[:-2] + (num_heads * attn_head_size,)
    return tensor.view(new_shape)

attn_output = merge_heads(attn_output, num_heads, head_dim)

attn_output.shape
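
merge_heads is simply the inverse of split_heads: the heads are transposed back next to their features and concatenated, so attn_output goes from (1, 12, 10, 64) back to (1, 10, 768).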
GPT-2 Self-attention: 4- Projecting
python
# final linear projection back to the model's hidden size
c_proj = Conv1D(embed_dim, embed_dim)
attn_output = c_proj(attn_output)
attn_output.shape
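
For reference, the steps above can be collected into a single function. This is a minimal sketch that reuses the c_attn, c_proj, bias, split_heads and merge_heads objects defined earlier; a real GPT-2 attention module additionally handles details such as dropout and key/value caching, which are omitted here.

python
def masked_self_attention(x, c_attn, c_proj, bias, num_heads):
    """Minimal sketch of GPT-2 masked self-attention, tying the steps above together."""
    embed_dim = x.shape[-1]
    head_dim = embed_dim // num_heads

    # 1. project to query/key/value and split into heads
    query, key, value = split(c_attn(x), embed_dim, axis=2)
    query = split_heads(query, num_heads, head_dim)
    key = split_heads(key, num_heads, head_dim)
    value = split_heads(value, num_heads, head_dim)

    # 2. scaled scores, causal mask, softmax
    attn_weights = ops.matmul(query, key.swapaxes(-1, -2))
    attn_weights = attn_weights / ops.sqrt(ops.scalar_to_tensor(head_dim))
    query_length, key_length = query.shape[-2], key.shape[-2]
    causal_mask = bias[:, :, key_length - query_length: key_length, :key_length].bool()
    mask_value = Tensor(np.finfo(np.float32).min, dtype=attn_weights.dtype)
    attn_weights = where(causal_mask, attn_weights, mask_value)
    attn_weights = softmax(attn_weights, axis=-1)

    # 3. weighted sum over values, merge heads, output projection
    attn_output = ops.matmul(attn_weights, value)
    attn_output = merge_heads(attn_output, num_heads, head_dim)
    return c_proj(attn_output)

out = masked_self_attention(x, c_attn, c_proj, bias, num_heads)
out.shape  # expected: (1, 10, 768)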