[Transformer Series (2)] Multi-Head Self-Attention

1. Multi-Head Self-Attention

Multi-head self-attention differs from single-head self-attention in that the Q, K, and V vectors are split along the hidden dimension into num_heads parts (heads); attention is then computed independently within each head, and the head outputs are concatenated back together.

Implementation steps

(1) Split the single head into multiple heads according to the num_heads parameter, obtaining the Q, K, V values for each head

(2) Matrix-multiply Q by the transpose of K, then apply softmax to obtain the attention weights

(3) Matrix-multiply the attention weights by V to obtain the output (the standard formulation of these steps is sketched after this list)
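
For reference, these three steps correspond to the standard multi-head attention formulation from "Attention Is All You Need"; note that the NumPy walk-through below omits the 1/sqrt(d_k) scaling and the learned projection matrices W_i^Q, W_i^K, W_i^V, W^O for simplicity:

latex
\mathrm{Attention}(Q,K,V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V

\mathrm{head}_i = \mathrm{Attention}(QW_i^{Q},\ KW_i^{K},\ VW_i^{V})

\mathrm{MultiHead}(Q,K,V) = \mathrm{Concat}(\mathrm{head}_1,\ldots,\mathrm{head}_h)\,W^{O}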

二、代码实现

(1) Split the single head into multiple heads according to the num_heads parameter, obtaining the Q, K, V values for each head

python

import numpy as np

# number of tokens (sequence length)
values_length = 33
# original (single-head) hidden size
hidden_size = 768
# single-head Q, K, V
# [33,768]
Query = np.random.rand(values_length, hidden_size)
Key = np.random.rand(values_length, hidden_size)
Value = np.random.rand(values_length, hidden_size)

# single head -> split into 8 heads
# [33,768] -> [33,8,96]
# number of heads
num_attention_heads = 8
# per-head size after splitting the original single head (768 // 8 = 96)
attention_head_size = hidden_size // num_attention_heads
Query = np.reshape(Query, [values_length, num_attention_heads, attention_head_size])
Key = np.reshape(Key, [values_length, num_attention_heads, attention_head_size])
Value = np.reshape(Value, [values_length, num_attention_heads, attention_head_size])

# [33,8,96] -> [8,33,96]: move the head axis to the front (heads, tokens, per-head size)
Query = np.transpose(Query, [1, 0, 2])
Key = np.transpose(Key, [1, 0, 2])
Value = np.transpose(Value, [1, 0, 2])
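
Note that in a real Transformer, Q, K and V are not random matrices; they are produced by multiplying the input token embeddings with learned projection weights. A minimal sketch of that step, assuming a single input X and hypothetical weight matrices W_q, W_k, W_v (random here, purely for illustration):

python

# minimal sketch: in practice Q, K, V come from learned linear projections of the input
# X: token embeddings, shape [33, 768]
X = np.random.rand(values_length, hidden_size)
# hypothetical projection weights (learned in a real model, random here for illustration)
W_q = np.random.rand(hidden_size, hidden_size)
W_k = np.random.rand(hidden_size, hidden_size)
W_v = np.random.rand(hidden_size, hidden_size)
Q_proj = X @ W_q   # [33, 768], would replace the random Query above
K_proj = X @ W_k   # [33, 768]
V_proj = X @ W_v   # [33, 768]
# the split into [33, 8, 96] and the transpose to [8, 33, 96] then proceed exactly as above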

(2) Matrix-multiply Q by the transpose of K, then apply softmax to obtain the attention weights

python
# Q @ K^T -> raw attention scores
# [8,33,96] @ [8,96,33] -> [8,33,33] [m1,n] @ [n,m2] -> [m1,m2]
scores = Query @ np.transpose(Key, [0, 2, 1])
print(np.shape(scores))
# softmax over the last axis -> attention weights
scores = soft_max(scores)
print(np.shape(scores))
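
One detail worth noting: standard scaled dot-product attention divides the scores by sqrt(d_k) (here sqrt(attention_head_size) = sqrt(96)) before the softmax so the logits do not grow with the head size; the walk-through above omits this scaling. A minimal sketch of the scaled variant (soft_max is the row-wise softmax helper defined in section 3, Complete Code):

python

# scaled dot-product attention: divide by sqrt(per-head size) before the softmax
scores = Query @ np.transpose(Key, [0, 2, 1])    # [8,33,33]
scores = scores / np.sqrt(attention_head_size)   # scale by 1/sqrt(96)
scores = soft_max(scores)                        # attention weights, each row sums to 1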

(3) Matrix-multiply the attention weights by V to obtain the output

python
# attention weights @ V -> output
# [8,33,33] @ [8,33,96] -> [8,33,96] [m1,n] @ [n,m2] -> [m1,m2]
out = scores @ Value
print(np.shape(out))
# [8,33,96] -> [33,8,96]
out = np.transpose(out, [1, 0, 2])
print(np.shape(out))
# [33,8,96] -> [33,768]: concatenate the heads back into the original hidden size
out = np.reshape(out, [values_length, hidden_size])
print(np.shape(out))
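
In a full Transformer block the concatenated [33, 768] result is additionally passed through a learned output projection W^O before the residual connection. A minimal sketch of that final step, with a hypothetical W_o (random here, purely for illustration):

python

# hypothetical output projection (learned in a real model, random here for illustration)
W_o = np.random.rand(hidden_size, hidden_size)   # [768, 768]
projected = out @ W_o                            # [33, 768] -> [33, 768]
print(np.shape(projected))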

3. Complete Code

python
# multi-head self-attention #
# by liushuai #
# 2024/2/6 #

import numpy as np

def soft_max(z):
    # row-wise softmax over the last axis
    t = np.exp(z - np.max(z, axis=-1, keepdims=True))  # subtract the max for numerical stability
    return t / np.sum(t, axis=-1, keepdims=True)

# number of tokens (sequence length)
# analogous to H*W in an image feature map
values_length = 33
# original (single-head) hidden size
# analogous to the Channels dimension
hidden_size = 768
# single-head Q, K, V
# [33,768]
Query = np.random.rand(values_length, hidden_size)
Key = np.random.rand(values_length, hidden_size)
Value = np.random.rand(values_length, hidden_size)

# single head -> split into 8 heads
# [33,768] -> [33,8,96]
# number of heads
num_attention_heads = 8
# per-head size after splitting the original single head (768 // 8 = 96)
attention_head_size = hidden_size // num_attention_heads
Query = np.reshape(Query, [values_length, num_attention_heads, attention_head_size])
Key = np.reshape(Key, [values_length, num_attention_heads, attention_head_size])
Value = np.reshape(Value, [values_length, num_attention_heads, attention_head_size])

# [33,8,96] -> [8,33,96]: move the head axis to the front (heads, tokens, per-head size)
Query = np.transpose(Query, [1, 0, 2])
Key = np.transpose(Key, [1, 0, 2])
Value = np.transpose(Value, [1, 0, 2])

# Q @ K^T -> raw attention scores
# [8,33,96] @ [8,96,33] -> [8,33,33] [m1,n] @ [n,m2] -> [m1,m2]
scores = Query @ np.transpose(Key, [0, 2, 1])
print(np.shape(scores))
# softmax over the last axis -> attention weights
scores = soft_max(scores)
print(np.shape(scores))

# attention weights @ V -> output
# [8,33,33] @ [8,33,96] -> [8,33,96] [m1,n] @ [n,m2] -> [m1,m2]
out = scores @ Value
print(np.shape(out))
# [8,33,96] -> [33,8,96]
out = np.transpose(out, [1, 0, 2])
print(np.shape(out))
# [33,8,96] -> [33,768]: concatenate the heads back into the original hidden size
out = np.reshape(out, [values_length, hidden_size])
print(np.shape(out))
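
For comparison, in practice one would normally rely on a library implementation rather than hand-rolled NumPy. A minimal sketch using PyTorch's nn.MultiheadAttention, which also includes the learned projections and the 1/sqrt(d_k) scaling that the NumPy walk-through omits:

python

import torch
import torch.nn as nn

# 33 tokens, hidden size 768, 8 heads; add a batch dimension of 1
x = torch.rand(1, 33, 768)
mha = nn.MultiheadAttention(embed_dim=768, num_heads=8, batch_first=True)
# self-attention: query = key = value = x
out, attn_weights = mha(x, x, x)
print(out.shape)            # torch.Size([1, 33, 768])
print(attn_weights.shape)   # torch.Size([1, 33, 33]), averaged over heads by default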