Understanding the Transformer

Regarding the Transformer: the QKV formulation suggests that it behaves like a learnable query system. Perhaps earlier search-engine algorithms are related to this idea, or some branch of search algorithms works in a similar way.


Can anyone help me to understand this image? - #2 by J_Johnson - nlp - PyTorch Forums

Embeddings - these are learnable weights where each token (a token could be a word, sentence piece, subword, character, etc.) is converted into a vector, say with 500 real-valued entries that are trainable.
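A minimal PyTorch sketch of the embedding step; the vocabulary size and the 500-dimensional width below are illustrative choices, not values from the post:

```python
import torch
import torch.nn as nn

# Hypothetical sizes: a 10,000-token vocabulary, each token
# mapped to a 500-dimensional trainable vector.
vocab_size, d_model = 10_000, 500

embedding = nn.Embedding(vocab_size, d_model)

# One sequence of 4 token ids.
token_ids = torch.tensor([[5, 42, 7, 999]])
vectors = embedding(token_ids)   # shape (1, 4, 500)

# The lookup table itself is a learnable weight matrix:
assert embedding.weight.requires_grad
```

The embedding table is just a `(vocab_size, d_model)` weight matrix that gradient descent updates like any other layer.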

Positional Encoding - for each token, we want to tell the model where it sits in the sequence, because attention and linear layers are order-agnostic on their own. So we pass this in manually by adding a fixed vector of sine and cosine values (sines at the even indices, cosines at the odd indices, at different frequencies) to the embedding vector.
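The sinusoidal scheme from "Attention Is All You Need" can be sketched like this (assumes an even `d_model`):

```python
import math
import torch

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Fixed sine/cosine positional encoding (d_model must be even)."""
    position = torch.arange(seq_len).unsqueeze(1)          # (seq_len, 1)
    # Geometric progression of frequencies across the dimensions.
    div_term = torch.exp(
        torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model)
    )
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)   # even dims: sine
    pe[:, 1::2] = torch.cos(position * div_term)   # odd dims: cosine
    return pe

pe = sinusoidal_positional_encoding(seq_len=8, d_model=16)
# Usage: x = embedding(token_ids) + pe  (added, not concatenated)
```

Each position gets a unique pattern, and because the frequencies vary smoothly, nearby positions get similar vectors.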

This sequence of vectors goes through an attention layer, which is essentially a learnable, differentiable database search function with queries, keys and values. In this case, we are "searching" for the information most useful for predicting the next token.
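The "search" is scaled dot-product attention: each query is compared against every key, the similarities are softmaxed into weights, and those weights mix the values. A minimal sketch with made-up shapes:

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # Similarity of each query to every key, scaled by sqrt(d_k)
    # to keep the softmax in a well-behaved range.
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    weights = F.softmax(scores, dim=-1)   # each row sums to 1
    return weights @ v                    # weighted mix of values

# Hypothetical shapes: 1 sequence, 4 tokens, head dimension 8.
q = torch.randn(1, 4, 8)
k = torch.randn(1, 4, 8)
v = torch.randn(1, 4, 8)
out = scaled_dot_product_attention(q, k, v)   # shape (1, 4, 8)
```

In a real Transformer, Q, K and V are produced from the same input by three learned linear projections, which is what makes the "database" learnable.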

The Feed Forward is just a basic two-layer linear block, but it is applied to each position in the sequence independently (i.e. it operates on the last dimension of a 3-D tensor rather than a 2-D one).
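A sketch of the position-wise feed-forward block; the 512/2048 widths are the sizes used in the original paper, chosen here for illustration:

```python
import torch
import torch.nn as nn

d_model, d_ff = 512, 2048   # widths from the original paper

ffn = nn.Sequential(
    nn.Linear(d_model, d_ff),
    nn.ReLU(),
    nn.Linear(d_ff, d_model),
)

# The same MLP is applied at every position independently:
# nn.Linear acts on the last dim and broadcasts over the rest.
x = torch.randn(1, 4, d_model)   # (batch, seq_len, d_model)
y = ffn(x)                       # (batch, seq_len, d_model)
```

Because `nn.Linear` broadcasts over leading dimensions, no explicit loop over positions is needed.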

Then the final Linear layer is where we get our predicted next token: it projects each vector back to vocabulary size, producing a vector of scores, and we apply a softmax to turn those scores into probabilities between 0 and 1 that sum to 1.
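The output head can be sketched as a projection to vocabulary size followed by a softmax (sizes again hypothetical):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, vocab_size = 512, 10_000
lm_head = nn.Linear(d_model, vocab_size)

hidden = torch.randn(1, 4, d_model)      # decoder output for 4 positions
logits = lm_head(hidden)                 # (1, 4, vocab_size) raw scores
probs = F.softmax(logits, dim=-1)        # probabilities, each row sums to 1

# Greedy decoding: pick the most probable token at the last position.
next_token = probs[0, -1].argmax()
```

In practice, training usually feeds the raw logits to `nn.CrossEntropyLoss`, which applies the softmax internally; the explicit softmax matters at inference time.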

There are two sides in that diagram because, when it was published, the architecture was being used for language translation. But generative language models doing next-token prediction use only the Transformer decoder, not the encoder.
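What makes the decoder side generative is the causal mask: each position may attend only to itself and earlier positions. PyTorch ships a helper for building it:

```python
import torch
import torch.nn as nn

# Square mask for a 4-token sequence: 0 on/below the diagonal
# (allowed), -inf above it (future positions are blocked).
mask = nn.Transformer.generate_square_subsequent_mask(4)
# mask is passed to the decoder's self-attention so that scores
# for future tokens become -inf and softmax drives them to 0.
```

This masking is the only structural difference between "encoder-style" bidirectional attention and "decoder-style" left-to-right attention.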

Here is a PyTorch tutorial that might help you go through how it works.

Language Modeling with nn.Transformer and torchtext — PyTorch Tutorials 2.0.1+cu117 documentation

