Understanding the Transformer

Regarding the Transformer: the Q/K/V formulation suggests that attention works like a learnable query system. Perhaps earlier search-engine algorithms were related to this, or some branch of search algorithms works in a similar way.


The explanation below is from the PyTorch Forums thread: Can anyone help me to understand this image? - #2 by J_Johnson - nlp - PyTorch Forums

Embeddings - these are learnable weights where each token (a token could be a word, sentence piece, subword, character, etc.) is converted into a vector of, say, 500 trainable values.
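A minimal sketch of such a trainable lookup table in PyTorch; the vocabulary size of 10,000 and the 500-dimensional embedding are illustrative assumptions, not values from the original post.

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration: a 10,000-token vocabulary,
# each token mapped to a 500-dimensional trainable vector.
vocab_size, d_model = 10_000, 500
embedding = nn.Embedding(vocab_size, d_model)

# One sequence of 4 token ids.
token_ids = torch.tensor([[12, 7, 3041, 999]])
vectors = embedding(token_ids)   # shape: (1, 4, 500), updated by backprop like any other weight
```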

Positional Encoding - for each token, we want to inform the model where it is located, order-wise, because attention and linear layers on their own have no built-in notion of sequence order. So we pass this in manually by adding a vector of sine and cosine values across the embedding dimensions (sine at even indices, cosine at odd indices) to each token's embedding.
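A sketch of the fixed sinusoidal positional encoding from "Attention Is All You Need"; it fills the whole embedding dimension, and the shapes follow the illustrative sizes above.

```python
import math
import torch

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Return a (seq_len, d_model) matrix of fixed positional encodings."""
    position = torch.arange(seq_len).unsqueeze(1)                                   # (seq_len, 1)
    div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)   # even dimensions get sine
    pe[:, 1::2] = torch.cos(position * div_term)   # odd dimensions get cosine
    return pe

# Added to the token embeddings before the first attention layer, e.g.
# x = vectors + sinusoidal_positional_encoding(vectors.size(1), vectors.size(2))
```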

This sequence of vectors goes through an attention layer, which is essentially a learnable, differentiable database lookup with queries, keys and values. In this case, we are "searching" for the information most useful for predicting the next token.
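A minimal sketch of single-head scaled dot-product self-attention, showing how the learnable query/key/value projections act like a soft lookup; the class name and sizes are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleHeadAttention(nn.Module):
    """Illustrative single-head self-attention: a learnable, soft 'database lookup'."""
    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)   # builds the queries
        self.k_proj = nn.Linear(d_model, d_model)   # builds the keys
        self.v_proj = nn.Linear(d_model, d_model)   # builds the values

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # Similarity of every query against every key, scaled for numerical stability.
        scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
        weights = F.softmax(scores, dim=-1)   # how strongly each position "retrieves" each other position
        return weights @ v                    # weighted sum of values
```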

The Feed Forward block is a small position-wise MLP (in the original architecture, two linear layers with a non-linearity in between), applied to each embedding in the sequence independently (i.e. it acts on the last dimension of a 3-dim tensor rather than on a 2-dim one).
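A sketch of that position-wise feed-forward block; the 4x hidden expansion is the choice used in the original paper and is assumed here for illustration.

```python
import torch
import torch.nn as nn

class PositionwiseFeedForward(nn.Module):
    """Applied with the same weights to every position in the sequence independently."""
    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); the linear layers act only on the last dim,
        # so each position is transformed separately but with shared weights.
        return self.net(x)

# ffn = PositionwiseFeedForward(d_model=500, d_ff=2000)   # 4x expansion, as in the paper
```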

Then the final Linear layer is where we get our predicted next token: it projects each embedding to a vector of scores over the vocabulary (logits), and a softmax turns those scores into probabilities between 0 and 1 that sum to 1.
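A sketch of that output projection plus the softmax over the vocabulary; sizes reuse the illustrative assumptions from above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model = 10_000, 500
lm_head = nn.Linear(d_model, vocab_size)

# hidden: (batch, seq_len, d_model), the output of the last Transformer block.
hidden = torch.randn(1, 4, d_model)
logits = lm_head(hidden)                    # (1, 4, vocab_size): unnormalized scores
probs = F.softmax(logits, dim=-1)           # each position now sums to 1 over the vocabulary
next_token = probs[:, -1].argmax(dim=-1)    # greedy choice of the predicted next token
```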

There are two sides (an encoder and a decoder) because, when that diagram was published, the architecture was being used for language translation. Generative language models that do next-token prediction use only the Transformer decoder, not the encoder.
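The key ingredient of decoder-style, next-token behaviour is a causal (look-ahead) mask, so that each position can only attend to earlier positions. A rough sketch using PyTorch's built-in encoder layer with such a mask, which is also the general approach the linked tutorial below is built around; the sizes are the illustrative ones used earlier.

```python
import torch
import torch.nn as nn

seq_len, d_model, n_heads = 4, 500, 10

# Additive causal mask: -inf above the diagonal blocks attention to future positions.
causal_mask = torch.triu(torch.full((seq_len, seq_len), float('-inf')), diagonal=1)

layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads, batch_first=True)
x = torch.randn(1, seq_len, d_model)
out = layer(x, src_mask=causal_mask)   # (1, seq_len, d_model); each position sees only its past
```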

Here is a PyTorch tutorial that might help you go through how it works.

Language Modeling with nn.Transformer and torchtext --- PyTorch Tutorials 2.0.1+cu117 documentation

