Understanding the Transformer

Regarding the Transformer, the meaning of Q, K and V suggests that it behaves more like a learnable query system; perhaps the algorithms behind earlier search engines were related to this, or some branch of search algorithms works in a similar way.


Can anyone help me to understand this image? - #2 by J_Johnson - nlp - PyTorch Forums

Embeddings - these are learnable weights where each token (a token could be a word, sentence piece, subword, character, etc.) is converted into a vector, say, of 500 trainable real-valued numbers.
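As a minimal sketch of that lookup, PyTorch's nn.Embedding holds exactly such a trainable table; the vocabulary size of 10,000 and the token ids below are made-up illustrative values, only the 500-dim embedding matches the text above:

```python
import torch
import torch.nn as nn

# Minimal sketch of a trainable embedding table.
# vocab_size and the token ids below are arbitrary illustrative values.
vocab_size, d_model = 10_000, 500
embedding = nn.Embedding(vocab_size, d_model)   # learnable (10000 x 500) weight matrix

token_ids = torch.tensor([[3, 17, 42]])         # (batch=1, seq_len=3) integer token ids
vectors = embedding(token_ids)                  # -> (1, 3, 500) trainable float vectors
print(vectors.shape)
```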

Positional Encoding - for each token, we want to tell the model where it sits in the sequence, since linear layers by themselves are not well suited to handling order information. So we pass this in manually by adding a vector of sine and cosine values, computed at different frequencies across the embedding dimensions, to each token's embedding vector.
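Here is a minimal sketch of the sinusoidal positional encoding from "Attention Is All You Need", assuming the same 500-dim embeddings as above; the function name and shapes are illustrative, not from the original post:

```python
import math
import torch

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Fixed sine/cosine position vectors, one row per position."""
    position = torch.arange(seq_len).unsqueeze(1)                    # (seq_len, 1)
    div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)   # even dimensions: sine
    pe[:, 1::2] = torch.cos(position * div_term)   # odd dimensions: cosine
    return pe

seq_len, d_model = 3, 500
x = torch.randn(1, seq_len, d_model)                       # stand-in for embedded tokens
x = x + sinusoidal_positional_encoding(seq_len, d_model)   # add position info to embeddings
```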

This sequence of vectors goes through an attention layer, which is essentially a learnable, differentiable database-search function with queries, keys and values. In this case, we are "searching" for the context that is most useful for predicting the next token.
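Below is a minimal single-head sketch of that "learnable database search", i.e. scaled dot-product attention; the projection names w_q/w_k/w_v and all sizes are illustrative, and in practice PyTorch's nn.MultiheadAttention wraps this up for you:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Match queries against keys, then use the weights to mix the values."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5   # similarity of every query to every key
    weights = F.softmax(scores, dim=-1)             # soft lookup weights, each row sums to 1
    return weights @ v                              # weighted sum of values

# Self-attention: Q, K and V are all linear projections of the same sequence.
batch, seq_len, d_model = 1, 3, 500
x = torch.randn(batch, seq_len, d_model)
w_q, w_k, w_v = (nn.Linear(d_model, d_model) for _ in range(3))
out = scaled_dot_product_attention(w_q(x), w_k(x), w_v(x))   # (1, 3, 500)
```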

The Feed Forward is just a small linear block, applied to each embedding in the sequence independently (i.e. it operates position-wise on a 3-dim tensor rather than a 2-dim one).
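A sketch of that position-wise feed-forward block; the hidden size of 2048 is an assumption taken from the original Transformer paper, not from the post. Because nn.Linear acts on the last dimension, every position in the (batch, seq_len, d_model) tensor is transformed independently:

```python
import torch
import torch.nn as nn

d_model, d_ff = 500, 2048          # d_ff is an illustrative hidden size
ffn = nn.Sequential(
    nn.Linear(d_model, d_ff),
    nn.ReLU(),
    nn.Linear(d_ff, d_model),
)

x = torch.randn(1, 3, d_model)     # (batch, seq_len, d_model)
y = ffn(x)                         # the same transform applied to each of the 3 positions
print(y.shape)                     # -> torch.Size([1, 3, 500])
```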

Then the final Linear layer is where we get our predicted next token, in the form of a vector of scores over the vocabulary; we apply a softmax to turn these scores into values between 0 and 1 that sum to 1, i.e. a probability distribution.
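A sketch of that output head, assuming the same made-up 10,000-token vocabulary as earlier; the Linear layer maps each position's vector to one score per vocabulary entry, and softmax turns those scores into probabilities:

```python
import torch
import torch.nn as nn

d_model, vocab_size = 500, 10_000
lm_head = nn.Linear(d_model, vocab_size)    # projects hidden states to vocabulary scores

h = torch.randn(1, 3, d_model)              # decoder output for 3 positions
probs = torch.softmax(lm_head(h), dim=-1)   # (1, 3, 10000); each row sums to 1
next_token = probs[0, -1].argmax()          # greedy choice of the predicted next token
```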

There are two sides (an encoder and a decoder) because when that diagram was first published, the architecture was being used for language translation. Generative language models that do next-token prediction use only the Transformer decoder, not the encoder.

Here is a PyTorch tutorial that might help you go through how it works.

Language Modeling with nn.Transformer and torchtext --- PyTorch Tutorials 2.0.1+cu117 documentation

