Understanding the Transformer

In the Transformer, the query/key/value (QKV) formulation makes the model look like a learnable retrieval system; it would not be surprising if earlier search-engine algorithms, or some branch of search algorithms, were related to this idea.


The following explanation is quoted from a PyTorch Forums answer: Can anyone help me to understand this image? - #2 by J_Johnson - nlp - PyTorch Forums

Embeddings - these are learnable weights where each token (a token could be a word, a sentence piece, a subword, a character, etc.) is converted into a vector of, say, 500 trainable values.
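A minimal sketch of this in PyTorch, where the vocabulary size and embedding dimension are illustrative assumptions rather than values from the original post:

```python
import torch
import torch.nn as nn

vocab_size, d_model = 10_000, 500   # illustrative sizes, assumed for this sketch
embedding = nn.Embedding(vocab_size, d_model)

token_ids = torch.tensor([[5, 42, 7]])   # a batch with one sequence of 3 token ids
vectors = embedding(token_ids)           # shape: (1, 3, 500), trainable weights
```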

Positional Encoding - for each token, we want to inform the model where it is located, order-wise. This is needed because attention and linear layers, on their own, have no built-in notion of sequence order. So we pass this in manually by adding a vector of sine and cosine values (sine at the even indices, cosine at the odd indices) to the embedding vector.
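A sketch of the sinusoidal encoding, assuming the standard formulation from "Attention Is All You Need" and an even embedding dimension:

```python
import math
import torch

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...)."""
    position = torch.arange(seq_len).unsqueeze(1)                 # (seq_len, 1)
    div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)   # even indices get sine
    pe[:, 1::2] = torch.cos(position * div_term)   # odd indices get cosine
    return pe

# Added to (not concatenated with) the embeddings:
# x = embedding(token_ids) + sinusoidal_positional_encoding(seq_len, d_model)
```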

This sequence of vectors goes through an attention layer, which is basically like a learnable, digitized database search function with queries, keys and values. In this case, we are "searching" for the most likely next token.
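The "database search" analogy maps directly onto scaled dot-product attention: each query is matched against all keys, and the match scores weight the values. A minimal single-head sketch, without masking or learned projections:

```python
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """q, k, v: (batch, seq_len, d_k). Scores measure how well each query
    matches each key; softmax turns them into weights over the values."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)   # (batch, seq_len, seq_len)
    weights = F.softmax(scores, dim=-1)
    return weights @ v                                   # weighted sum of values
```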

The Feed Forward block is a small position-wise MLP (in the original paper, two linear layers with a nonlinearity in between), applied to each embedding in the sequence independently (i.e., it acts on the last dimension of a 3-dim tensor rather than a 2-dim one).
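A sketch of that block; the inner dimension of 2048 is the paper's default, assumed here for illustration:

```python
import torch.nn as nn

d_model, d_ff = 500, 2048   # d_ff is the paper's default inner size, assumed here

feed_forward = nn.Sequential(
    nn.Linear(d_model, d_ff),
    nn.ReLU(),
    nn.Linear(d_ff, d_model),
)
# On a (batch, seq_len, d_model) tensor, nn.Linear acts on the last dimension,
# so each position in the sequence is transformed independently.
```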

Then the final Linear layer is where we get our predicted next token, as a vector of scores over the vocabulary, to which we apply a softmax so the values become a probability distribution (each between 0 and 1, summing to 1).
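A sketch of that output head, reusing the illustrative sizes assumed above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d_model, vocab_size = 500, 10_000           # illustrative sizes, assumed above
lm_head = nn.Linear(d_model, vocab_size)

# hidden: (batch, seq_len, d_model) output of the last Transformer layer
hidden = torch.randn(1, 3, d_model)
logits = lm_head(hidden)                    # (1, 3, vocab_size)
probs = F.softmax(logits, dim=-1)           # each position sums to 1 over the vocab
next_token = probs[:, -1].argmax(dim=-1)    # greedy pick of the next token
```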

There are two sides (an encoder and a decoder) because when that diagram was developed, the architecture was being used for language translation. But generative language models doing next-token prediction use only the Transformer decoder, not the encoder.
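In PyTorch terms, a decoder-only language model is often sketched with nn.TransformerEncoder plus a causal mask (the masked self-attention is what makes it a "decoder" in the GPT sense); this is one common construction, not the only one, and the sizes below are assumptions:

```python
import torch
import torch.nn as nn

d_model, nhead, seq_len = 500, 10, 3   # illustrative; nhead must divide d_model

layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
decoder_only = nn.TransformerEncoder(layer, num_layers=2)

# Causal mask: position i may only attend to positions <= i.
causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)

x = torch.randn(1, seq_len, d_model)      # embeddings + positional encoding
out = decoder_only(x, mask=causal_mask)   # (1, seq_len, d_model)
```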

Here is a PyTorch tutorial that might help you go through how it works.

Language Modeling with nn.Transformer and torchtext — PyTorch Tutorials 2.0.1+cu117 documentation

