llama2.c (4): forward, sample, decode

1、forward

```c
float* logits = forward(transformer, token, pos);
```

Given the model weights in transformer, the current token, and its position pos, forward computes the logits that score each candidate for the next token (the whole Transformer boils down to matrix multiplies and elementwise arithmetic).

```
(gdb) p *logits
$9 = 2.19071054
```
```c
// attention rmsnorm
rmsnorm(s->xb, x, w->rms_att_weight + l*dim, dim);
// qkv matmuls for this position
quantize(&s->xq, s->xb, dim);
matmul(s->q, &s->xq, w->wq + l, dim, dim);
```

```
(gdb) ptype s->xb
type = float *
```

Quantizing the input here ensures it has the same data type as the (already quantized) weights before the matmul.

2、sample

2.1 sample() not entered (still inside the prompt)

```c
if (pos < num_prompt_tokens - 1) {
    // if we are still processing the input prompt, force the next prompt token
    next = prompt_tokens[pos + 1];
} else {
    // otherwise sample the next token from the logits
    next = sample(sampler, logits);
}
```

**Determining next:** while we are still inside the input prompt, the next prompt token is forced as next; only after the prompt is consumed does sample() pick next from the logits.

So here it executes

```c
next = prompt_tokens[pos + 1];
```

```
(gdb) p pos
$10 = 0
(gdb) p next
$11 = 15043  // "Hello"
```

2.2 Entering sample()

```
(gdb) p *logits
$20 = 0.657589614
```

```c
int sample(Sampler* sampler, float* logits) {
    // sample the token given the logits and some hyperparameters
    int next;
    if (sampler->temperature == 0.0f) {
        // greedy argmax sampling: take the token with the highest probability
        next = sample_argmax(logits, sampler->vocab_size);
    } else {
        // apply the temperature to the logits
        for (int q=0; q<sampler->vocab_size; q++) { logits[q] /= sampler->temperature; }
        // apply softmax to the logits to get the probabilities for next token
        softmax(logits, sampler->vocab_size);
        // flip a (float) coin (this is our source of entropy for sampling)
        float coin = random_f32(&sampler->rng_state);
        // we sample from this distribution to get the next token
        if (sampler->topp <= 0 || sampler->topp >= 1) {
            // simply sample from the predicted probability distribution
            next = sample_mult(logits, sampler->vocab_size, coin);
        } else {
            // top-p (nucleus) sampling, clamping the least likely tokens to zero
            next = sample_topp(logits, sampler->vocab_size, sampler->topp, sampler->probindex, coin);
        }
    }
    return next;
}
```

3、decode

token = 1 (BOS), next = 15043

Call site:

```c
char* piece = decode(tokenizer, token, next);
```

Definition:

```c
char* decode(Tokenizer* t, int prev_token, int token)
{
    char *piece = t->vocab[token];   // "Hello"
    // following BOS (1) token, sentencepiece decoder strips any leading whitespace (see PR #89)
    if (prev_token == 1 && piece[0] == ' ') { piece++; }
    // careful, some tokens designate raw bytes, and look like e.g. '<0x01>'
    // parse this and convert and return the actual byte
    unsigned char byte_val;
    if (sscanf(piece, "<0x%02hhX>", &byte_val) == 1) {
        piece = (char*)t->byte_pieces + byte_val * 2;
    }
    return piece;
}
```

```
(gdb) p piece
$17 = 0x55ae4f286661 "Hello"
```