Large Model Series: Speculative Decoding with Prompt Lookup Decoding (Code Walkthrough)

Official code: GitHub - apoorvumang/prompt-lookup-decoding

UPDATE 2: This method is now available in vLLM as well by setting speculative_model="[ngram]" 🥳
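For reference, a hedged sketch of the vLLM setup mentioned above. Apart from speculative_model="[ngram]" (which the update names), the model name, num_speculative_tokens, and ngram_prompt_lookup_max are assumptions based on vLLM's speculative-decoding API around that time and may differ in current releases:

```python
# Sketch only: parameter names other than speculative_model are assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-2-7b-chat-hf",  # placeholder model
    speculative_model="[ngram]",            # n-gram prompt lookup acts as the draft model
    num_speculative_tokens=10,              # assumed knob: candidate tokens proposed per step
    ngram_prompt_lookup_max=3,              # assumed knob: max n-gram size to match in the prompt
)

outputs = llm.generate(["Summarize this document: ..."], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```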

UPDATE: This has been added to the transformers library. Please see this for a code example, or simply add prompt_lookup_num_tokens=10 to your model.generate(...) call.
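For the transformers path, a minimal sketch using the prompt_lookup_num_tokens argument named above (the model and prompt here are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Summarize the following document: ...", return_tensors="pt")

# Passing prompt_lookup_num_tokens enables prompt-lookup speculative decoding.
outputs = model.generate(**inputs, max_new_tokens=64, prompt_lookup_num_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```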

TLDR: We modify speculative decoding where we replace the draft model with simple string matching in the prompt to generate candidate token sequences. This results in significant speedups (2x-4x) in input-grounded tasks, with no effect on output quality. This method can be used with any decoder model without model changes or external datastore, and with both greedy and sampling techniques.

Intuition: In several LLM use cases where you're doing input-grounded generation (summarization, document QA, multi-turn chat, code editing), there is high n-gram overlap between LLM input (prompt) and LLM output. This could be entity names, phrases, or code chunks that the LLM directly copies from the input while generating the output. Prompt lookup exploits this pattern to speed up autoregressive decoding in LLMs.

The entire "draft model" boils down to the following string-matching function:

```python
import torch

def find_candidate_pred_tokens(input_ids, max_ngram_size=3, num_pred_tokens=10):
    input_length = input_ids.size(1)

    for ngram_size in range(max_ngram_size, 0, -1):
        # Extract the last n tokens as our search ngram
        ngram = input_ids[0, -ngram_size:].tolist()

        # Create sliding windows of size ngram_size
        windows = input_ids.unfold(dimension=1, size=ngram_size, step=1)

        # Convert ngram to a tensor for comparison
        ngram_tensor = torch.tensor(ngram, device=input_ids.device).unsqueeze(0)

        # Find where the windows match the ngram
        matches = (windows == ngram_tensor).all(dim=2)

        # Get the indices of matches
        match_indices = matches.nonzero(as_tuple=True)[1]

        # Iterate through match indices to find a valid continuation
        for idx in match_indices:
            start_idx = idx + ngram_size
            end_idx = start_idx + num_pred_tokens
            # Ensure we don't go beyond the length of input_ids and avoid self-match
            if end_idx <= input_length and start_idx < input_length - ngram_size:
                return input_ids[0, start_idx:end_idx]

    # If no match is found, return an empty tensor
    return torch.tensor([], dtype=torch.long, device=input_ids.device)
```
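To see what the function returns, here is a toy run with made-up token IDs: the trailing bigram (5, 6) also occurs earlier in the sequence, so the lookup proposes the tokens that followed it there.

```python
import torch

# Stand-in IDs for "prompt + tokens generated so far"; the last two
# tokens (5, 6) repeat an earlier bigram, mimicking copy-from-input.
input_ids = torch.tensor([[1, 5, 6, 7, 8, 9, 2, 3, 5, 6]])

candidates = find_candidate_pred_tokens(input_ids, max_ngram_size=3, num_pred_tokens=4)
print(candidates)  # tensor([7, 8, 9, 2]): the tokens that followed (5, 6) earlier
```

Note that the function greedily returns the first valid match at the largest n-gram size it can find. As in standard speculative decoding, the candidate tokens are then verified by the target model in a single forward pass, which is why output quality is unaffected.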

TODOs/Thoughts/Future work

  • There are probably better ways to do string matching than the current one, and there are several obvious things to improve, e.g. what to do when there are multiple matches? What's the ideal length of continuation?
  • We haven't yet tried sampling, although there's no reason it shouldn't work.
    • Here, one additional thing to test would be whether prompt lookup while sampling can affect hallucination rates, since this artificially increases the probability of sampling exact sequences from the input (this was suggested by my colleague Shwetha S)
  • Testing actual FLOPs impact and tradeoffs is needed
  • Also need to figure out the best hyperparams - 3 and 10 were chosen with very little testing
  • It would be an interesting challenge to design the "best lookup function" for decoding; it could even be a competition?

This method may still be problematic: as the author notes, it can encourage hallucination, and an n-gram match does not guarantee a speedup, since candidates rejected during verification are wasted work.
