Large Model Series, Speculative Decoding: A Code Walkthrough of Prompt Lookup Decoding

Official code: GitHub - apoorvumang/prompt-lookup-decoding

UPDATE 2: This method is now available in vLLM as well by setting speculative_model="[ngram]" 🥳
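For reference, a minimal sketch of the vLLM setup mentioned above. The model name is an illustrative choice, and num_speculative_tokens / ngram_prompt_lookup_max are assumptions based on vLLM's ngram speculative decoding options at the time of writing; the exact API may have since changed:

```python
from vllm import LLM, SamplingParams

# speculative_model="[ngram]" enables prompt-lookup (ngram) speculation.
# The other speculative parameters here are assumptions and may differ
# across vLLM versions.
llm = LLM(
    model="meta-llama/Llama-2-7b-chat-hf",  # illustrative model choice
    speculative_model="[ngram]",
    num_speculative_tokens=10,
    ngram_prompt_lookup_max=3,
)

outputs = llm.generate(["Summarize this document: ..."], SamplingParams(max_tokens=128))
print(outputs[0].outputs[0].text)
```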

UPDATE: This has been added to the transformers library. Please see this for a code example, or simply add prompt_lookup_num_tokens=10 to your model.generate(...) call.
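Following the note above, enabling this in transformers is a one-argument change to generate(). A minimal sketch, where the model choice, prompt, and generation length are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Document: ...\n\nSummarize the document above:"
inputs = tokenizer(prompt, return_tensors="pt")

# prompt_lookup_num_tokens switches generate() to prompt-lookup-based
# assisted decoding: candidate continuations are copied from the prompt.
outputs = model.generate(**inputs, max_new_tokens=128, prompt_lookup_num_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```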

TLDR: We modify speculative decoding by replacing the draft model with simple string matching over the prompt to generate candidate token sequences. This yields significant speedups (2x-4x) on input-grounded tasks, with no effect on output quality. The method works with any decoder model, requires no model changes or external datastore, and supports both greedy decoding and sampling.

Intuition: In several LLM use cases involving input-grounded generation (summarization, document QA, multi-turn chat, code editing), there is high n-gram overlap between the LLM input (prompt) and the LLM output. This could be entity names, phrases, or code chunks that the LLM copies directly from the input while generating. Prompt lookup exploits this pattern to speed up autoregressive decoding.
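To place the matcher below in context: in speculative decoding, the copied candidates still have to be verified by the target model in a single forward pass. Here is a hedged sketch of the greedy verification step, assuming a transformers-style causal LM whose output exposes .logits; it illustrates the general scheme, not the repo's exact loop:

```python
import torch

@torch.no_grad()
def verify_candidates_greedy(model, input_ids, candidates):
    """Run one forward pass over prompt + candidates and keep the longest
    candidate prefix that matches the model's own greedy predictions."""
    extended = torch.cat([input_ids, candidates.unsqueeze(0)], dim=1)
    logits = model(extended).logits

    # The last n+1 positions predict candidates[0..n-1] plus one token
    # beyond the last candidate (a "free" token even if all are rejected).
    n = candidates.size(0)
    preds = logits[0, -(n + 1):, :].argmax(dim=-1)

    accepted = 0
    while accepted < n and preds[accepted].item() == candidates[accepted].item():
        accepted += 1

    # All verified candidate tokens, plus the model's own next token.
    return torch.cat([candidates[:accepted], preds[accepted:accepted + 1]])
```

Each call thus appends at least one correct token, and up to num_pred_tokens + 1 when the copied span is exactly what the model would have generated anyway.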

```python
import torch

def find_candidate_pred_tokens(input_ids, max_ngram_size=3, num_pred_tokens=10):
    input_length = input_ids.size(1)

    for ngram_size in range(max_ngram_size, 0, -1):
        # Extract the last n tokens as our search ngram
        ngram = input_ids[0, -ngram_size:].tolist()

        # Create sliding windows of size ngram_size
        windows = input_ids.unfold(dimension=1, size=ngram_size, step=1)

        # Convert ngram to a tensor for comparison
        ngram_tensor = torch.tensor(ngram, device=input_ids.device).unsqueeze(0)

        # Find where the windows match the ngram
        matches = (windows == ngram_tensor).all(dim=2)

        # Get the indices of matches
        match_indices = matches.nonzero(as_tuple=True)[1]

        # Iterate through match indices to find a valid continuation
        for idx in match_indices:
            start_idx = idx + ngram_size
            end_idx = start_idx + num_pred_tokens
            # Ensure we don't go beyond the length of input_ids and avoid self-match
            if end_idx <= input_length and start_idx < input_length - ngram_size:
                return input_ids[0, start_idx:end_idx]

    # If no match is found, return an empty tensor
    return torch.tensor([], dtype=torch.long, device=input_ids.device)
```
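A quick toy run of the matcher above (the token IDs are made up for illustration): the trailing 3-gram [3, 5, 6] also occurs earlier in the sequence, so the tokens following that earlier occurrence come back as the candidate continuation.

```python
import torch

input_ids = torch.tensor([[1, 2, 3, 5, 6, 7, 8, 9, 2, 3, 5, 6]])
candidates = find_candidate_pred_tokens(input_ids, max_ngram_size=3, num_pred_tokens=4)
print(candidates)  # tensor([7, 8, 9, 2]): the continuation after the earlier [3, 5, 6]
```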

TODOs/Thoughts/Future work

  • There are probably better ways to do string matching than the current one, and there are several obvious things to improve, e.g. what to do when there are multiple matches? What's the ideal length of continuation?
  • We haven't yet tried sampling, although there's no reason it shouldn't work.
    • Here, one additional thing to test would be whether prompt lookup while sampling can affect hallucination rates, since this artificially increases the probability of sampling exact sequences from the input (this was suggested by my colleague Shwetha S)
  • Testing actual FLOPs impact and tradeoffs is needed
  • Also need to figure out the best hyperparameters - 3 and 10 were chosen with very little testing
  • It would be an interesting challenge to design the "best lookup function" for decoding, could even be a competition?

This method may still be problematic. As the author notes, it could affect hallucination (when sampling), and an n-gram match does not necessarily translate into a speedup, since candidates that fail verification are simply thrown away.
