LLM Series | Speculative Decoding: A Code Walkthrough of Prompt Lookup Decoding

Official code: GitHub - apoorvumang/prompt-lookup-decoding

UPDATE 2: This method is now available in vLLM as well by setting speculative_model="[ngram]" 🥳
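
For reference, here is a minimal vLLM sketch. Only speculative_model="[ngram]" is confirmed by the update above; the model name and the num_speculative_tokens / ngram_prompt_lookup_max keyword arguments are assumptions about the speculative-decoding config and may be named differently in your vLLM version, so check its docs.

python
from vllm import LLM, SamplingParams

# Assumed configuration: only speculative_model="[ngram]" is confirmed above;
# the other speculation arguments may differ across vLLM versions.
llm = LLM(
    model="meta-llama/Llama-2-7b-chat-hf",  # hypothetical example model
    speculative_model="[ngram]",            # draft tokens via prompt n-gram lookup
    num_speculative_tokens=10,              # assumed: draft length per step
    ngram_prompt_lookup_max=3,              # assumed: max n-gram size to match
)
outputs = llm.generate(["Summarize this document: ..."], SamplingParams(max_tokens=128))
print(outputs[0].outputs[0].text)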

UPDATE: This has been added to the transformers library. Please see this for a code example, or simply add prompt_lookup_num_tokens=10 to your model.generate(...) call.
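
In transformers this is a one-line change to generate. A minimal sketch, with gpt2 standing in for whichever decoder model you actually use:

python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # hypothetical example model
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Summarize the following document: ...", return_tensors="pt")
# prompt_lookup_num_tokens switches generate() to prompt lookup decoding
outputs = model.generate(**inputs, prompt_lookup_num_tokens=10, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))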

TLDR: We modify speculative decoding by replacing the draft model with simple string matching over the prompt to generate candidate token sequences. This yields significant speedups (2x-4x) on input-grounded tasks, with no effect on output quality. The method works with any decoder model, requires no model changes or external datastore, and supports both greedy and sampling techniques.

Intuition: In several LLM use cases involving input-grounded generation (summarization, document QA, multi-turn chat, code editing), there is high n-gram overlap between the LLM input (prompt) and the LLM output. This could be entity names, phrases, or code chunks that the LLM copies directly from the input while generating the output. Prompt lookup exploits this pattern to speed up autoregressive decoding in LLMs.

python
import torch

def find_candidate_pred_tokens(input_ids, max_ngram_size=3, num_pred_tokens=10):
    input_length = input_ids.size(1)

    for ngram_size in range(max_ngram_size, 0, -1):
        # Extract the last n tokens as our search ngram
        ngram = input_ids[0, -ngram_size:].tolist()

        # Create sliding windows of size ngram_size
        windows = input_ids.unfold(dimension=1, size=ngram_size, step=1)

        # Convert ngram to a tensor for comparison
        ngram_tensor = torch.tensor(ngram, device=input_ids.device).unsqueeze(0)

        # Find where the windows match the ngram
        matches = (windows == ngram_tensor).all(dim=2)

        # Get the indices of matches
        match_indices = matches.nonzero(as_tuple=True)[1]

        # Iterate through match indices to find a valid continuation
        for idx in match_indices:
            start_idx = idx + ngram_size
            end_idx = start_idx + num_pred_tokens
            # Ensure we don't go beyond the length of input_ids and avoid self-match
            if end_idx <= input_length and start_idx < input_length - ngram_size:
                return input_ids[0, start_idx:end_idx]

    # If no match is found, return an empty tensor
    return torch.tensor([], dtype=torch.long, device=input_ids.device)
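
As a quick sanity check, here is a hypothetical call on a toy token sequence. The trailing bigram (5, 6) also occurs at positions 1-2, so the function returns the tokens that followed that earlier occurrence:

python
import torch

# The last two tokens (5, 6) also appear at positions 1-2, so the
# continuation starting at position 3 is returned as the draft.
input_ids = torch.tensor([[1, 5, 6, 7, 8, 9, 5, 6]])
candidates = find_candidate_pred_tokens(input_ids, max_ngram_size=3, num_pred_tokens=4)
print(candidates)  # tensor([7, 8, 9, 5])

These candidate tokens are then verified in a single forward pass of the target model, exactly as in standard speculative decoding, so an incorrect draft costs little.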

TODOs/Thoughts/Future work

  • There are probably better ways to do string matching than the current one, and there are several obvious things to improve, e.g., what to do when there are multiple matches? What's the ideal length of continuation?
  • We haven't yet tried sampling, although there's no reason it shouldn't work.
    • Here, one additional thing to test would be whether prompt lookup while sampling can affect hallucination rates, since this artificially increases the probability of sampling exact sequences from the input (this was suggested by my colleague Shwetha S)
  • Testing actual FLOPs impact and tradeoffs is needed
  • We also need to figure out the best hyperparameters; the defaults of 3 (max_ngram_size) and 10 (num_pred_tokens) were chosen with very little testing
  • It would be an interesting challenge to design the "best lookup function" for decoding; it could even be a competition?

This method may still have problems. As the author himself notes, hallucination is a possible concern, and a successful n-gram match does not necessarily translate into a speedup, since the drafted tokens can still be rejected during verification.
