NLP (64) Computing the Token Length of the LLaMA-2 Model with FastChat

Deploying the LLaMA-2 model

In the article NLP (59) Deploying the Baichuan Large Model with FastChat, I introduced the FastChat framework and showed how to use it to deploy the Baichuan model.

This article deploys the LLaMA-2 70B model and exposes it through an OpenAI-compatible API. The Dockerfile for the deployment is as follows:

dockerfile
FROM nvidia/cuda:11.7.1-runtime-ubuntu20.04

RUN apt-get update -y && apt-get install -y python3.9 python3.9-distutils curl
RUN curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
RUN python3.9 get-pip.py
RUN pip3 install fschat

The docker-compose.yml file is as follows:

yaml
version: "3.9"

services:
  fastchat-controller:
    build:
      context: .
      dockerfile: Dockerfile
    image: fastchat:latest
    ports:
      - "21001:21001"
    entrypoint: ["python3.9", "-m", "fastchat.serve.controller", "--host", "0.0.0.0", "--port", "21001"]

  fastchat-model-worker:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - ./model:/root/model
    image: fastchat:latest
    ports:
      - "21002:21002"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0', '1']
              capabilities: [gpu]
    entrypoint: ["python3.9", "-m", "fastchat.serve.model_worker", "--model-names", "llama2-70b-chat", "--model-path", "/root/model/llama2/Llama-2-70b-chat-hf", "--num-gpus", "2", "--gpus",  "0,1", "--worker-address", "http://fastchat-model-worker:21002", "--controller-address", "http://fastchat-controller:21001", "--host", "0.0.0.0", "--port", "21002"]

  fastchat-api-server:
    build:
      context: .
      dockerfile: Dockerfile
    image: fastchat:latest
    ports:
      - "8000:8000"
    entrypoint: ["python3.9", "-m", "fastchat.serve.openai_api_server", "--controller-address", "http://fastchat-controller:21001", "--host", "0.0.0.0", "--port", "8000"]

After a successful deployment, the model occupies two A100 GPUs, using about 66 GB of GPU memory on each card.

Test whether the model has been deployed successfully:

bash
curl http://localhost:8000/v1/models

The output is as follows:

json
{
  "object": "list",
  "data": [
    {
      "id": "llama2-70b-chat",
      "object": "model",
      "created": 1691504717,
      "owned_by": "fastchat",
      "root": "llama2-70b-chat",
      "parent": null,
      "permission": [
        {
          "id": "modelperm-3XG6nzMAqfEkwfNqQ52fdv",
          "object": "model_permission",
          "created": 1691504717,
          "allow_create_engine": false,
          "allow_sampling": true,
          "allow_logprobs": true,
          "allow_search_indices": true,
          "allow_view": true,
          "allow_fine_tuning": false,
          "organization": "*",
          "group": null,
          "is_blocking": false
        }
      ]
    }
  ]
}

The LLaMA-2 70B model has been deployed successfully!
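
Since the server is OpenAI-compatible, you can also call it from the openai Python SDK by pointing it at the FastChat server. Here is a minimal sketch, assuming the pre-1.0 openai package is installed; the API key is a placeholder, since FastChat does not validate it by default:

python
# -*- coding: utf-8 -*-
import openai

# Point the SDK at the FastChat OpenAI-compatible server.
openai.api_base = "http://localhost:8000/v1"
openai.api_key = "EMPTY"  # placeholder; FastChat does not check the key

response = openai.ChatCompletion.create(
    model="llama2-70b-chat",
    messages=[{"role": "user", "content": "What is your name?"}],
)
print(response["choices"][0]["message"]["content"])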

Computing the prompt token length

The FastChat open-source project on GitHub provides an API for computing the token length of a prompt, implemented in fastchat/serve/model_worker.py. It is called as follows:

bash
curl --location 'localhost:21002/count_token' \
--header 'Content-Type: application/json' \
--data '{"prompt": "What is your name?"}'

The output is as follows:

json
{
  "count": 6,
  "error_code": 0
}
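
The same endpoint can also be called from Python. Here is a minimal sketch using the requests library, with the worker address taken from the docker-compose file above:

python
import requests

# Ask the model worker to count the tokens of a raw prompt string.
resp = requests.post(
    "http://localhost:21002/count_token",
    json={"prompt": "What is your name?"},
)
print(resp.json())  # {'count': 6, 'error_code': 0}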

Computing the conversation token length

Computing the token length of a conversation in FastChat is somewhat more involved.

First, we need to obtain the conversation template of the LLaMA-2 70B model with the following API call:

bash
curl --location --request POST 'http://localhost:21002/worker_get_conv_template'

The output is as follows:

python
{'conv': {'messages': [],
          'name': 'llama-2',
          'offset': 0,
          'roles': ['[INST]', '[/INST]'],
          'sep': ' ',
          'sep2': ' </s><s>',
          'sep_style': 7,
          'stop_str': None,
          'stop_token_ids': [2],
          'system_message': 'You are a helpful, respectful and honest '
                            'assistant. Always answer as helpfully as '
                            'possible, while being safe. Your answers should '
                            'not include any harmful, unethical, racist, '
                            'sexist, toxic, dangerous, or illegal content. '
                            'Please ensure that your responses are socially '
                            'unbiased and positive in nature.\n'
                            '\n'
                            'If a question does not make any sense, or is not '
                            'factually coherent, explain why instead of '
                            "answering something not correct. If you don't "
                            "know the answer to a question, please don't share "
                            'false information.',
          'system_template': '[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n'}}
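
If you prefer to fetch this template programmatically, here is a minimal sketch using requests:

python
import requests

# Fetch the conversation template from the model worker.
conv_template = requests.post("http://localhost:21002/worker_get_conv_template").json()["conv"]
print(conv_template["name"])  # llama-2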

The conversation module in FastChat (fastchat/conversation.py) provides the code for assembling conversations into prompts. It is not reproduced here; when using it, simply copy the whole file, which does not depend on any third-party modules.

We need to turn a conversation given in OpenAI's message format into the corresponding prompt. The input conversation (messages) is as follows:

messages = [
    {"role": "system", "content": "You are Jack, you are 20 years old, answer questions with humor."},
    {"role": "user", "content": "What is your name?"},
    {"role": "assistant", "content": " Well, well, well! Look who's asking the questions now! My name is Jack, but you can call me the king of the castle, the lord of the rings, or the prince of the pizza party. Whatever floats your boat, my friend!"},
    {"role": "user", "content": "How old are you?"},
    {"role": "assistant", "content": " Oh, you want to know my age? Well, let's just say I'm older than a bottle of wine but younger than a bottle of whiskey. I'm like a fine cheese, getting better with age, but still young enough to party like it's 1999!"},
    {"role": "user", "content": "Where is your hometown?"}
]

The Python code is as follows:

python
# -*- coding: utf-8 -*-
# @place: Pudong, Shanghai 
# @file: prompt.py
# @time: 2023/8/8 19:24
from conversation import Conversation, SeparatorStyle

messages = [
    {"role": "system", "content": "You are Jack, you are 20 years old, answer questions with humor."},
    {"role": "user", "content": "What is your name?"},
    {"role": "assistant", "content": " Well, well, well! Look who's asking the questions now! My name is Jack, but you can call me the king of the castle, the lord of the rings, or the prince of the pizza party. Whatever floats your boat, my friend!"},
    {"role": "user", "content": "How old are you?"},
    {"role": "assistant", "content": " Oh, you want to know my age? Well, let's just say I'm older than a bottle of wine but younger than a bottle of whiskey. I'm like a fine cheese, getting better with age, but still young enough to party like it's 1999!"},
    {"role": "user", "content": "Where is your hometown?"}
]

# The conversation template returned by the worker_get_conv_template API above.
llama2_conv = {"conv": {
    "name": "llama-2",
    "system_template": "[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n",
    "system_message": "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.",
    "roles": ["[INST]", "[/INST]"],
    "messages": [],
    "offset": 0,
    "sep_style": 7,
    "sep": " ",
    "sep2": " </s><s>",
    "stop_str": None,
    "stop_token_ids": [2]
}}
conv = llama2_conv['conv']

conv = Conversation(
        name=conv["name"],
        system_template=conv["system_template"],
        system_message=conv["system_message"],
        roles=conv["roles"],
        messages=list(conv["messages"]),  # prevent in-place modification
        offset=conv["offset"],
        sep_style=SeparatorStyle(conv["sep_style"]),
        sep=conv["sep"],
        sep2=conv["sep2"],
        stop_str=conv["stop_str"],
        stop_token_ids=conv["stop_token_ids"],
    )

if isinstance(messages, str):
    prompt = messages
else:
    for message in messages:
        msg_role = message["role"]
        if msg_role == "system":
            conv.set_system_message(message["content"])
        elif msg_role == "user":
            conv.append_message(conv.roles[0], message["content"])
        elif msg_role == "assistant":
            conv.append_message(conv.roles[1], message["content"])
        else:
            raise ValueError(f"Unknown role: {msg_role}")

    # Add a blank message for the assistant.
    conv.append_message(conv.roles[1], None)
    prompt = conv.get_prompt()

print(repr(prompt))

The assembled prompt is as follows:

text
"[INST] <<SYS>>\nYou are Jack, you are 20 years old, answer questions with humor.\n<</SYS>>\n\nWhat is your name?[/INST]  Well, well, well! Look who's asking the questions now! My name is Jack, but you can call me the king of the castle, the lord of the rings, or the prince of the pizza party. Whatever floats your boat, my friend! </s><s>[INST] How old are you? [/INST]  Oh, you want to know my age? Well, let's just say I'm older than a bottle of wine but younger than a bottle of whiskey. I'm like a fine cheese, getting better with age, but still young enough to party like it's 1999! </s><s>[INST] Where is your hometown? [/INST]"

Finally, we call the prompt token counting API (see the section on computing the prompt token length above) on this prompt; the token length of the conversation is 199.
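
In code, this last step is just another call to the count_token endpoint, passing in the prompt string built above (continuing from prompt.py):

python
import requests

# Count the tokens of the assembled conversation prompt.
resp = requests.post("http://localhost:21002/count_token", json={"prompt": prompt})
print(resp.json())  # {'count': 199, 'error_code': 0}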

We can verify this conversation token length with the chat completion endpoint (v1/chat/completions) provided by FastChat. The request is:

bash
curl --location 'http://localhost:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
    "model": "llama2-70b-chat",
    "messages": [{"role": "system", "content": "You are Jack, you are 20 years old, answer questions with humor."}, {"role": "user", "content": "What is your name?"},{"role": "assistant", "content": " Well, well, well! Look who'\''s asking the questions now! My name is Jack, but you can call me the king of the castle, the lord of the rings, or the prince of the pizza party. Whatever floats your boat, my friend!"}, {"role": "user", "content": "How old are you?"}, {"role": "assistant", "content": " Oh, you want to know my age? Well, let'\''s just say I'\''m older than a bottle of wine but younger than a bottle of whiskey. I'\''m like a fine cheese, getting better with age, but still young enough to party like it'\''s 1999!"}, {"role": "user", "content": "Where is your hometown?"}]
}'

The output is:

json
{
    "id": "chatcmpl-mQxcaQcNSNMFahyHS7pamA",
    "object": "chat.completion",
    "created": 1691506768,
    "model": "llama2-70b-chat",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": " Ha! My hometown? Well, that's a tough one. I'm like a bird, I don't have a nest, I just fly around and land wherever the wind takes me. But if you really want to know, I'm from a place called \"The Internet\". It's a magical land where memes and cat videos roam free, and the Wi-Fi is always strong. It's a beautiful place, you should visit sometime!"
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 199,
        "total_tokens": 302,
        "completion_tokens": 103
    }
}

Note that the prompt_tokens in the output is 199, which matches the conversation token length we just computed!
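
As a cross-check, you can also count tokens locally with the model's own tokenizer. A minimal sketch, assuming the transformers package is installed and reusing the model path from docker-compose.yml; the exact count may differ slightly depending on how special tokens such as BOS are handled:

python
from transformers import AutoTokenizer

# Load the LLaMA-2 tokenizer from the local model directory.
tokenizer = AutoTokenizer.from_pretrained("/root/model/llama2/Llama-2-70b-chat-hf")
print(len(tokenizer(prompt).input_ids))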

Summary

This article showed how to deploy the LLaMA-2 70B model with FastChat, and walked through computing the token length of a prompt as well as of a conversation. I hope it is helpful to readers.

My main takeaway from this exercise: reading the source code really matters.

My personal blog is at https://percent4.github.io/ ; you are welcome to visit.

References

  1. NLP (59) Deploying the Baichuan Large Model with FastChat: https://blog.csdn.net/jclian91/article/details/131650918
  2. FastChat: https://github.com/lm-sys/FastChat