[Large Language Models #1] Deploying a Qwen Model with vLLM

1. Download the model

ModelScope: https://modelscope.cn

Hugging Face: https://huggingface.co/Qwen
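
If you prefer to pull the weights from a script rather than the web page, both hubs provide a snapshot_download helper. Below is a minimal sketch using the modelscope package; the model ID and cache directory are illustrative and should be adjusted to the variant you actually want.

# Minimal download sketch; assumes `pip install modelscope`.
# The model ID and cache_dir below are illustrative, not prescriptive.
from modelscope import snapshot_download

model_dir = snapshot_download(
    "Qwen/Qwen2.5-Coder-7B-Instruct",  # repo ID on ModelScope (adjust as needed)
    cache_dir="/root/qwen2.5",         # local directory for the weights
)
print("Model downloaded to:", model_dir)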

2. Set up the Python environment

1. Install Python from the official python.org website (version 3.8 or later is recommended).

2. Install the vllm package, for example with pip install vllm (a quick import check is sketched below).
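
As a quick sanity check that vllm landed in the interpreter you plan to launch with, you can import it and print its version (a minimal sketch, assuming the install above succeeded):

# Confirm vllm is importable and show which version is installed.
import vllm

print("vLLM version:", vllm.__version__)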

3. Launch the model

CUDA_VISIBLE_DEVICES=0,1 /root/vendor/Python3.10.12/bin/python3.10 -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 25010 --served-model-name mymodel --model /root/qwen2.5/qwen2.5-coder-7b-instruct/ --tensor-parallel-size 2 --max-model-len 8096

When output like the following appears, the server has started successfully:

INFO 09-20 15:22:59 model_runner.py:1335] Graph capturing finished in 11 secs.
(VllmWorkerProcess pid=101403) INFO 09-20 15:22:59 model_runner.py:1335] Graph capturing finished in 11 secs.
INFO 09-20 15:22:59 api_server.py:224] vLLM to use /tmp/tmplc42ak3s as PROMETHEUS_MULTIPROC_DIR
WARNING 09-20 15:22:59 serving_embedding.py:190] embedding_mode is False. Embedding API will not work.
INFO 09-20 15:22:59 launcher.py:20] Available routes are:
INFO 09-20 15:22:59 launcher.py:28] Route: /openapi.json, Methods: HEAD, GET
INFO 09-20 15:22:59 launcher.py:28] Route: /docs, Methods: HEAD, GET
INFO 09-20 15:22:59 launcher.py:28] Route: /docs/oauth2-redirect, Methods: HEAD, GET
INFO 09-20 15:22:59 launcher.py:28] Route: /redoc, Methods: HEAD, GET
INFO 09-20 15:22:59 launcher.py:28] Route: /health, Methods: GET
INFO 09-20 15:22:59 launcher.py:28] Route: /tokenize, Methods: POST
INFO 09-20 15:22:59 launcher.py:28] Route: /detokenize, Methods: POST
INFO 09-20 15:22:59 launcher.py:28] Route: /v1/models, Methods: GET
INFO 09-20 15:22:59 launcher.py:28] Route: /version, Methods: GET
INFO 09-20 15:22:59 launcher.py:28] Route: /v1/chat/completions, Methods: POST
INFO 09-20 15:22:59 launcher.py:28] Route: /v1/completions, Methods: POST
INFO 09-20 15:22:59 launcher.py:28] Route: /v1/embeddings, Methods: POST
INFO 09-20 15:22:59 launcher.py:33] Launching Uvicorn with --limit_concurrency 32765. To avoid this limit at the expense of performance run with --disable-frontend-multiprocessing
INFO:     Started server process [101179]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:25010
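
Before wiring up a full client, you can confirm that the OpenAI-compatible endpoints respond. The sketch below assumes the requests package and the host/port from the launch command above:

# Quick check of the running server; the host, port, and the `requests`
# dependency are assumptions based on the launch command above.
import requests

base = "http://localhost:25010"
print("health:", requests.get(f"{base}/health").status_code)  # expect 200
print("models:", requests.get(f"{base}/v1/models").json())    # should list "mymodel"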

4. Test the server with a Python script

from openai import OpenAI

# Initialize the OpenAI-compatible client, pointing at the local vLLM server
client = OpenAI(base_url="http://localhost:25010/v1", api_key="EMPTY")

print("欢迎使用Qwen智能问答机器人!输入'退出'以结束对话。")

while True:
    # Get user input
    user_input = input("You: ")
    if user_input.lower() in ['exit', 'quit', 'bye']:
        print("qwen: Goodbye! Looking forward to our next chat.")
        break

    # Build the message list
    messages = [
        {"role": "system", "content": "You are an intelligent Q&A assistant named 'qwen'."},
        {"role": "user", "content": user_input}
    ]

    try:
        # Send the request and read the reply
        chat_completion = client.chat.completions.create(
            model="mymodel",
            messages=messages,
            # stop=["。"],
            stop=["<|endoftext|>", "<|im_end|>", "<|im_start|>"],
            stream=False,
        )

        # Print the model's reply
        print("qwen:", chat_completion.choices[0].message.content)

    except Exception as e:
        print("出现错误: {e}",e)
        print("请稍后再试或检查您的网络连接及API配置。")