Repurposing the Huawei Ascend 310P: an LLM Inference Service

Note: the 310P does not support bf16 or W4A4.

Memory bandwidth is about 200 GB/s; the dual-chip 300I Duo comes in 48 GB and 96 GB variants.

As of now, no Ascend card on the market supports FP8.

Final result after performance tuning:

Qwen3-8B-W8A8

TPS: 15 tokens/s
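That figure is consistent with a simple bandwidth roofline. Decode is memory-bound: each generated token requires reading all the weights once, so with roughly 8 GB of INT8 weights (8B parameters at W8A8) and 200 GB/s of bandwidth, the per-stream ceiling is about 25 tokens/s. A back-of-envelope sketch, assuming those round numbers:

```shell
# Rough decode ceiling for a memory-bound model:
#   ceiling (tokens/s) = memory bandwidth / bytes read per token
awk 'BEGIN {
  bw_gbs     = 200   # 310P memory bandwidth, GB/s (spec above)
  weights_gb = 8     # ~8B params at INT8 (W8A8), ~8 GB read per decode step
  printf "ceiling ~= %.0f tokens/s\n", bw_gbs / weights_gb
}'
```

The measured 15 tokens/s is about 60% of that ceiling, a plausible efficiency for a single stream on this class of hardware.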

On Ascend's PyTorch graph mode and the vllm-ascend source: there are two compilation modes, reduce-overhead and max-autotune. reduce-overhead is only supported on the 910B and 910C, and vllm-ascend hard-codes the reduce-overhead mode.

MindIE + Qwen3-8B-W8A8

1. Launch the container on the host

  docker run -it -d --net=host --shm-size=16g \
    --name mindie-qwen3-8b-310p \
    -w /workspace/MindIE-LLM/examples/atb_models \
    --device=/dev/davinci0:rwm \
    --device=/dev/davinci1:rwm \
    --device=/dev/davinci2:rwm \
    --device=/dev/davinci3:rwm \
    --device=/dev/davinci_manager:rwm \
    --device=/dev/hisi_hdc:rwm \
    --device=/dev/devmm_svm:rwm \
    -v /usr/local/Ascend/driver:/usr/local/Ascend/driver:ro \
    -v /usr/local/dcmi:/usr/local/dcmi:ro \
    -v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi:ro \
    -v /usr/local/sbin:/usr/local/sbin:ro \
    -v /Users/zhaojiacheng/repos/MindIE-LLM:/workspace/MindIE-LLM \
    -v /home/s_zhaojiacheng:/home/s_zhaojiacheng \
    swr.cn-south-1.myhuaweicloud.com/ascendhub/mindie:3.0.0b2-300I-Duo-py311-openeuler24.03-lts \
    bash

  Enter the container:

  docker exec -it mindie-qwen3-8b-310p bash

  2. Prepare the environment inside the container

  cd /workspace/MindIE-LLM
  scripts/qwen3_8b_310p_w8a8sc.sh prepare-env

  3. Download the model from ModelScope

  Recommended: download directly into a regular directory, not just into the default ModelScope cache.

  mkdir -p /home/s_zhaojiacheng/models/Qwen3-8B-w8a8s

  modelscope download \
    --model Eco-Tech/Qwen3-8B-w8a8s-310 \
    --local_dir /home/s_zhaojiacheng/models/Qwen3-8B-w8a8s

  If you already downloaded it earlier into the default cache with:

  modelscope download --model Eco-Tech/Qwen3-8B-w8a8s-310

  then flatten it into a real directory first:

  mkdir -p /home/s_zhaojiacheng/models/Qwen3-8B-w8a8s
  cp -aL \
    /home/s_zhaojiacheng/.cache/modelscope/hub/models/Eco-Tech/Qwen3-8B-w8a8s-310/. \
    /home/s_zhaojiacheng/models/Qwen3-8B-w8a8s/

  Check the files exist:

  ls /home/s_zhaojiacheng/models/Qwen3-8B-w8a8s

  4. Compress W8A8S into W8A8SC

  cd /workspace/MindIE-LLM

  scripts/qwen3_8b_310p_w8a8sc.sh compress \
    --w8a8s-weight /home/s_zhaojiacheng/models/Qwen3-8B-w8a8s \
    --w8a8sc-weight /home/s_zhaojiacheng/models/Qwen3-8B-w8a8sc

  After it finishes, check the output directory exists:

  ls /home/s_zhaojiacheng/models/Qwen3-8B-w8a8sc

  5. Start the OpenAI-compatible server

  cd /workspace/MindIE-LLM

  scripts/qwen3_8b_310p_w8a8sc.sh serve \
    --w8a8sc-weight /home/s_zhaojiacheng/models/Qwen3-8B-w8a8sc \
    --model-name qwen3-8b-w8a8sc \
    --port 1025

  This should start mindie_llm_server and expose the OpenAI-compatible endpoint on 127.0.0.1:1025.
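Startup can take a while; a small poll loop avoids firing requests before the server is up. A sketch, with hypothetical retry count and interval:

```shell
# Poll an endpoint until it responds, or give up after N tries.
wait_ready() {
  local url=$1 tries=${2:-60}
  local i
  for ((i = 0; i < tries; i++)); do
    # -s: silent, -f: fail on HTTP errors, so only a healthy server passes
    if curl -sf "$url" >/dev/null 2>&1; then
      echo ready
      return 0
    fi
    sleep 2
  done
  echo timeout
  return 1
}

# wait_ready http://127.0.0.1:1025/v1/models
```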

  6. Verify the service

  List models:

  curl http://127.0.0.1:1025/v1/models

  Expected model id:

  qwen3-8b-w8a8sc

  Test one inference request:

  curl http://127.0.0.1:1025/v1/chat/completions \
    -H 'Content-Type: application/json' \
    -d '{
      "model": "qwen3-8b-w8a8sc",
      "messages": [
        {"role": "user", "content": "What is deep learning?"}
      ],
      "max_tokens": 128,
      "stream": false
    }'
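To pull just the assistant text out of the JSON response, pipe it through a small helper. A sketch assuming python3 is on PATH inside the container (jq would work equally well):

```shell
# Extract choices[0].message.content from a chat-completions response on stdin.
reply() {
  python3 -c 'import json, sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])'
}

# Usage, once the server from step 5 is running:
# curl -s http://127.0.0.1:1025/v1/chat/completions -H 'Content-Type: application/json' \
#   -d '{"model": "qwen3-8b-w8a8sc", "messages": [{"role": "user", "content": "Hi"}]}' | reply
```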

  Short version

  If you want the shortest working sequence inside the container:

  cd /workspace/MindIE-LLM

  scripts/qwen3_8b_310p_w8a8sc.sh prepare-env

  modelscope download \
    --model Eco-Tech/Qwen3-8B-w8a8s-310 \
    --local_dir /home/s_zhaojiacheng/models/Qwen3-8B-w8a8s

  scripts/qwen3_8b_310p_w8a8sc.sh compress \
    --w8a8s-weight /home/s_zhaojiacheng/models/Qwen3-8B-w8a8s \
    --w8a8sc-weight /home/s_zhaojiacheng/models/Qwen3-8B-w8a8sc

  scripts/qwen3_8b_310p_w8a8sc.sh serve \
    --w8a8sc-weight /home/s_zhaojiacheng/models/Qwen3-8B-w8a8sc \
    --model-name qwen3-8b-w8a8sc \
    --port 1025

  Then test:

  curl http://127.0.0.1:1025/v1/models

  One important detail: for this single-310P flow, do not try to serve Qwen3-8B-w8a8s-310 directly. The supported path is download W8A8S -> compress to W8A8SC -> serve W8A8SC.
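The three-stage flow above can be sketched as one host-side wrapper. Container name and paths follow the earlier steps; the DRY_RUN guard is a hypothetical convenience for previewing the commands before running them:

```shell
# Host-side wrapper: prepare-env -> download -> compress -> serve,
# all via docker exec into the already-running container.
CONTAINER=mindie-qwen3-8b-310p
WORKDIR=/workspace/MindIE-LLM
W8A8S=/home/s_zhaojiacheng/models/Qwen3-8B-w8a8s
W8A8SC=/home/s_zhaojiacheng/models/Qwen3-8B-w8a8sc

run() {
  # With DRY_RUN=1, print each command instead of executing it.
  if [ "${DRY_RUN:-0}" = 1 ]; then echo "+ $*"; else "$@"; fi
}

pipeline() {
  run docker exec -w "$WORKDIR" "$CONTAINER" \
      scripts/qwen3_8b_310p_w8a8sc.sh prepare-env
  run docker exec -w "$WORKDIR" "$CONTAINER" \
      modelscope download --model Eco-Tech/Qwen3-8B-w8a8s-310 --local_dir "$W8A8S"
  run docker exec -w "$WORKDIR" "$CONTAINER" \
      scripts/qwen3_8b_310p_w8a8sc.sh compress \
      --w8a8s-weight "$W8A8S" --w8a8sc-weight "$W8A8SC"
  run docker exec -w "$WORKDIR" "$CONTAINER" \
      scripts/qwen3_8b_310p_w8a8sc.sh serve \
      --w8a8sc-weight "$W8A8SC" --model-name qwen3-8b-w8a8sc --port 1025
}

# Preview the commands first; run for real with a plain `pipeline`.
DRY_RUN=1 pipeline
```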
