Qwen Inference on an Alibaba Cloud A10

Hardware Configuration

bash
vCPU: 32 cores
Memory: 188 GiB
Bandwidth: 5 Mbps
GPU: NVIDIA A10, 24 GB

CUDA Installation

bash
wget https://developer.download.nvidia.com/compute/cuda/12.1.0/local_installers/cuda-repo-rhel7-12-1-local-12.1.0_530.30.02-1.x86_64.rpm
sudo rpm -i cuda-repo-rhel7-12-1-local-12.1.0_530.30.02-1.x86_64.rpm
sudo yum clean all
sudo yum -y install nvidia-driver-latest-dkms
sudo yum -y install cuda


# cuDNN
wget https://developer.download.nvidia.com/compute/cudnn/9.0.0/local_installers/cudnn-local-repo-rhel7-9.0.0-1.0-1.x86_64.rpm
sudo rpm -i cudnn-local-repo-rhel7-9.0.0-1.0-1.x86_64.rpm
sudo yum clean all
sudo yum -y install cudnn
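After installing (and rebooting so the new driver loads), it is worth sanity-checking that both the driver and the toolkit are visible. A minimal check, assuming the default CUDA 12.1 install path:

```shell
# Driver check: should print a table listing the A10 and a CUDA 12.x version
nvidia-smi || echo "driver not loaded yet - try rebooting"

# Toolkit check: should report release 12.1
/usr/local/cuda-12.1/bin/nvcc --version || echo "nvcc not found - check the install path"
```

If `nvcc` is not found, the toolkit may be installed under a different prefix, or `/usr/local/cuda/bin` has not been added to `PATH` yet.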

Anaconda

bash
chmod +x Anaconda3-2022.10-Linux-x86_64.sh
./Anaconda3-2022.10-Linux-x86_64.sh
# base environment: Python 3.9

PyTorch

bash
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia

env_test.py

python
import torch  # imports successfully if PyTorch is installed
print(torch.cuda.is_available())  # is CUDA usable?
print(torch.cuda.device_count())  # number of available CUDA devices
print(torch.version.cuda)  # CUDA version PyTorch was built against

bash
pip install transformers==4.32.0 accelerate tiktoken einops scipy transformers_stream_generator==0.0.4 peft deepspeed
git clone https://github.com/Dao-AILab/flash-attention 
cd flash-attention && pip install .
pip install csrc/layer_norm
pip install csrc/rotary
pip install modelscope
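After the installs above, a quick import check confirms the packages are visible to the current environment. This is a small sketch; flash_attn is optional, and Qwen falls back to a slower attention path without it:

```python
import importlib.util

# Packages installed in the steps above; flash_attn is optional
for mod in ("transformers", "modelscope", "peft", "deepspeed", "flash_attn"):
    found = importlib.util.find_spec(mod) is not None
    print(f"{mod}: {'ok' if found else 'MISSING'}")
```

Anything reported MISSING here will surface later as an ImportError when loading the model, so it is cheaper to catch it now.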

Issues:

1. subprocess.CalledProcessError: Command '['which', 'g++']' returned non-zero exit status 1.

Fix:

bash
yum install make automake gcc gcc-c++ kernel-devel
yum group install "Development Tools" "Development Libraries"

2. RuntimeError: Error compiling objects for extension

Fix: the PyTorch and CUDA versions do not match; reinstall a matching CUDA or PyTorch build.

3. nvidia-smi: Failed to initialize NVML: Driver/library version mismatch

Fix:

bash
yum remove "nvidia-*"
# then reinstall CUDA 12.1

4. WARNING:root:Some parameters are on the meta device device because they were offloaded to the cpu.

Insufficient memory: the whole model does not fit on the GPU, so some weights were offloaded to the CPU.
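To see which modules were offloaded, you can inspect `model.hf_device_map`, which accelerate populates when `device_map="auto"` is used. A small helper for summarizing it; the example map below is hypothetical:

```python
def summarize_device_map(device_map):
    """Count how many modules were placed on each device (GPU index, 'cpu', or 'disk')."""
    counts = {}
    for name, device in device_map.items():
        counts[str(device)] = counts.get(str(device), 0) + 1
    return counts

# In practice: summarize_device_map(model.hf_device_map)
# Hypothetical map for illustration:
print(summarize_device_map({"transformer.h.0": 0, "transformer.h.1": 0, "transformer.h.2": "cpu"}))
```

Entries on "cpu" explain the warning; the options are a smaller or quantized model, or a GPU with more memory.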

Test:

python
from modelscope import AutoModelForCausalLM, AutoTokenizer
from modelscope import GenerationConfig

# Note: The default behavior now has injection attack prevention off.
# trust_remote_code=True means you trust the remote model repository and allow its bundled code to run
tokenizer = AutoTokenizer.from_pretrained("qwen/Qwen-14B", trust_remote_code=True)

# use bf16
# model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-14B", device_map="auto", trust_remote_code=True, bf16=True).eval()
# use fp16
# model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-14B", device_map="auto", trust_remote_code=True, fp16=True).eval()
# use cpu only
# model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-14B", device_map="cpu", trust_remote_code=True).eval()
# use auto mode, automatically select precision based on the device.
model = AutoModelForCausalLM.from_pretrained("qwen/Qwen-14B", device_map="auto", trust_remote_code=True).eval()

# Specify hyperparameters for generation. But if you use transformers>=4.32.0, there is no need to do this.
# model.generation_config = GenerationConfig.from_pretrained("Qwen/Qwen-14B", trust_remote_code=True)

inputs = tokenizer('蒙古国的首都是乌兰巴托(Ulaanbaatar)\n冰岛的首都是雷克雅未克(Reykjavik)\n埃塞俄比亚的首都是', return_tensors='pt')
inputs = inputs.to(model.device)
pred = model.generate(**inputs)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
# 蒙古国的首都是乌兰巴托(Ulaanbaatar)\n冰岛的首都是雷克雅未克(Reykjavik)\n埃塞俄比亚的首都是亚的斯亚贝巴(Addis Ababa)...