Deploying the ERNIE 4.5 21B model with vLLM on SCNet's heterogeneous Hygon DCUs fails with HIP out of memory (unresolved)

Command used:

vllm serve baidu/ERNIE-4.5-21B-A3B-Base-PT --tensor-parallel-size 4 --trust-remote-code --block-size 8 --max-model-len 4096 --gpu-memory-utilization 0.85 --dtype float --kv_cache_dtype fp8

Error:

torch.OutOfMemoryError: HIP out of memory. Tried to allocate 16.00 MiB. GPU 0 has a total capacity of 63.98 GiB of which 0 bytes is free. Of the allocated memory 63.61 GiB is allocated by PyTorch, and 896.00 KiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_HIP_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
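The numbers line up with `--dtype float` loading the weights in float32. A rough per-GPU estimate (a sketch that ignores activations, the KV cache, and the fp32 profiling pass, and assumes tensor parallelism shards the weights evenly) already shows why a 2-byte dtype is the first thing to change:

```python
# Back-of-envelope per-GPU weight memory for a 21B-parameter model.
# Sketch only: ignores activations, KV cache, and runtime overhead, and
# assumes tensor parallelism shards the weights evenly across GPUs.
GIB = 1024 ** 3

def weight_gib(n_params: float, bytes_per_param: int, tp: int) -> float:
    return n_params * bytes_per_param / tp / GIB

print(f"--dtype float    (fp32), tp=4: {weight_gib(21e9, 4, 4):.1f} GiB/GPU")  # → 19.6
print(f"--dtype float16  (fp16), tp=4: {weight_gib(21e9, 2, 4):.1f} GiB/GPU")  # → 9.8
print(f"--dtype float16  (fp16), tp=2: {weight_gib(21e9, 2, 2):.1f} GiB/GPU")  # → 19.6
```

Even ~20 GiB of fp32 weights per card should fit on paper, so the 63.61 GiB allocated on GPU 0 points at the fp32 activation and profiling buffers on top of the weights, or at the weights not actually being sharded across ranks; either way, a 16-bit dtype halves the baseline.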

Now trying this command:

vllm serve baidu/ERNIE-4.5-21B-A3B-Base-PT --gpu-memory-utilization 0.92 --tensor-parallel-size 2 --max-num-seqs 32 --max-model-len 2000 --dtype float16

Error:

(VllmWorkerProcess pid=31673) INFO 10-15 15:16:56 [model_runner.py:1156] Model loading took 40.3319 GiB and 63.837660 seconds
INFO 10-15 15:16:56 [model_runner.py:1156] Model loading took 40.3348 GiB and 63.986810 seconds
(VllmWorkerProcess pid=31673) /usr/local/lib/python3.10/dist-packages/vllm/attention/backends/rocm_flash_attn.py:1025: UserWarning: 1Torch was not compiled with memory efficient attention. (Triggered internally at /home/pytorch/aten/src/ATen/native/transformers/hip/sdp_utils.cpp:627.)
(VllmWorkerProcess pid=31673)   sub_out = torch.nn.functional.scaled_dot_product_attention(
/usr/local/lib/python3.10/dist-packages/vllm/attention/backends/rocm_flash_attn.py:1025: UserWarning: 1Torch was not compiled with memory efficient attention. (Triggered internally at /home/pytorch/aten/src/ATen/native/transformers/hip/sdp_utils.cpp:627.)
  sub_out = torch.nn.functional.scaled_dot_product_attention(
(VllmWorkerProcess pid=31673) ERROR 10-15 15:17:04 [multiproc_worker_utils.py:238] Exception in worker VllmWorkerProcess while processing method determine_num_available_blocks.
(VllmWorkerProcess pid=31673) ERROR 10-15 15:17:04 [multiproc_worker_utils.py:238] Traceback (most recent call last):
(VllmWorkerProcess pid=31673) ERROR 10-15 15:17:04 [multiproc_worker_utils.py:238]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/multiproc_worker_utils.py", line 232, in _run_worker_process
ERROR 10-15 15:17:04 [engine.py:453] expected mat1 and mat2 to have the same dtype, but got: float != c10::Half
ERROR 10-15 15:17:04 [engine.py:453] Traceback (most recent call last):
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 441, in run_mp_engine
ERROR 10-15 15:17:04 [engine.py:453]     engine = MQLLMEngine.from_vllm_config(
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 133, in from_vllm_config
ERROR 10-15 15:17:04 [engine.py:453]     return cls(
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/multiprocessing/engine.py", line 87, in __init__
ERROR 10-15 15:17:04 [engine.py:453]     self.engine = LLMEngine(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 280, in __init__
ERROR 10-15 15:17:04 [engine.py:453]     self._initialize_kv_caches()
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 427, in _initialize_kv_caches
ERROR 10-15 15:17:04 [engine.py:453]     self.model_executor.determine_num_available_blocks())
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 104, in determine_num_available_blocks
ERROR 10-15 15:17:04 [engine.py:453]     results = self.collective_rpc("determine_num_available_blocks")
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 334, in collective_rpc
ERROR 10-15 15:17:04 [engine.py:453]     return self._run_workers(method, *args, **(kwargs or {}))
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/executor/mp_distributed_executor.py", line 185, in _run_workers
ERROR 10-15 15:17:04 [engine.py:453]     driver_worker_output = run_method(self.driver_worker, sent_method,
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/utils.py", line 2595, in run_method
ERROR 10-15 15:17:04 [engine.py:453]     return func(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 10-15 15:17:04 [engine.py:453]     return func(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 249, in determine_num_available_blocks
ERROR 10-15 15:17:04 [engine.py:453]     self.model_runner.profile_run()
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 10-15 15:17:04 [engine.py:453]     return func(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1253, in profile_run
ERROR 10-15 15:17:04 [engine.py:453]     self._dummy_run(max_num_batched_tokens, max_num_seqs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1379, in _dummy_run
ERROR 10-15 15:17:04 [engine.py:453]     self.execute_model(model_input, kv_caches, intermediate_tensors)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 10-15 15:17:04 [engine.py:453]     return func(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1796, in execute_model
ERROR 10-15 15:17:04 [engine.py:453]     hidden_or_intermediate_states = model_executable(
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/compilation/decorators.py", line 172, in __call__
ERROR 10-15 15:17:04 [engine.py:453]     return self.forward(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/transformers.py", line 422, in forward
ERROR 10-15 15:17:04 [engine.py:453]     model_output = self.model(input_ids, positions, intermediate_tensors,
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
ERROR 10-15 15:17:04 [engine.py:453]     return self._call_impl(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
ERROR 10-15 15:17:04 [engine.py:453]     return forward_call(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/transformers.py", line 329, in forward
ERROR 10-15 15:17:04 [engine.py:453]     hidden_states = self.model(
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
ERROR 10-15 15:17:04 [engine.py:453]     return self._call_impl(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
ERROR 10-15 15:17:04 [engine.py:453]     return forward_call(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/transformers/utils/generic.py", line 1064, in wrapper
ERROR 10-15 15:17:04 [engine.py:453]     outputs = func(self, *args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/transformers/models/ernie4_5_moe/modeling_ernie4_5_moe.py", line 558, in forward
ERROR 10-15 15:17:04 [engine.py:453]     hidden_states = decoder_layer(
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_layers.py", line 94, in __call__
ERROR 10-15 15:17:04 [engine.py:453]     return super().__call__(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
ERROR 10-15 15:17:04 [engine.py:453]     return self._call_impl(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
ERROR 10-15 15:17:04 [engine.py:453]     return forward_call(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/transformers/utils/deprecation.py", line 172, in wrapped_func
ERROR 10-15 15:17:04 [engine.py:453]     return func(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/transformers/models/ernie4_5_moe/modeling_ernie4_5_moe.py", line 459, in forward
ERROR 10-15 15:17:04 [engine.py:453]     hidden_states = self.mlp(hidden_states)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
ERROR 10-15 15:17:04 [engine.py:453]     return self._call_impl(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
ERROR 10-15 15:17:04 [engine.py:453]     return forward_call(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/transformers/models/ernie4_5_moe/modeling_ernie4_5_moe.py", line 344, in forward
ERROR 10-15 15:17:04 [engine.py:453]     router_logits = self.gate(hidden_states.float())
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
ERROR 10-15 15:17:04 [engine.py:453]     return self._call_impl(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
ERROR 10-15 15:17:04 [engine.py:453]     return forward_call(*args, **kwargs)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/linear.py", line 352, in forward
ERROR 10-15 15:17:04 [engine.py:453]     output = self.quant_method.apply(self, x, bias)
ERROR 10-15 15:17:04 [engine.py:453]   File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/linear.py", line 222, in apply
ERROR 10-15 15:17:04 [engine.py:453]     return dispatch_unquantized_gemm()(x, layer.weight, bias)
ERROR 10-15 15:17:04 [engine.py:453] RuntimeError: expected mat1 and mat2 to have the same dtype, but got: float != c10::Half
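The root cause is visible at the bottom of the trace: `modeling_ernie4_5_moe.py` line 344 upcasts activations with `hidden_states.float()` before the MoE router gate, while vLLM's transformers fallback keeps the gate weight in the serve dtype (float16 here), so the GEMM receives float32 × float16 operands. A pure-Python sketch of the check that fires (`Tensor` and `linear` are illustrative stand-ins, not torch APIs):

```python
# Stand-in types mimicking the dtype bookkeeping torch does before a matmul;
# an illustration of the failure mode, not real torch code.
class Tensor:
    def __init__(self, dtype: str):
        self.dtype = dtype

    def float(self) -> "Tensor":
        # mirrors hidden_states.float(): upcast activations to float32
        return Tensor("float32")

def linear(x: Tensor, weight: Tensor) -> Tensor:
    # torch's addmm refuses mixed-dtype operands with a message of this shape
    if x.dtype != weight.dtype:
        raise RuntimeError(
            "expected mat1 and mat2 to have the same dtype, "
            f"but got: {x.dtype} != {weight.dtype}")
    return Tensor(x.dtype)

hidden_states = Tensor("float16")  # model served with --dtype float16
gate_weight = Tensor("float16")    # vLLM keeps the router gate weight in fp16
try:
    # modeling_ernie4_5_moe.py: router_logits = self.gate(hidden_states.float())
    linear(hidden_states.float(), gate_weight)
except RuntimeError as e:
    print(e)  # → expected mat1 and mat2 to have the same dtype, but got: float32 != float16
```

Note this mismatch is about dtypes, not memory, which is why shrinking `--max-model-len` or `--gpu-memory-utilization` has no effect on it.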
INFO 10-15 15:17:04 [multiproc_worker_utils.py:124] Killing local vLLM worker processes
Traceback (most recent call last):
  File "/usr/local/bin/vllm", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/cli/main.py", line 53, in main
    args.dispatch_function(args)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/cli/serve.py", line 27, in cmd
    uvloop.run(run_server(args))
  File "/usr/local/lib/python3.10/dist-packages/uvloop/__init__.py", line 82, in run
    return loop.run_until_complete(wrapper())
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/usr/local/lib/python3.10/dist-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/api_server.py", line 1078, in run_server
    async with build_async_engine_client(args) as engine_client:
  File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/api_server.py", line 146, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
    return await anext(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/api_server.py", line 269, in build_async_engine_client_from_engine_args
    raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
root@notebook-1978259446016311297-ac7sc1ejvp-24335:/# /usr/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
/usr/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
 
root@notebook-1978259446016311297-ac7sc1ejvp-24335:/# echo $HIP_VISIBLE_DEVICES

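The empty echo above means `HIP_VISIBLE_DEVICES` was never set, so device selection is left entirely to the runtime. Explicitly pinning the DCUs removes one variable (a sketch: the device indices are assumptions, so confirm the visible devices with `hy-smi` or `rocm-smi` first):

```shell
# Pin two DCUs explicitly before launching; adjust indices to match hy-smi output.
export HIP_VISIBLE_DEVICES=0,1
# Fragmentation workaround suggested by the earlier OOM message:
export PYTORCH_HIP_ALLOC_CONF=expandable_segments:True
vllm serve baidu/ERNIE-4.5-21B-A3B-Base-PT \
  --tensor-parallel-size 2 --dtype bfloat16 \
  --max-model-len 2000 --gpu-memory-utilization 0.9
```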
Try switching to bfloat16:

vllm serve baidu/ERNIE-4.5-21B-A3B-Base-PT --gpu-memory-utilization 0.92 --tensor-parallel-size 2 --max-num-seqs 32 --max-model-len 2000 --dtype bfloat16

Now a new error appears:

(VllmWorkerProcess pid=2441) INFO 10-15 15:23:03 [model_runner.py:1156] Model loading took 40.3319 GiB and 24.727362 seconds
/usr/local/lib/python3.10/dist-packages/vllm/attention/backends/rocm_flash_attn.py:1025: UserWarning: 1Torch was not compiled with memory efficient attention. (Triggered internally at /home/pytorch/aten/src/ATen/native/transformers/hip/sdp_utils.cpp:627.)
  sub_out = torch.nn.functional.scaled_dot_product_attention(
(VllmWorkerProcess pid=2441) /usr/local/lib/python3.10/dist-packages/vllm/attention/backends/rocm_flash_attn.py:1025: UserWarning: 1Torch was not compiled with memory efficient attention. (Triggered internally at /home/pytorch/aten/src/ATen/native/transformers/hip/sdp_utils.cpp:627.)

Trying the DeepSeek 7B model instead:

vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-7B  --dtype float16  --trust-remote-code --gpu-memory-utilization 0.98 --tensor-parallel-size 2  --max-num-seqs 32 --max-model-len 40000

And look, this one runs just fine:

INFO 10-15 15:38:13 [serving_completion.py:61] Using default completion sampling params from model: {'temperature': 0.6, 'top_p': 0.95}
INFO 10-15 15:38:13 [api_server.py:1090] Starting vLLM API server on http://0.0.0.0:8000
INFO 10-15 15:38:13 [launcher.py:28] Available routes are:
INFO 10-15 15:38:13 [launcher.py:36] Route: /openapi.json, Methods: GET, HEAD

VRAM usage reaches 90%. So is the problem that two cards aren't enough and four are needed? But the four-card run didn't work either!
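Whether two cards suffice comes down to what is left for the KV cache after the weights. A back-of-envelope estimate (the layer and head counts below are assumptions for a Qwen2-7B-style config with grouped-query attention, not values read from either checkpoint) shows why the 7B run tolerates `--max-model-len 40000`:

```python
# KV-cache bytes per sequence = 2 (K and V) × layers × kv_heads × head_dim
# × seq_len × bytes_per_element. The 28 / 4 / 128 figures are assumptions
# for a Qwen2-7B-style GQA config, not read from the checkpoint.
def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 seq_len: int, bytes_per_elem: int = 2) -> float:
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem / 1024**3

print(f"{kv_cache_gib(28, 4, 128, 40000):.2f} GiB per 40k-token sequence")  # → 2.14
```

A couple of GiB per long sequence fits easily next to roughly 7 GiB of fp16 7B weights per rank, while the ERNIE run's 40.3 GiB of weights per rank (from its loading log) leaves little headroom on a 64 GiB card before any KV cache is allocated.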

Not solved: the 1-card, 2-card, and 4-card runs all failed.
