[Bug] RuntimeError: Engine loop has died

Error preconditions

The error occurred after starting the qwen2.5-32b-instruct model with vLLM.

GPU: GeForce RTX 4090 Laptop GPU

Host OS: Windows 11

Runtime environment: WSL2 (Ubuntu 22.04)
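
The original post does not show the exact launch command. As a rough sketch, a vLLM invocation for this setup might look like the following (model name and flags are illustrative, not taken from the post); on a 16 GB laptop GPU a 32B model only fits with CPU offload or quantization, which is what pushes weights into system RAM and swap:

bash
# Hypothetical launch command -- not from the original post.
# --cpu-offload-gb spills part of the model weights into system RAM,
# which drives the heavy RAM/swap usage analyzed below.
vllm serve Qwen/Qwen2.5-32B-Instruct \
    --max-model-len 4096 \
    --gpu-memory-utilization 0.90 \
    --cpu-offload-gb 48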

Error output

bash
INFO 10-22 22:29:31 engine.py:290] Added request chat-993cbe95e73d4a1db5d1e89e433f727a.
ERROR 10-22 22:29:32 client.py:250] RuntimeError('Engine loop has died')
ERROR 10-22 22:29:32 client.py:250] Traceback (most recent call last):
ERROR 10-22 22:29:32 client.py:250]   File "/home/ai/miniconda3/lib/python3.10/site-packages/vllm/engine/multiprocessing/client.py", line 150, in run_heartbeat_loop
ERROR 10-22 22:29:32 client.py:250]     await self._check_success(
ERROR 10-22 22:29:32 client.py:250]   File "/home/ai/miniconda3/lib/python3.10/site-packages/vllm/engine/multiprocessing/client.py", line 314, in _check_success
ERROR 10-22 22:29:32 client.py:250]     raise response
ERROR 10-22 22:29:32 client.py:250] RuntimeError: Engine loop has died
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/responses.py", line 259, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/responses.py", line 255, in wrap
    await func()
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/responses.py", line 232, in listen_for_disconnect
    message = await receive()
  File "/home/ai/miniconda3/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 555, in receive
    await self.message_event.wait()
  File "/home/ai/miniconda3/lib/python3.10/asyncio/locks.py", line 214, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f385017b9d0

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ai/miniconda3/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/home/ai/miniconda3/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
    return await self.app(scope, receive, send)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/applications.py", line 113, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
    raise exc
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
    await self.app(scope, receive, _send)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
    await self.app(scope, receive, send)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/routing.py", line 715, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/routing.py", line 735, in app
    await route.handle(scope, receive, send)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
    await self.app(scope, receive, send)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
    raise exc
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
    await app(scope, receive, sender)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/routing.py", line 74, in app
    await response(scope, receive, send)
  File "/home/ai/miniconda3/lib/python3.10/site-packages/starlette/responses.py", line 252, in __call__
    async with anyio.create_task_group() as task_group:
  File "/home/ai/miniconda3/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 763, in __aexit__
    raise BaseExceptionGroup(
exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)

Solution

The cause was determined to be insufficient memory:

bash
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            15Gi       6.9Gi       8.2Gi        80Mi       435Mi       8.2Gi
Swap:          4.0Gi       4.0Gi       0.0Ki

The output shows 15 GB of total system memory, about 6.9 GB in use, and about 8.2 GB still available.

Swap space totals 4 GB, all of which is already in use; no swap remains available.

Running out of swap severely degrades system performance, and under sustained memory pressure the Linux OOM killer starts terminating processes; if it kills the vLLM engine process, the server reports exactly this "Engine loop has died" error.
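
To confirm that the OOM killer (rather than something else) took down the engine, you can check the kernel log right after the crash:

bash
# Look for OOM-killer activity around the time the engine died
sudo dmesg -T | grep -i -E "out of memory|oom-killer" | tail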

To resize the swap space to match the physical memory (15 GB), follow these steps (a consolidated one-shot version follows the list):

  1. Create a new swap file (if a /swapfile already exists and is active, disable it first with sudo swapoff /swapfile)

    bash
    sudo fallocate -l 15G /swapfile
  2. Set the correct permissions

    bash
    sudo chmod 600 /swapfile
  3. Format the file as swap space

    bash
    sudo mkswap /swapfile
  4. Enable the swap file

    bash
    sudo swapon /swapfile
  5. Confirm that the swap space is active

    bash
    free -h
  6. To make the change permanent, edit /etc/fstab and add the following line:

    bash
    sudo vim /etc/fstab
    /swapfile swap swap defaults 0 0
    :wq
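
As promised above, here are the same six steps as one non-interactive block (a sketch assuming the 15 GB size and /swapfile path used above; tee -a replaces the manual vim edit):

bash
# One-shot version of the steps above
sudo swapoff /swapfile 2>/dev/null || true   # ignore if no old swapfile is active
sudo fallocate -l 15G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile swap swap defaults 0 0' | sudo tee -a /etc/fstab
free -h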

With this, the swap space is set to 15 GB and performance is no longer crippled by swap exhaustion.

If the /etc/fstab edit does not take effect (which can happen under WSL2), an alternative is to put the commands from the first five steps into ~/.bashrc so they run when a shell starts; a guarded sketch follows.
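
A minimal guarded snippet for ~/.bashrc, assuming sudo can run without a password prompt inside WSL (otherwise each new shell will prompt):

bash
# In ~/.bashrc: enable the swapfile if it is not already active
if ! swapon --show | grep -q '/swapfile'; then
    sudo swapon /swapfile
fi

Alternatively, WSL2 has a native setting for this: add swap=15GB under the [wsl2] section of %UserProfile%\.wslconfig on the Windows side, then run wsl --shutdown and reopen the distro.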
