Headless server + Vulkan + Docker issues

This note describes how to fix the problem where Vulkan inside a Docker container cannot detect the GPU:

vulkaninfo only shows llvmpipe, which supports CPU (software) rendering only.

References:

I. First issue

https://github.com/NVIDIA/nvidia-container-toolkit/issues/1472

nvidia-ctk cdi generate 2>nvidia-ctk_cdi_generate.log > nvidia.yaml
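A quick way to check that the generated spec actually covers the NVIDIA Vulkan/EGL ICD files (the files whose absence makes vulkaninfo fall back to llvmpipe). This assumes the host driver installed them in the usual locations, so treat it as a sanity check rather than a required step:

# The spec should contain mount entries for files such as .../icd.d/nvidia_icd.json
# and .../egl_vendor.d/10_nvidia.json, provided the host driver ships them.
grep -E 'vulkan|egl' nvidia.yaml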

II. Second issue

https://github.com/NVIDIA/nvidia-container-toolkit/issues/191#issuecomment-2022154630

I recently was encountering similar problems (see my reply to issue 16 where I list details of my environment) for a headless (i.e. no wayland, no x11) vulkan application running in my organization's internal (openstack powered) cloud. Based on the snippets above, I assume some of you, like me, are not using a GUI either. Here is what I learned:

  • --runtime=nvidia should be combined with --gpus all in order for the vulkan ICDs to be mounted from the host with recent versions of the nvidia-container-toolkit. Without it, you can see from docker inspect -f '{{ .HostConfig.Runtime }}' that runc is being used instead of the nvidia runtime. For some reason, this isn't needed for nvidia-smi to work, possibly due to hooks(?)
  • For headless, we should use EGL instead of GLX implementation since there is no X11
  • The stack deployed by the vulkan-tools package in Ubuntu 22.04 only recognizes the deprecated VK_ICD_FILENAMES environment variable when setting paths to an ICD
  • The path to the needed ICD (/usr/share/glvnd/egl_vendor.d/10_nvidia.json) isn't automatically found by this version's vulkan loader
  • XDG_RUNTIME_DIR and DISPLAY errors can be ignored because we are not using X11
  • The non-default graphics capability is needed
  • You do not need the cuda or the (abandoned?) vulkan nvidia container images. You can get this functionality out of the base Ubuntu 22.04 docker image (and likely others) by installing the (equivalent) vulkan-tools package and setting environment variables at container launch needed by nvidia-container-toolkit (NVIDIA_DRIVER_CAPABILITIES) and the vulkan loader (VK_ICD_FILENAMES).

Example Dockerfile using a headless, third-party sample program that will render a ppm file on the gpu using vulkan as a non-root, limited user (luser). The use of a non-root user is a preference; not required. If you are quick, you can see /home/luser/bin/renderheadless running on the gpu via nvidia-smi on the host:

$ cat Dockerfile
FROM ubuntu:22.04 AS vulkan-sample-dev

ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y gcc g++ make cmake libvulkan-dev libglm-dev curl unzip && apt-get clean

RUN useradd luser
USER luser
WORKDIR /home/luser
RUN curl -L -o master.zip https://github.com/SaschaWillems/Vulkan/archive/refs/heads/master.zip && unzip master.zip && rm master.zip
RUN cmake -DUSE_HEADLESS=ON Vulkan-master && \
    make renderheadless

FROM ubuntu:22.04 AS vulkan-sample-run

ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y vulkan-tools && apt-get clean

ENV VK_ICD_FILENAMES=/usr/share/glvnd/egl_vendor.d/10_nvidia.json 
ENV NVIDIA_DRIVER_CAPABILITIES=graphics

RUN useradd luser
COPY --chown=luser:luser --from=vulkan-sample-dev /home/luser/bin/renderheadless /home/luser/bin/renderheadless
COPY --chown=luser:luser --from=vulkan-sample-dev /home/luser/Vulkan-master/shaders/glsl/renderheadless/ /home/luser/Vulkan-master/shaders/glsl/renderheadless/
USER luser
WORKDIR /home/luser
CMD [ "/home/luser/bin/renderheadless" ]

$ docker build --target vulkan-sample-run -t localhost:vulkan-sample-run .
...
$ docker run --runtime=nvidia --gpus all --name vulkan localhost:vulkan-sample-run
Running headless rendering example
GPU: NVIDIA GeForce GTX 1650 with Max-Q Design
Framebuffer image saved to headless.ppm
Finished. Press enter to terminate...
...
$ docker cp vulkan:/home/luser/headless.ppm .
...
$ eog headless.ppm
...
$ docker run --runtime=nvidia --gpus all --rm localhost:vulkan-sample-run vulkaninfo --summary | grep deviceName
...
	deviceName         = NVIDIA GeForce GTX 1650 with Max-Q Design

III. CUDA image

Reference:

https://blog.csdn.net/FL1623863129/article/details/132275060

IV. Commands to run

1. On the host

Generate the NVIDIA Container Device Interface (CDI) configuration file:

nvidia-ctk cdi generate 2>nvidia-ctk_cdi_generate.log > nvidia.yaml
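The generated spec only takes effect once a CDI-aware runtime can find it. A minimal sketch of installing it, assuming the default CDI search path /etc/cdi and a Docker version with CDI support enabled; the run command in step 3 below uses --runtime=nvidia and --gpus instead of CDI device names, so this part is optional:

# Install the spec where CDI-aware runtimes look for it (assumed default: /etc/cdi).
sudo mkdir -p /etc/cdi
sudo cp nvidia.yaml /etc/cdi/nvidia.yaml

# With Docker's CDI support enabled, a GPU can then be requested by CDI name:
docker run --rm --device nvidia.com/gpu=all ubuntu:22.04 nvidia-smi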

2. Pull the image

docker pull nvcr.io/nvidia/cuda:12.4.0-runtime-ubuntu22.04

3. Start the container

If the machine has only one GPU, change --gpus '"device=3"' to --gpus all and remove all of the --device flags below.

docker run --name=airsim-env \
      --hostname=airsim-env \
      --mac-address=1g:ad:9e:c5:a2:b8 \
      --network=bridge \
      --runtime=nvidia \
      --shm-size=200g \
      --gpus '"device=3"' \
      --device /dev/nvidia3 \
      --device /dev/nvidiactl \
      --device /dev/nvidia-uvm \
      --device /dev/nvidia-uvm-tools \
      --ipc=host \
      --pid=host \
      --privileged \
      -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics,display \
      -e VK_ICD_FILENAMES=/usr/share/glvnd/egl_vendor.d/10_nvidia.json \
      -e XDG_RUNTIME_DIR=/tmp \
      -t \
      nvcr.io/nvidia/cuda:12.4.0-runtime-ubuntu22.04 \
      bash

Explanation:

--gpus (here '"device=3"', or all) is used together with --runtime=nvidia so that the Vulkan ICDs are mounted from the host.

-e NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics,display enables the non-default graphics capability.

-e VK_ICD_FILENAMES=/usr/share/glvnd/egl_vendor.d/10_nvidia.json tells the Vulkan loader which ICD to use.

-e XDG_RUNTIME_DIR=/tmp sets the runtime directory (the related warnings are harmless without X11).
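Before installing anything inside the container, it is worth confirming that the NVIDIA runtime is actually in use and that the EGL ICD was mounted. A quick check on the host, assuming the container name airsim-env from the command above:

# Should print "nvidia"; "runc" means the --runtime flag did not take effect.
docker inspect -f '{{ .HostConfig.Runtime }}' airsim-env

# The EGL ICD pointed to by VK_ICD_FILENAMES should be present, and nvidia-smi should see the GPU.
docker exec airsim-env ls -l /usr/share/glvnd/egl_vendor.d/10_nvidia.json
docker exec airsim-env nvidia-smi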

4. Enter the container

docker exec -it airsim-env bash

5. Install dependencies

Install Vulkan and graphics dependencies inside the container:

apt update
apt install -y \
    vulkan-tools \
    libgl1-mesa-glx \
    libgl1-mesa-dri \
    libegl1 \
    libxext6 \
    xvfb
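xvfb is only needed if the application insists on opening an X display even though the actual rendering happens offscreen. A minimal sketch of wrapping such an application in a virtual display; ./MyApp.sh is a placeholder name, not something from this setup:

# Start the application against a throwaway virtual X server.
# ./MyApp.sh is a placeholder; substitute your own binary or launch script.
xvfb-run -a -s "-screen 0 1280x720x24" ./MyApp.sh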

6. Verify

Check that Vulkan now detects the GPU:

vulkaninfo --summary | grep -E "(deviceName|driverName)"
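With a working setup the summary lists the NVIDIA device instead of llvmpipe; the exact name depends on the hardware, so expect output roughly of this shape:

deviceName         = NVIDIA <your GPU model>
driverName         = NVIDIA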

The GPU is now detected successfully.
