Checking the installed CUDA / cuDNN / TensorRT libraries
1) CUDA-related libraries (cudart / cuda / cublas)
Command:
```bash
ldconfig -p | grep -E 'libcudart|libcuda|libcublas' | head
```
Output:
```text
libcudart.so.12 (libc6,x86-64) => /usr/local/cuda/targets/x86_64-linux/lib/libcudart.so.12
libcudart.so (libc6,x86-64) => /usr/local/cuda/targets/x86_64-linux/lib/libcudart.so
libcudadebugger.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcudadebugger.so.1
libcuda.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcuda.so.1
libcuda.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcuda.so
libcublasLt.so.12 (libc6,x86-64) => /usr/local/cuda/targets/x86_64-linux/lib/libcublasLt.so.12
libcublasLt.so (libc6,x86-64) => /usr/local/cuda/targets/x86_64-linux/lib/libcublasLt.so
libcublas.so.12 (libc6,x86-64) => /usr/local/cuda/targets/x86_64-linux/lib/libcublas.so.12
libcublas.so (libc6,x86-64) => /usr/local/cuda/targets/x86_64-linux/lib/libcublas.so
```
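The soname suffix in this listing already tells you the CUDA runtime major version (here `.so.12`, i.e. CUDA 12.x). A minimal sketch of extracting it; the sample line is copied from the output above, but on a real system you would pipe `ldconfig -p | grep libcudart` in instead:

```shell
# Sample line taken from the ldconfig output above (assumption: CUDA 12 install).
line='libcudart.so.12 (libc6,x86-64) => /usr/local/cuda/targets/x86_64-linux/lib/libcudart.so.12'
# Strip everything except the major version number after "libcudart.so.".
version=$(echo "$line" | sed -n 's/^libcudart\.so\.\([0-9]\+\).*/\1/p')
echo "CUDA runtime major version: $version"
```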
2) TensorRT tool (trtexec)
Command:
```bash
trtexec
```
Output:
```text
bash: trtexec: command not found
```
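`trtexec` missing from `PATH` does not necessarily mean TensorRT is absent: the Debian/Ubuntu TensorRT packages typically ship it under `/usr/src/tensorrt/bin` rather than a `PATH` directory. The location below is an assumption and varies by install method, so the check is guarded:

```shell
# Assumed location for Debian/Ubuntu TensorRT package installs; adjust if different.
TRTEXEC=/usr/src/tensorrt/bin/trtexec
if [ -x "$TRTEXEC" ]; then
  "$TRTEXEC" --help | head -n 3
else
  echo "trtexec not found at $TRTEXEC; try: find / -name trtexec 2>/dev/null"
fi
```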
3) TensorRT runtime libraries (nvinfer / nvonnxparser)
Command:
```bash
ldconfig -p | grep -E 'libnvinfer|libnvonnxparser' | head -n 50
```
Output:
```text
libnvonnxparser.so.10 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvonnxparser.so.10
libnvonnxparser.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvonnxparser.so
libnvinfer_plugin.so.10 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so.10
libnvinfer_plugin.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvinfer_plugin.so
libnvinfer.so.10 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvinfer.so.10
libnvinfer.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libnvinfer.so
```
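As with the CUDA libraries, the soname major number matches the TensorRT major release: `.so.10` here corresponds to a TensorRT 10.x install. A quick way to pull it out:

```shell
# Derive the TensorRT major version from a library soname; the sample value
# is taken from the ldconfig output above.
soname='libnvinfer.so.10'
major=${soname##*.}   # strip everything up to and including the last dot
echo "TensorRT major version: $major"
```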
Installing cuDNN
```bash
sudo apt install libcudnn9-dev-cuda-12
```
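After installing, the cuDNN version can be confirmed from the version header and the dynamic linker cache. The header paths below are the two common locations for Ubuntu package installs (an assumption; adjust if your layout differs):

```shell
# Check common cuDNN header locations for the version macros; the [ -f ] guard
# skips paths that do not exist on this system.
for h in /usr/include/cudnn_version.h /usr/include/x86_64-linux-gnu/cudnn_version.h; do
  if [ -f "$h" ]; then
    grep -E '#define CUDNN_(MAJOR|MINOR|PATCHLEVEL)' "$h"
  fi
done
# Confirm the shared library is registered with the dynamic linker.
ldconfig -p 2>/dev/null | grep libcudnn | head
```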
Building the diffusion node
There are two methods.
Method 1:
```bash
bash build_and_clean.sh /root/.cache/ccache /opt/autoware "--packages-up-to autoware_diffusion_planner"
```
Method 2:
```bash
source /opt/ros/humble/setup.bash
export CCACHE_DIR=/root/.cache/ccache
mkdir -p $CCACHE_DIR
```
```bash
colcon build --merge-install --install-base /opt/autoware --mixin release compile-commands ccache --packages-up-to autoware_diffusion_planner
```
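After either build method finishes, a quick sanity check is to source the install space and ask ROS 2 to resolve the package. The paths follow the commands above; the check is guarded so it degrades gracefully if the build has not run yet:

```shell
# Paths taken from the build commands above; adjust if your install base differs.
INSTALL_BASE=/opt/autoware
PKG=autoware_diffusion_planner
if [ -f "$INSTALL_BASE/setup.bash" ]; then
  # Source the overlay, then print the install prefix of the built package.
  source "$INSTALL_BASE/setup.bash"
  ros2 pkg prefix "$PKG"
else
  echo "install space not found at $INSTALL_BASE (build may not have run yet)"
fi
```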
Docker Compose file
Windows version:
```yaml
services:
  autoware:
    image: ghcr.io/autowarefoundation/autoware:universe-devel-cuda
    network_mode: bridge
    container_name: autoware
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
        limits:
          memory: 30g
          cpus: "4"
    volumes:
      - "F:/autoware.tutorial_vehicle/src:/autoware/src"
      - "F:/autoware.tutorial_vehicle/autoware_data:/autoware_data"
      - "F:/autoware.tutorial_vehicle/autoware_map:/autoware_map"
    tty: true
    stdin_open: true
```
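Assuming the file above is saved as `docker-compose.yaml` (or `compose.yaml`) in the current directory, a typical session starts the container detached and then opens a shell inside it:

```shell
# The container name matches `container_name: autoware` in the compose file above.
CONTAINER=autoware
if command -v docker >/dev/null 2>&1; then
  docker compose up -d
  docker exec -it "$CONTAINER" bash
else
  echo "docker is not installed or not on PATH"
fi
```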
Linux version