CUDA-BEVFusion Ready-to-Use Docker Image Guide
1. Background
To avoid tedious environment setup, this guide provides a Docker image with a pre-configured environment that lets developers get started with CUDA-BEVFusion quickly. The image bundles the complete training and inference dependencies, so you can begin model training, quantization, and deployment without building an environment from scratch.
2. Usage Notes
Please review the following key points before using the image:
- Dual-container design: two separate Docker containers are provided:
  - bevfusion_training:v1: model training (PyTorch based)
  - bevfusion_inference:v1: TensorRT inference (ONNX and TensorRT based)
- Project directory: the CUDA-BEVFusion source code lives at /opt/CUDA-BEVFusion inside the containers.
- Data sharing: the training and inference containers exchange data (datasets, model weights, intermediate files, etc.) through the host-mounted /app directory.
- Key parameters:
  - sweeps_num: if your input data contains only key frames (no temporal sweep frames), set sweeps_num to 0; otherwise invalid frames are introduced and model accuracy drops.
  - SENSORS_OVERRIDE: when using a custom dataset whose sensor names or input order differ from nuScenes, set this environment variable to enable the custom data-handling logic.
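Changing sweeps_num is just a YAML edit (the file configs/nuscenes/default.yaml is referenced later in this guide). A minimal, self-contained sketch; the sed invocation is illustrative and operates here on a fabricated config fragment rather than the real file:

```shell
# Illustrative only: force every sweeps_num entry in a config copy to 0.
# A tiny fake config fragment stands in for configs/nuscenes/default.yaml.
CFG=$(mktemp)
printf 'dataset:\n  sweeps_num: 9\neval:\n  sweeps_num: 9\n' > "$CFG"
# Key-frame-only data: set sweeps_num to 0 everywhere.
sed -i 's/sweeps_num: *[0-9]\+/sweeps_num: 0/' "$CFG"
grep -n 'sweeps_num' "$CFG"   # both entries now read "sweeps_num: 0"
```

Point the same sed at configs/nuscenes/default.yaml inside the container to switch modes.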
3. Test Environment and Performance (RTX 3090)
3.1 Results on the nuScenes v1.0-mini dataset
| Configuration | mAP | car AP | pedestrian AP | Latency (ms) |
|---|---|---|---|---|
| PyTorch (baseline) | 0.8551 | 0.908 | 0.955 | |
| TensorRT INT8 | 0.8517 | 0.906 | 0.952 | 14 |
| TensorRT FP16 | 0.8555 | 0.908 | 0.955 | 17 |
3.2 Fine-tuning results on a custom dataset
| Configuration | mAP | car AP | pedestrian AP | Latency (ms) |
|---|---|---|---|---|
| PyTorch (before fine-tuning) | 0.1222 | 0.803 | 0.398 | |
| PyTorch (after fine-tuning) | 0.7720 | 0.972 | 0.929 | |
| TensorRT INT8 (after fine-tuning) | 0.7697 | 0.967 | 0.922 | 14 |
| TensorRT FP16 (after fine-tuning) | 0.7715 | 0.972 | 0.929 | 17 |
4. Procedure
All steps below assume a Docker environment; make sure the NVIDIA Container Toolkit is installed.
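A quick, best-effort way to verify the toolkit before starting the containers (it only inspects the local docker daemon and degrades gracefully when docker is absent):

```shell
# Best-effort check that the NVIDIA container runtime is available.
if ! command -v docker >/dev/null 2>&1; then
  STATUS="docker CLI not found"
elif docker info 2>/dev/null | grep -qi 'nvidia'; then
  STATUS="NVIDIA runtime detected"
else
  STATUS="NVIDIA runtime not detected (install nvidia-container-toolkit)"
fi
echo "$STATUS"
```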
4.1 Testing on nuScenes v1.0-mini (optional)
4.1.1 Download the pretrained model
```bash
# Baidu Netdisk link: https://pan.baidu.com/s/1_6IJTzKlJ8H62W5cUPiSbA?pwd=g6b4
# After downloading, place model.zip in the shared directory (e.g. /app)
mv model.zip <docker-shared-dir>
```
4.1.2 Download and prepare the nuScenes mini dataset (3.9 GB)
```bash
cd <docker-shared-dir>
wget -O v1.0-mini.tgz https://www.nuscenes.org/data/v1.0-mini.tgz
# Create the target directory and extract
mkdir nuscenes
tar -xf v1.0-mini.tgz -C nuscenes
```
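After extraction, the nuscenes directory should contain the standard top-level folders. A small self-contained sketch of such a check; the mock directory below only simulates the expected layout, so substitute your real <docker-shared-dir>/nuscenes in practice:

```shell
# Verify the expected nuScenes top-level layout (maps/samples/sweeps/v1.0-mini).
check_nuscenes_layout() {
  root="$1"
  for d in maps samples sweeps v1.0-mini; do
    [ -d "$root/$d" ] || { echo "missing: $d"; return 1; }
  done
  echo "layout OK"
}

# Demo on a mock tree; point at the real dataset directory in practice.
MOCK=$(mktemp -d)
mkdir -p "$MOCK/maps" "$MOCK/samples" "$MOCK/sweeps" "$MOCK/v1.0-mini"
RESULT=$(check_nuscenes_layout "$MOCK")
echo "$RESULT"   # prints "layout OK"
```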
4.1.3 Training / testing (PyTorch)
```bash
cd <docker-shared-dir>
# Start the training container
docker stop bevfusion_train
docker rm bevfusion_train
docker run --gpus all --shm-size=128g -itd -e NVIDIA_VISIBLE_DEVICES=all \
--name bevfusion_train --hostname bevfusion-train --privileged --net=host \
-v $PWD:/app -w /app bevfusion_training:v1 /bin/bash
docker start bevfusion_train
# Enter the container
docker exec -ti bevfusion_train bash
```
Run the following steps inside the container:
```bash
# Go to the project root
cd /opt/CUDA-BEVFusion/bevfusion
# Clear the custom-dataset flag (use the native nuScenes format)
unset SENSORS_OVERRIDE
# Create the dataset symlink
rm -rf data
mkdir data
ln -s /app/nuscenes data/
# Build the dataset index
python tools/create_data.py nuscenes --root-path ./data/nuscenes \
--out-dir ./data/nuscenes --extra-tag nuscenes --version v1.0-mini
# Evaluate the accuracy of the pretrained model
python tools/test.py \
configs/nuscenes/det/transfusion/secfpn/camera+lidar/resnet50/convfuser.yaml \
/app/model/resnet50/bevfusion-det.pth --eval bbox
```
4.1.4 Fine-tune the model (optional)
```bash
# Run fine-tuning inside the container
rm -rf /app/train_nuscenes_result
torchpack dist-run -np 1 python tools/train.py \
configs/nuscenes/det/transfusion/secfpn/camera+lidar/resnet50/convfuser.yaml \
--load_from /app/model/resnet50/bevfusion-det.pth --run-dir /app/train_nuscenes_result
# Overwrite the original weights with the fine-tuned checkpoint
cp -L /app/train_nuscenes_result/latest.pth /app/model/resnet50/bevfusion-det.pth
# Visualize predictions (generates an image sequence)
rm -rf /app/vis_nuscenes_result
torchpack dist-run -np 1 python tools/visualize.py \
configs/nuscenes/det/transfusion/secfpn/camera+lidar/resnet50/convfuser.yaml \
--mode pred \
--bbox-score 0.2 \
--out-dir /app/vis_nuscenes_result \
--checkpoint /app/model/resnet50/bevfusion-det.pth
```
View the training curves (optional, inside the container):
```bash
cd /app/train_nuscenes_result
tensorboard --logdir tf_logs --host 0.0.0.0
# Browse to http://<host IP>:6006 (the container uses --net=host)
```
Generate a side-by-side comparison video (dataset vs. predictions):
```bash
cd /opt/CUDA-BEVFusion/bevfusion
# Render the annotated dataset, scene by scene, into nuscenes_all_scenes.mp4
python nuscenes2video.py data/
# Render the visualization images saved above into merge.mp4
python merge_pred.py /app/vis_nuscenes_result
# Stack the two videos vertically and add text labels
ffmpeg -y -i "nuscenes_all_scenes.mp4" \
-i "merge.mp4" \
-filter_complex \
"[0:v]drawtext=text='Dataset':x=10:y=10:fontsize=100:fontcolor=white[v0]; \
[1:v]drawtext=text='BevFusion Pred':x=10:y=10:fontsize=100:fontcolor=white[v1]; \
[v0][v1]vstack=2,scale=1920:1080[vout]" \
-map "[vout]" -c:v libx264 -preset fast -crf 23 /app/nuscenes_output.mp4
rm -f nuscenes_all_scenes.mp4 merge.mp4
```
Accuracy with sweeps_num=9:
```text
configs/nuscenes/default.yaml:68: sweeps_num: 9
configs/nuscenes/default.yaml:200: sweeps_num: 9
```
```text
mAP: 0.8544
mATE: 0.1745
mASE: 0.2072
mAOE: 0.1472
mAVE: 0.2529
mAAE: 0.2787
NDS: 0.8211
Eval time: 4.5s
Per-class results:
Object Class    AP      ATE     ASE     AOE     AVE     AAE
car     0.908   0.161   0.140   0.066   0.293   0.175
truck   0.759   0.189   0.138   0.046   0.169   0.385
bus     0.868   0.286   0.155   0.028   0.484   0.182
trailer 0.960   0.179   0.333   0.019   0.149   0.975
construction_vehicle    0.947   0.182   0.188   0.233   0.026   0.301
pedestrian      0.954   0.113   0.246   0.298   0.217   0.086
motorcycle      0.799   0.190   0.238   0.325   0.205   0.057
bicycle 0.541   0.198   0.286   0.266   0.480   0.069
traffic_cone    0.932   0.090   0.193   nan     nan     nan
barrier 0.877   0.157   0.155   0.043   nan     nan
```
Accuracy with sweeps_num=0:
```text
configs/nuscenes/default.yaml:68: sweeps_num: 0
configs/nuscenes/default.yaml:200: sweeps_num: 0
```
```text
mAP: 0.7368
mATE: 0.2303
mASE: 0.2199
mAOE: 0.5045
mAVE: 1.5011
mAAE: 0.4017
NDS: 0.6327
Eval time: 4.5s
Per-class results:
Object Class    AP      ATE     ASE     AOE     AVE     AAE
car     0.865   0.182   0.152   0.118   2.129   0.226
truck   0.583   0.292   0.153   0.078   1.212   0.455
bus     0.746   0.343   0.155   0.135   1.953   0.372
trailer 0.921   0.391   0.312   0.012   2.534   0.980
construction_vehicle    0.867   0.183   0.179   1.128   0.032   0.301
pedestrian      0.863   0.130   0.271   1.265   0.865   0.515
motorcycle      0.560   0.210   0.257   0.654   1.722   0.090
bicycle 0.278   0.249   0.334   1.102   1.562   0.275
traffic_cone    0.864   0.115   0.227   nan     nan     nan
barrier 0.820   0.210   0.157   0.049   nan     nan
```
4.1.5 Export the ONNX models
Note: this step is performed inside the training container.
```bash
# Still inside the training container
cd /opt/CUDA-BEVFusion
unset SENSORS_OVERRIDE # make sure the default nuScenes configuration is used
# Create the dataset symlink
rm -rf data
mkdir data
ln -s /app/nuscenes data/
# Create the model symlink
rm -f model
ln -s /app/model .
# Export ONNX (FP16 and INT8 quantized models)
./run_export.sh model/resnet50/bevfusion-det.pth
# Move the generated files into the model directories
rm -f model/resnet50int8/*.onnx
rm -f model/resnet50int8/*.pth
mv qat/onnx_int8/* model/resnet50int8/
mv qat/ckpt/bevfusion_ptq.pth model/resnet50int8/
rm -f model/resnet50/*.onnx
mv qat/onnx_fp16/*.onnx model/resnet50/
```
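Before building the engines it is worth confirming that the moves above actually populated the model directories. A hypothetical sanity-check sketch, demonstrated on a mock directory with made-up file names so it is self-contained; in the container, point count_onnx at model/resnet50 and model/resnet50int8:

```shell
# Count ONNX files in a directory; the engine build expects at least one.
count_onnx() { ls "$1"/*.onnx 2>/dev/null | wc -l; }

# Demo on a mock directory with fabricated names (real names come from run_export.sh).
MOCKDIR=$(mktemp -d)
touch "$MOCKDIR/camera.backbone.onnx" "$MOCKDIR/head.bbox.onnx"
N=$(count_onnx "$MOCKDIR")
echo "$N onnx file(s) found"
```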
4.1.6 TensorRT inference
Start the inference container:
```bash
cd <docker-shared-dir>
docker stop bevfusion_infer
docker rm bevfusion_infer
docker run --gpus all --shm-size=128g -itd -e NVIDIA_VISIBLE_DEVICES=all \
--name bevfusion_infer --hostname bevfusion-infer --privileged --net=host \
-v $PWD:/app -w /app bevfusion_inference:v1 /bin/bash
docker start bevfusion_infer
docker exec -ti bevfusion_infer bash
```
Inside the inference container:
```bash
cd /opt/CUDA-BEVFusion
unset SENSORS_OVERRIDE
# Symlink the model weights
rm -f model
ln -s /app/model .
# Build the INT8 TensorRT engine
rm -rf model/resnet50int8/build
export DEBUG_MODEL=resnet50int8
export DEBUG_PRECISION=int8
bash tool/build_trt_engine.sh
# Build the FP16 TensorRT engine
rm -rf model/resnet50/build
export DEBUG_MODEL=resnet50
export DEBUG_PRECISION=fp16
bash tool/build_trt_engine.sh
# Run INT8 inference
python tool/trt_infer.py resnet50int8 int8 /app/trt_infer_result_int8.pkl /app/dump/
# Run FP16 inference
python tool/trt_infer.py resnet50 fp16 /app/trt_infer_result_fp16.pkl /app/dump/
```
Inference prints logs similar to the following (example):
```text
resnet50int8 int8 trt_infer_result_int8.pkl /app/dump/
images shape: 6, first shape: (900, 1600, 3)
points shape: (248834, 5)
gt_bboxes shape: (66, 9)
camera_intrinsics shape: (1, 6, 4, 4)
camera2lidar shape: (1, 6, 4, 4)
lidar2image shape: (1, 6, 4, 4)
img_aug_matrix shape: (1, 6, 4, 4)
Top-5 detections (sorted by score):
[[ 13.613  -9.262  -1.482   0.769   0.811   1.736   1.904  -0.044  -0.025   8.      0.809]
 [  6.15   -9.375  -1.521   2.132   0.67    0.955   1.647  -0.      0.      5.      0.778]
 [  8.213  11.588  -0.939   2.079   0.662   1.027   1.563  -0.      0.      5.      0.755]
 [ -4.387  15.375   0.254   2.894  10.502   3.26    3.133  -0.      0.      1.      0.729]
 [ -1.537 -15.656  -1.352   0.81    0.819   1.803  -1.532   0.     -0.001   8.      0.673]]
infer time: 14.088 ms
----
resnet50 fp16 trt_infer_result_fp16.pkl /app/dump/
images shape: 6, first shape: (900, 1600, 3)
points shape: (248834, 5)
gt_bboxes shape: (66, 9)
camera_intrinsics shape: (1, 6, 4, 4)
camera2lidar shape: (1, 6, 4, 4)
lidar2image shape: (1, 6, 4, 4)
img_aug_matrix shape: (1, 6, 4, 4)
Top-5 detections (sorted by score):
[[ 6.188  -9.375  -1.551   2.12    0.646   1.011   1.632  -0.      0.      5.      0.823]
 [13.613  -9.262  -1.482   0.763   0.798   1.729   1.803  -0.14   -0.02    8.      0.819]
 [ 8.175  11.588  -0.962   2.073   0.642   1.05    1.553  -0.      0.      5.      0.771]
 [ 6.9     9.563  -0.974   0.437   0.4     0.777   1.589   0.      0.001   9.      0.712]
 [-4.387  15.338   0.264   2.829  10.139   3.318   3.131  -0.      0.      1.      0.703]]
infer time: 17.933 ms
```
4.1.7 Evaluate TensorRT inference accuracy
Switch back to the training container (or open another terminal into it):
```bash
docker exec -ti bevfusion_train bash
cd /opt/CUDA-BEVFusion
unset SENSORS_OVERRIDE
# Evaluate the INT8 results
python tool/eval_trt_infer_result.py \
--config bevfusion/configs/nuscenes/det/transfusion/secfpn/camera+lidar/resnet50/convfuser.yaml \
--pred_result_pkl /app/trt_infer_result_int8.pkl
# Evaluate the FP16 results
python tool/eval_trt_infer_result.py \
--config bevfusion/configs/nuscenes/det/transfusion/secfpn/camera+lidar/resnet50/convfuser.yaml \
--pred_result_pkl /app/trt_infer_result_fp16.pkl
```
Output (INT8 first, then FP16):
```text
mAP: 0.7320
mATE: 0.2337
mASE: 0.2203
mAOE: 0.5016
mAVE: 1.4919
mAAE: 0.4157
NDS: 0.6289
Eval time: 4.6s
Per-class results:
Object Class    AP      ATE     ASE     AOE     AVE     AAE
car     0.865   0.182   0.151   0.121   2.126   0.226
truck   0.576   0.300   0.159   0.081   1.167   0.499
bus     0.749   0.353   0.157   0.119   1.929   0.383
trailer 0.910   0.370   0.318   0.010   2.558   0.980
construction_vehicle    0.866   0.195   0.174   1.163   0.031   0.288
pedestrian      0.860   0.132   0.273   1.267   0.866   0.555
motorcycle      0.555   0.209   0.255   0.663   1.699   0.090
bicycle 0.260   0.267   0.331   1.040   1.559   0.306
traffic_cone    0.862   0.118   0.232   nan     nan     nan
barrier 0.817   0.211   0.152   0.050   nan     nan
----
mAP: 0.7380
mATE: 0.2310
mASE: 0.2202
mAOE: 0.5048
mAVE: 1.5032
mAAE: 0.4005
NDS: 0.6333
Eval time: 4.5s
Per-class results:
Object Class    AP      ATE     ASE     AOE     AVE     AAE
car     0.865   0.183   0.152   0.118   2.127   0.227
truck   0.583   0.294   0.153   0.075   1.217   0.455
bus     0.754   0.342   0.156   0.134   1.945   0.365
trailer 0.921   0.392   0.312   0.012   2.546   0.980
construction_vehicle    0.868   0.185   0.180   1.130   0.032   0.300
pedestrian      0.863   0.131   0.271   1.265   0.865   0.515
motorcycle      0.561   0.209   0.257   0.664   1.723   0.089
bicycle 0.279   0.248   0.334   1.098   1.570   0.274
traffic_cone    0.865   0.116   0.228   nan     nan     nan
barrier 0.820   0.211   0.158   0.049   nan     nan
```
4.2 Fine-tuning on a custom dataset
If your dataset differs from nuScenes (sensor names, frame order, etc.), follow the workflow below.
4.2.1 Prepare the custom dataset
Assume your custom dataset has already been organized in the nuScenes format under /app/my_nuscenes_data/nuscenes.
Key point: set SENSORS_OVERRIDE=1 to enable the custom data-handling logic.
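How a pipeline script might branch on this flag (a hypothetical sketch; the project's actual tools read the variable internally, and the MODE variable below is only illustrative):

```shell
# Hypothetical illustration of the SENSORS_OVERRIDE switch.
export SENSORS_OVERRIDE=1
if [ "${SENSORS_OVERRIDE:-0}" = "1" ]; then
  MODE="custom sensor mapping"
else
  MODE="native nuScenes sensors"
fi
echo "data pipeline mode: $MODE"   # prints "data pipeline mode: custom sensor mapping"
```

Remember to `unset SENSORS_OVERRIDE` when switching back to the native nuScenes workflow.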
4.2.2 Training and fine-tuning
Start the training container (as in 4.1.3), then run:
```bash
# Enter the BEVFusion project directory
cd /opt/CUDA-BEVFusion/bevfusion
# Enable custom-dataset mode
export SENSORS_OVERRIDE=1
# Create the dataset symlink
rm -rf data
mkdir data
ln -s /app/my_nuscenes_data/nuscenes data/
# Build the dataset index
python tools/create_data.py nuscenes --root-path ./data/nuscenes \
--out-dir ./data/nuscenes --extra-tag nuscenes --version v1.0-mini
# Evaluate accuracy before fine-tuning
python tools/test.py \
configs/nuscenes/det/transfusion/secfpn/camera+lidar/resnet50/convfuser.yaml \
/app/model/resnet50/bevfusion-det.pth --eval bbox
# Fine-tune
rm -rf /app/train_my_result
torchpack dist-run -np 1 python tools/train.py \
configs/nuscenes/det/transfusion/secfpn/camera+lidar/resnet50/convfuser.yaml \
--load_from /app/model/resnet50/bevfusion-det.pth --run-dir /app/train_my_result
# Evaluate accuracy after fine-tuning
python tools/test.py \
configs/nuscenes/det/transfusion/secfpn/camera+lidar/resnet50/convfuser.yaml \
/app/train_my_result/latest.pth --eval bbox
# Replace the weights with the fine-tuned checkpoint
cp -L /app/train_my_result/latest.pth /app/model/resnet50/bevfusion-det.pth
# Save prediction visualizations as images
rm -rf /app/vis_my_tuned_result
torchpack dist-run -np 1 python tools/visualize.py \
configs/nuscenes/det/transfusion/secfpn/camera+lidar/resnet50/convfuser.yaml \
--mode pred \
--bbox-score 0.2 \
--out-dir /app/vis_my_tuned_result \
--checkpoint /app/train_my_result/latest.pth
# Render the annotated dataset, scene by scene, into nuscenes_all_scenes.mp4
python nuscenes2video.py data/
# Render the fine-tuned visualization images into merge.mp4
python merge_pred.py /app/vis_my_tuned_result
# Stack the two videos vertically
ffmpeg -y -i "nuscenes_all_scenes.mp4" \
-i "merge.mp4" \
-filter_complex \
"[0:v]drawtext=text='Dataset':x=10:y=10:fontsize=100:fontcolor=white[v0]; \
[1:v]drawtext=text='BevFusion Pred':x=10:y=10:fontsize=100:fontcolor=white[v1]; \
[v0][v1]vstack=2,scale=1920:1080[vout]" \
-map "[vout]" -c:v libx264 -preset fast -crf 23 /app/my_tuned_output.mp4
rm -f nuscenes_all_scenes.mp4 merge.mp4
```
Before fine-tuning:
```text
mAP: 0.1222
mATE: 0.6508
mASE: 0.6033
mAOE: 1.0786
mAVE: 0.7346
mAAE: 1.0000
NDS: 0.1622
Eval time: 0.6s
Per-class results:
Object Class    AP      ATE     ASE     AOE     AVE     AAE
car     0.803   0.140   0.174   0.104   0.269   1.000
truck   0.002   0.386   0.206   0.441   0.053   1.000
bus     0.014   0.720   0.351   0.680   0.269   1.000
trailer 0.000   1.000   1.000   1.000   1.000   1.000
construction_vehicle    0.000   0.073   0.581   3.009   1.000   1.000
pedestrian      0.398   0.136   0.268   1.472   1.286   1.000
motorcycle      0.000   1.000   1.000   1.000   1.000   1.000
bicycle 0.000   1.000   1.000   1.000   1.000   1.000
traffic_cone    0.005   1.053   0.453   nan     nan     nan
barrier 0.000   1.000   1.000   1.000   nan     nan
```
After fine-tuning:
```text
mAP: 0.7695
mATE: 0.2845
mASE: 0.3188
mAOE: 0.3133
mAVE: 0.5831
mAAE: 1.0000
NDS: 0.6348
Eval time: 1.3s
Per-class results:
Object Class    AP      ATE     ASE     AOE     AVE     AAE
car     0.968   0.108   0.121   0.055   0.154   1.000
truck   0.989   0.098   0.111   0.025   0.126   1.000
bus     0.974   0.208   0.108   0.035   0.098   1.000
trailer 0.000   1.000   1.000   1.000   1.000   1.000
construction_vehicle    0.989   0.113   0.157   0.067   0.144   1.000
pedestrian      0.902   0.075   0.211   0.470   1.144   1.000
motorcycle      0.989   0.081   0.062   0.072   1.000   1.000
bicycle 0.000   1.000   1.000   1.000   1.000   1.000
traffic_cone    0.989   0.066   0.229   nan     nan     nan
barrier 0.896   0.096   0.188   0.096   nan     nan
```
4.2.3 Export ONNX and run TensorRT inference
Export ONNX in the training container (SENSORS_OVERRIDE=1 must be set):
```bash
# Prepare the ONNX export
cd /opt/CUDA-BEVFusion
export SENSORS_OVERRIDE=1
# Create the dataset symlink
rm -rf data
mkdir data
ln -s /app/my_nuscenes_data/nuscenes data/
# Create the model symlink
rm -f model
ln -s /app/model .
# Export the ONNX models
./run_export.sh /app/model/resnet50/bevfusion-det.pth
# Move the ONNX models into the shared directory
rm -f model/resnet50int8/*.onnx
rm -f model/resnet50int8/*.pth
mv qat/onnx_int8/* model/resnet50int8/
mv qat/ckpt/bevfusion_ptq.pth model/resnet50int8/
rm -f model/resnet50/*.onnx
mv qat/onnx_fp16/*.onnx model/resnet50/
```
Enter the inference container (as described earlier), set export SENSORS_OVERRIDE=1 there as well, then build the engines and run inference.
```bash
docker exec -ti bevfusion_infer bash
cd /opt/CUDA-BEVFusion
export SENSORS_OVERRIDE=1
# Create the model symlink
rm -f model
ln -s /app/model .
# Build the TensorRT engines
rm -rf model/resnet50int8/build
export DEBUG_MODEL=resnet50int8
export DEBUG_PRECISION=int8
bash tool/build_trt_engine.sh
rm -rf model/resnet50/build
export DEBUG_MODEL=resnet50
export DEBUG_PRECISION=fp16
bash tool/build_trt_engine.sh
# Remove stale inference results
rm -f /app/trt_infer_result_*
# INT8 inference
python tool/trt_infer.py resnet50int8 int8 /app/trt_infer_result_int8.pkl /app/dump/
# FP16 inference
python tool/trt_infer.py resnet50 fp16 /app/trt_infer_result_fp16.pkl /app/dump/
```
Finally, evaluate accuracy in the training container (SENSORS_OVERRIDE=1 must be set):
```bash
docker exec -ti bevfusion_train bash
cd /opt/CUDA-BEVFusion
export SENSORS_OVERRIDE=1
python tool/eval_trt_infer_result.py \
--config bevfusion/configs/nuscenes/det/transfusion/secfpn/camera+lidar/resnet50/convfuser.yaml \
--pred_result_pkl /app/trt_infer_result_int8.pkl
python tool/eval_trt_infer_result.py \
--config bevfusion/configs/nuscenes/det/transfusion/secfpn/camera+lidar/resnet50/convfuser.yaml \
--pred_result_pkl /app/trt_infer_result_fp16.pkl
```
Output (INT8 first, then FP16):
```text
mAP: 0.7650
mATE: 0.3040
mASE: 0.3282
mAOE: 0.3727
mAVE: 0.5709
mAAE: 1.0000
NDS: 0.6249
Eval time: 1.3s
Per-class results:
Object Class    AP      ATE     ASE     AOE     AVE     AAE
car     0.959   0.116   0.138   0.052   0.157   1.000
truck   0.978   0.099   0.132   0.050   0.086   1.000
bus     0.990   0.223   0.127   0.041   0.122   1.000
trailer 0.000   1.000   1.000   1.000   1.000   1.000
construction_vehicle    0.989   0.136   0.113   0.139   0.017   1.000
pedestrian      0.865   0.083   0.225   0.616   1.185   1.000
motorcycle      0.989   0.189   0.140   0.302   1.000   1.000
bicycle 0.000   1.000   1.000   1.000   1.000   1.000
traffic_cone    0.989   0.094   0.200   nan     nan     nan
barrier 0.891   0.100   0.207   0.153   nan     nan
----
mAP: 0.7675
mATE: 0.2846
mASE: 0.3190
mAOE: 0.3112
mAVE: 0.5821
mAAE: 1.0000
NDS: 0.6341
Eval time: 1.3s
Per-class results:
Object Class    AP      ATE     ASE     AOE     AVE     AAE
car     0.968   0.109   0.121   0.055   0.154   1.000
truck   0.976   0.098   0.112   0.025   0.125   1.000
bus     0.968   0.201   0.108   0.036   0.098   1.000
trailer 0.000   1.000   1.000   1.000   1.000   1.000
construction_vehicle    0.989   0.111   0.159   0.056   0.144   1.000
pedestrian      0.900   0.077   0.211   0.467   1.136   1.000
motorcycle      0.989   0.083   0.060   0.067   1.000   1.000
bicycle 0.000   1.000   1.000   1.000   1.000   1.000
traffic_cone    0.989   0.070   0.232   nan     nan     nan
barrier 0.896   0.097   0.186   0.095   nan     nan
```
4.2.4 Run the TensorRT demo
```bash
# Inside the inference container
cd /opt/CUDA-BEVFusion
export SENSORS_OVERRIDE=1
# Create the dataset symlink
rm -rf data
mkdir data
ln -s /app/my_nuscenes_data/nuscenes data/
python tool/infer_demo.py --dataset_path ./data/nuscenes/ \
--scene scene-0000 --sample_idx 0 --score_thresh 0.05 \
--model resnet50 --precision fp16 --output /app/out.jpg
```