- Create a new folder anywhere and copy sampleOnnxMNIST.cpp from D:\install\tensorRT\TensorRT-8.6.1.6\samples\sampleOnnxMNIST into it. In that same directory, sample_onnx_mnist.vcxproj contains:
```xml
<ItemGroup>
  <ClCompile Include="sampleOnnxMNIST.cpp" />
  <ClCompile Include="../common/getopt.c" />
  <ClCompile Include="../common/logger.cpp" />
</ItemGroup>
```
This indicates the sample also uses the getopt.c and logger.cpp source files, so copy those into the folder as well from D:\install\tensorRT\TensorRT-8.6.1.6\samples\common.
- A guess about the following content in sample_onnx_mnist.vcxproj:
```xml
<ClCompile>
  <AdditionalIncludeDirectories>..\..\include;..\common;..\common\windows;$(CUDA_PATH)\include;</AdditionalIncludeDirectories>
  <DisableSpecificWarnings>4244;4996</DisableSpecificWarnings>
</ClCompile>
```
It specifies the directories searched for .h header files. Expanded to absolute paths, they are:
```bash
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/include
D:/install/tensorRT/TensorRT-8.6.1.6/samples/common
D:/install/tensorRT/TensorRT-8.6.1.6/include
```
Because installing tensorRT already copied the headers from D:/install/tensorRT/TensorRT-8.6.1.6/include into C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/include, there is actually no need to specify D:/install/tensorRT/TensorRT-8.6.1.6/include.
- sample_onnx_mnist.vcxproj also contains the following:
```xml
<Link>
  <AdditionalDependencies>kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;odbc32.lib;odbccp32.lib;%(AdditionalDependencies);nvinfer.lib;nvinfer_plugin.lib;nvonnxparser.lib;nvparsers.lib;cudnn.lib;cublas.lib;cudart.lib;</AdditionalDependencies>
  <GenerateDebugInformation>false</GenerateDebugInformation>
  <SubSystem>Console</SubSystem>
</Link>
```
Formatted for readability:
```xml
<Link>
  <AdditionalDependencies>
    kernel32.lib;
    user32.lib;
    gdi32.lib;
    winspool.lib;
    comdlg32.lib;
    advapi32.lib;
    shell32.lib;
    ole32.lib;
    oleaut32.lib;
    uuid.lib;
    odbc32.lib;
    odbccp32.lib;%(AdditionalDependencies);
    nvinfer.lib;
    nvinfer_plugin.lib;
    nvonnxparser.lib;
    nvparsers.lib;
    cudnn.lib;
    cublas.lib;
    cudart.lib;
  </AdditionalDependencies>
  <GenerateDebugInformation>false</GenerateDebugInformation>
  <SubSystem>Console</SubSystem>
</Link>
```
This should be the list of libraries the sample links against. We don't know what every one of them does, but
```bash
nvinfer.lib;
nvinfer_plugin.lib;
nvonnxparser.lib;
nvparsers.lib;
cudnn.lib;
cublas.lib;
cudart.lib;
```
are clearly CUDA or tensorRT libraries (import libraries for the corresponding DLLs), so they are definitely needed. These .lib files live in the C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/lib/x64 directory (you can locate them with Everything).
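The semicolon-separated AdditionalDependencies value can also be split mechanically to separate the stock Windows libraries from the NVIDIA ones. A sketch; the `nv`/`cu` prefix test is an assumption that happens to match the typical CUDA/cuDNN/TensorRT library names:

```python
# The raw MSBuild AdditionalDependencies value from the vcxproj above.
DEPS = ("kernel32.lib;user32.lib;gdi32.lib;winspool.lib;comdlg32.lib;"
        "advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;"
        "odbc32.lib;odbccp32.lib;%(AdditionalDependencies);"
        "nvinfer.lib;nvinfer_plugin.lib;nvonnxparser.lib;nvparsers.lib;"
        "cudnn.lib;cublas.lib;cudart.lib;")

# Prefixes that (heuristically) identify CUDA / cuDNN / TensorRT libraries.
NVIDIA_PREFIXES = ("nv", "cu")

def split_deps(deps: str):
    """Split the MSBuild value, dropping the %(...) metadata placeholder."""
    libs = [d for d in deps.split(";") if d and not d.startswith("%")]
    nvidia = [l for l in libs if l.startswith(NVIDIA_PREFIXES)]
    windows = [l for l in libs if not l.startswith(NVIDIA_PREFIXES)]
    return windows, nvidia

windows_libs, nvidia_libs = split_deps(DEPS)
print(nvidia_libs)
```

The `nvidia_libs` half is exactly the seven libraries singled out above; the rest are default Windows system libraries that link.exe finds on its own.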
4. After the initial analysis above, our cl compile command looks like this:
```bash
cl ^
-I"D:/install/tensorRT/TensorRT-8.6.1.6/samples/common" ^
-I"C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/include" ^
vs2022_cmake_sampleOnnxMNIST_test.cpp ^
getopt.c ^
logger.cpp ^
-link nvinfer.lib ^
-link nvinfer_plugin.lib ^
-link nvonnxparser.lib ^
-link nvparsers.lib ^
-link cudnn.lib ^
-link cublas.lib ^
-link cudart.lib ^
-LIBPATH:"C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/lib/x64"
```
Running it fails with "cannot open include file: 'crtdefs.h': No such file or directory". Searching for crtdefs.h with Everything locates it under the MSVC toolset, so add D:\install\VisualStudio2022_comm\VC\Tools\MSVC\14.40.33807\include to the header search path, i.e. add to the cl command:
```bash
-I"D:/install/VisualStudio2022_comm/VC/Tools/MSVC/14.40.33807/include" ^
```
Running again fails with "cannot open include file: 'corecrt.h': No such file or directory". Searching for corecrt.h points to the Windows SDK UCRT headers, so add to the cl command:
```bash
-I"C:/Program Files (x86)/Windows Kits/10/Include/10.0.26100.0/ucrt" ^
```
Running again fails with "cannot open include file: 'windows.h': No such file or directory". Searching for windows.h points to the Windows SDK um headers, so add to the cl command:
```bash
-I"C:/Program Files (x86)/Windows Kits/10/Include/10.0.26100.0/um" ^
```
Running again fails with "cannot open include file: 'winapifamily.h': No such file or directory". Searching for winapifamily.h points to the Windows SDK shared headers, so add to the cl command:
```bash
-I"C:/Program Files (x86)/Windows Kits/10/Include/10.0.26100.0/shared" ^
```
Running again fails with "LINK : fatal error LNK1104: cannot open file 'libcpmt.lib'". Searching for libcpmt.lib points to the MSVC toolset libraries, so add to the cl command:
```bash
-link libcpmt.lib ^
-LIBPATH:"D:/install/VisualStudio2022_comm/VC/Tools/MSVC/14.40.33807/lib/x64"
```
Running again fails with "LINK : fatal error LNK1104: cannot open file 'uuid.lib'". Searching for uuid.lib points to the Windows SDK um libraries, so add to the cl command:
```bash
-link uuid.lib ^
-LIBPATH:"C:/Program Files (x86)/Windows Kits/10/Lib/10.0.26100.0/um/x64"
```
Running again fails with "LINK : fatal error LNK1104: cannot open file 'libucrt.lib'". Searching for libucrt.lib points to the Windows SDK ucrt libraries, so add to the cl command:
```bash
-link libucrt.lib ^
-LIBPATH:"C:/Program Files (x86)/Windows Kits/10/Lib/10.0.26100.0/ucrt/x64"
```
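The loop above (hit a missing header, search for it, add its directory with `-I`) can be mimicked with a small search helper instead of Everything. This is only a sketch: `find_header_dirs` and the SDK-like directory layout in the demo are made up for illustration.

```python
import os
import tempfile

def find_header_dirs(filename: str, roots: list[str]) -> list[str]:
    """Walk each root and collect every directory containing `filename`;
    each hit is a candidate for a -I"..." flag on the cl command line."""
    hits = []
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            if filename in filenames:
                hits.append(dirpath)
    return hits

# Demo with a fabricated Windows-SDK-like layout in a temp directory.
with tempfile.TemporaryDirectory() as tmp:
    ucrt = os.path.join(tmp, "Windows Kits", "10", "Include", "ucrt")
    os.makedirs(ucrt)
    open(os.path.join(ucrt, "corecrt.h"), "w").close()
    dirs = find_header_dirs("corecrt.h", [tmp])
    print([f'-I"{d}"' for d in dirs])
```

Pointed at C:/Program Files (x86)/Windows Kits and the MSVC install root, the same helper would turn each "cannot open include file" error straight into the `-I` flag that fixes it.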
The final cl command is as follows:
```bash
cl ^
-I"D:/install/tensorRT/TensorRT-8.6.1.6/samples/common" ^
-I"C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/include" ^
-I"D:/install/VisualStudio2022_comm/VC/Tools/MSVC/14.40.33807/include" ^
-I"C:/Program Files (x86)/Windows Kits/10/Include/10.0.26100.0/ucrt" ^
-I"C:/Program Files (x86)/Windows Kits/10/Include/10.0.26100.0/um" ^
-I"C:/Program Files (x86)/Windows Kits/10/Include/10.0.26100.0/shared" ^
vs2022_cmake_sampleOnnxMNIST_test.cpp ^
getopt.c ^
logger.cpp ^
-link nvinfer.lib ^
-link nvinfer_plugin.lib ^
-link nvonnxparser.lib ^
-link nvparsers.lib ^
-link cudnn.lib ^
-link cublas.lib ^
-link cudart.lib ^
-link libcpmt.lib ^
-link uuid.lib ^
-link libucrt.lib ^
-LIBPATH:"C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/lib/x64" ^
-LIBPATH:"D:/install/VisualStudio2022_comm/VC/Tools/MSVC/14.40.33807/lib/x64" ^
-LIBPATH:"C:/Program Files (x86)/Windows Kits/10/Lib/10.0.26100.0/um/x64" ^
-LIBPATH:"C:/Program Files (x86)/Windows Kits/10/Lib/10.0.26100.0/ucrt/x64"
```
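Collecting all of the discovered paths, the command line can also be assembled from plain lists, which is easier to maintain than hand-editing the continuation lines. Note that cl needs only a single `-link` separator; everything after it is handed to link.exe. A sketch generating the command string:

```python
INCLUDE_DIRS = [
    "D:/install/tensorRT/TensorRT-8.6.1.6/samples/common",
    "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/include",
    "D:/install/VisualStudio2022_comm/VC/Tools/MSVC/14.40.33807/include",
    "C:/Program Files (x86)/Windows Kits/10/Include/10.0.26100.0/ucrt",
    "C:/Program Files (x86)/Windows Kits/10/Include/10.0.26100.0/um",
    "C:/Program Files (x86)/Windows Kits/10/Include/10.0.26100.0/shared",
]
SOURCES = ["vs2022_cmake_sampleOnnxMNIST_test.cpp", "getopt.c", "logger.cpp"]
LIBS = ["nvinfer.lib", "nvinfer_plugin.lib", "nvonnxparser.lib",
        "nvparsers.lib", "cudnn.lib", "cublas.lib", "cudart.lib",
        "libcpmt.lib", "uuid.lib", "libucrt.lib"]
LIB_DIRS = [
    "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/lib/x64",
    "D:/install/VisualStudio2022_comm/VC/Tools/MSVC/14.40.33807/lib/x64",
    "C:/Program Files (x86)/Windows Kits/10/Lib/10.0.26100.0/um/x64",
    "C:/Program Files (x86)/Windows Kits/10/Lib/10.0.26100.0/ucrt/x64",
]

def build_cl_command() -> str:
    parts = ["cl"]
    parts += [f'-I"{d}"' for d in INCLUDE_DIRS]
    parts += SOURCES
    # One -link separator: all following tokens go to link.exe,
    # so the libraries and -LIBPATH options sit together at the end.
    parts.append("-link")
    parts += LIBS
    parts += [f'-LIBPATH:"{d}"' for d in LIB_DIRS]
    # ' ^\n' is cmd.exe's line continuation, matching the style above.
    return " ^\n".join(parts)

print(build_cl_command())
```

Adding a new include directory or library then only means appending to the right list and regenerating the command.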
Running this succeeds in generating the .exe file, and running the .exe works as well.
