CUDA timing: measuring GPU program/function runtime with cudaEventCreate, cudaEventRecord, and cudaEventElapsedTime

To measure how long a GPU function takes, you can use the timing facilities that CUDA provides: cudaEventCreate, cudaEventRecord, and cudaEventElapsedTime. These functions let you measure the time spent by a given CUDA operation (such as setting the device).

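The usual pattern is: create a pair of events, record them around the operation you want to time, synchronize on the stop event, and then read the elapsed time in milliseconds. Below is a minimal sketch of this pattern applied to a kernel launch (myKernel is just a placeholder for whatever GPU work you want to time):

cpp
#include <iostream>
#include <cuda_runtime.h>

// Placeholder kernel standing in for the GPU work being timed
__global__ void myKernel() {}

int main() {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);      // mark the start on the default stream
    myKernel<<<1, 256>>>();         // the operation being timed
    cudaEventRecord(stop, 0);       // mark the end on the default stream
    cudaEventSynchronize(stop);     // wait until the stop event has completed

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);   // elapsed time in milliseconds
    std::cout << "kernel time: " << ms << " ms" << std::endl;

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return 0;
}
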
1. A timing example

The following example program measures the time it takes to call cudaSetDevice:

cpp
#include <iostream>
#include <vector>
#include <cuda_runtime.h>

__global__ void dummyKernel() {
    // Dummy kernel to ensure the CUDA context is initialized
}

int main() {
    // CUDA device ID and number of timed iterations
    int device1 = 0;
    int numIterations = 10; // Number of times to call cudaSetDevice

    // Create CUDA events
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Vector to store elapsed times
    std::vector<float> elapsedTimes(numIterations);

    // Set initial device (optional, but ensures a known starting state)
    cudaSetDevice(device1);

    // Run a dummy kernel so the CUDA context is initialized before timing
    dummyKernel<<<1, 1>>>();
    cudaDeviceSynchronize();

    // Measure time for multiple cudaSetDevice calls
    for (int i = 0; i < numIterations; ++i) {
        // Record the start event
        cudaEventRecord(start, 0);

        // Set the device (this is the operation we are timing)
        cudaSetDevice(device1);

        // Record the stop event
        cudaEventRecord(stop, 0);

        // cudaEventRecord is asynchronous, so wait for the stop event to
        // complete before reading the elapsed time
        cudaEventSynchronize(stop);

        // Measure the elapsed time between the start and stop events
        cudaEventElapsedTime(&elapsedTimes[i], start, stop);

        // Output the per-iteration result
        std::cout << "Iteration " << i << ": time to set device " << device1
                  << ": " << elapsedTimes[i] << " ms" << std::endl;
    }

    // Calculate statistics (e.g., average time)
    float totalTime = 0.0f;
    for (float time : elapsedTimes) {
        totalTime += time;
    }
    float averageTime = totalTime / numIterations;

    // Output summary results
    std::cout << "Number of iterations: " << numIterations << std::endl;
    std::cout << "Average time to set device " << device1 << ": " << averageTime << " ms" << std::endl;

    // Clean up
    cudaEventDestroy(start);
    cudaEventDestroy(stop);

    return 0;
}

2. Compiling and running

2.1 Compile: use nvcc to compile this CUDA program (the code above is saved as test_cudaSetDevice_multiple.cu).

bash
nvcc -o test_cudaSetDevice_multiple test_cudaSetDevice_multiple.cu

2.2 Run: run the generated executable.

bash
./test_cudaSetDevice_multiple

That's it, the per-call and average timing results will be printed!
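For brevity, the example above ignores the return codes of the CUDA API calls. In real code it is worth checking them; below is a minimal sketch of such checking (the CUDA_CHECK macro name is our own choice, not part of the CUDA API):

cpp
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative error-checking macro (CUDA_CHECK is just a name chosen here)
#define CUDA_CHECK(call)                                                     \
    do {                                                                     \
        cudaError_t err = (call);                                            \
        if (err != cudaSuccess) {                                            \
            std::fprintf(stderr, "CUDA error %s at %s:%d\n",                 \
                         cudaGetErrorString(err), __FILE__, __LINE__);       \
            std::exit(EXIT_FAILURE);                                         \
        }                                                                    \
    } while (0)

int main() {
    cudaEvent_t start, stop;
    CUDA_CHECK(cudaEventCreate(&start));
    CUDA_CHECK(cudaEventCreate(&stop));

    CUDA_CHECK(cudaEventRecord(start, 0));
    CUDA_CHECK(cudaSetDevice(0));            // the timed operation
    CUDA_CHECK(cudaEventRecord(stop, 0));
    CUDA_CHECK(cudaEventSynchronize(stop));

    float ms = 0.0f;
    CUDA_CHECK(cudaEventElapsedTime(&ms, start, stop));
    std::printf("cudaSetDevice took %f ms\n", ms);

    CUDA_CHECK(cudaEventDestroy(start));
    CUDA_CHECK(cudaEventDestroy(stop));
    return 0;
}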
