Bare-metal build of GPGPU-Sim on Ubuntu 18.04 with CUDA 11.0.1

1. Environment

Ubuntu 18.04

x86_64

CUDA 11.0.1

gpgpu-sim master

commit 90ec3399763d7c8512cfe7dc193473086c38ca38

2. Prerequisites

A reasonably up-to-date Ubuntu 18.04 installation, to satisfy the version requirements of CUDA 11.0.1.

Install the following packages:

sudo apt-get install -y xutils-dev bison zlib1g-dev flex libglu1-mesa-dev doxygen graphviz python-pmw python-ply python-numpy python-matplotlib python-pip libpng-dev

3. Install the CUDA toolkit 11.0.1

Download:

wget https://developer.download.nvidia.com/compute/cuda/11.0.1/local_installers/cuda_11.0.1_450.36.06_linux.run

Install it into /home/hanmeimei/cuda/cuda:

 bash cuda_11.0.1_450.36.06_linux.run --silent --toolkit --toolkitpath=/home/hanmeimei/cuda/cuda

Set the environment variable:

export CUDA_INSTALL_PATH=/home/hanmeimei/cuda/cuda
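
To confirm the variable points at a working install (the path below is just the install prefix chosen above), a quick check:

echo $CUDA_INSTALL_PATH
$CUDA_INSTALL_PATH/bin/nvcc --version    # should report a CUDA 11.0 release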

4. Download and build gpgpu-sim (master)

git clone https://github.com/gpgpu-sim/gpgpu-sim_distribution.git

cd gpgpu-sim_distribution/
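
The Environment section above pins a specific commit; to build exactly that revision rather than whatever master currently points to, check it out before building:

git checkout 90ec3399763d7c8512cfe7dc193473086c38ca38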

Source the environment script, then build:

 . setup_environment

make -j
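
If the build succeeds, it produces gpgpu-sim's replacement libcudart.so under $GPGPUSIM_ROOT/lib/ (setup_environment exports GPGPUSIM_ROOT; the exact sub-directory name encodes your gcc and CUDA versions). A quick way to confirm the library is there:

find $GPGPUSIM_ROOT/lib -name "libcudart*"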

5. Build and run a CUDA app

At this point, nvcc should be the one installed in step 3.
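
A quick check that the shell really picks up that nvcc (this assumes setup_environment added $CUDA_INSTALL_PATH/bin to PATH):

which nvcc    # expected: /home/hanmeimei/cuda/cuda/bin/nvcc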

vim vectorAdd.cu

#include <iostream>
#include <cuda_runtime.h>
 
#define N 16384
 
// write kernel function of vector addition
__global__ void vecAdd(float *a, float *b, float *c, int n)
{
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}
 
int main()
{
    float *a, *b, *c;
    float *d_a, *d_b, *d_c;
    int size = N * sizeof(float);
 
    // allocate space for device copies of a, b, c
    cudaMalloc((void **)&d_a, size);
    cudaMalloc((void **)&d_b, size);
    cudaMalloc((void **)&d_c, size);
 
    // allocate space for host copies of a, b, c and setup input values
    a = (float *)malloc(size);
    b = (float *)malloc(size);
    c = (float *)malloc(size);
 
    for (int i = 0; i < N; i++)
    {
        a[i] = i;
        b[i] = i * i;
    }
 
    // copy inputs to device
    cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);
 
    // launch vecAdd() kernel on GPU
    vecAdd<<<(N + 255) / 256, 256>>>(d_a, d_b, d_c, N);
 
    cudaDeviceSynchronize();
 
    // copy result back to host
    cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);
 
    // verify result
    for (int i = 0; i < N; i++)
    {
        if (a[i] + b[i] != c[i])
        {
            std::cout << "Error: " << a[i] << " + " << b[i] << " != " << c[i] << std::endl;
            break;
        }
    }
 
    std::cout << "Done!" << std::endl;
 
    // clean up
    free(a);
    free(b);
    free(c);
    cudaFree(d_a);
    cudaFree(d_b);
    cudaFree(d_c);
 
    return 0;
}

Compile:

nvcc vectorAdd.cu --cudart shared -o vectorAdd
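
The --cudart shared flag matters here: gpgpu-sim intercepts the application by substituting its own libcudart.so through LD_LIBRARY_PATH, which only works if the app links the CUDA runtime dynamically. With setup_environment sourced in the current shell, the dynamic linker should already resolve libcudart to the simulator's copy; a quick sanity check:

ldd vectorAdd | grep libcudart    # should point into $GPGPUSIM_ROOT/lib/..., not the toolkit's lib64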

Copy the config files into the directory you will run the app from (gpgpu-sim reads gpgpusim.config from the current working directory):

cp gpgpu-sim_distribution/configs/tested-cfgs/SM7_QV100/config_volta_islip.icnt ./
cp gpgpu-sim_distribution/configs/tested-cfgs/SM7_QV100/gpgpusim.config ./

Run the app:

./vectorAdd

When the run finishes, you should see the program's "Done!" message amid gpgpu-sim's statistics output.
