Building GPGPU-Sim from scratch on a bare Ubuntu 18.04 machine with CUDA 11.0.1

1. Environment

- Ubuntu 18.04, x86_64
- CUDA 11.0.1
- gpgpu-sim master, commit 90ec3399763d7c8512cfe7dc193473086c38ca38

2. Prerequisites

A reasonably fresh Ubuntu 18.04 installation, to satisfy CUDA 11.0.1's requirements.

Install the following packages:

```bash
sudo apt-get install -y \
    xutils-dev bison zlib1g-dev flex libglu1-mesa-dev doxygen graphviz \
    python-pmw python-ply python-numpy python-matplotlib python-pip libpng-dev
```

3. Install the CUDA 11.0.1 toolkit

Download the installer:

```bash
wget https://developer.download.nvidia.com/compute/cuda/11.0.1/local_installers/cuda_11.0.1_450.36.06_linux.run
```

Install into /home/hanmeimei/cuda/cuda:

```bash
bash cuda_11.0.1_450.36.06_linux.run --silent --toolkit --toolkitpath=/home/hanmeimei/cuda/cuda
```

Set the environment variable GPGPU-Sim expects:

```bash
export CUDA_INSTALL_PATH=/home/hanmeimei/cuda/cuda
```
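For nvcc to be picked up from this install in the later steps, the toolkit's bin directory usually has to be on PATH as well; a minimal sketch, assuming the install location used above:

```shell
# Point the build at the freshly installed toolkit (path from the step above).
export CUDA_INSTALL_PATH=/home/hanmeimei/cuda/cuda
# Put its nvcc first on PATH so later steps use this toolkit's compiler.
export PATH="$CUDA_INSTALL_PATH/bin:$PATH"
# Sanity check: should report "release 11.0" if everything is in place.
command -v nvcc && nvcc --version || echo "nvcc not on PATH yet"
```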

4. Clone and build gpgpu-sim master

```bash
git clone https://github.com/gpgpu-sim/gpgpu-sim_distribution.git
cd gpgpu-sim_distribution/
```

Set up the build environment and compile:

```bash
. setup_environment
make -j
```
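To confirm the build produced the simulator's replacement CUDA runtime, you can look for libcudart under the lib directory that setup_environment points at. A sketch, assuming setup_environment has been sourced in this shell; the exact subdirectory name encodes your gcc and CUDA versions:

```shell
# setup_environment exports GPGPUSIM_ROOT and prepends the simulator's lib
# directory to LD_LIBRARY_PATH; the build drops its libcudart.so there.
find "${GPGPUSIM_ROOT:-.}/lib" -name 'libcudart*' 2>/dev/null || true
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
```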

5. Build and run a CUDA app

At this point, nvcc is the one installed above.

vim vectorAdd.cu

```cpp
#include <iostream>
#include <cuda_runtime.h>

#define N 16384

// write kernel function of vector addition
__global__ void vecAdd(float *a, float *b, float *c, int n)
{
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    float *a, *b, *c;
    float *d_a, *d_b, *d_c;
    int size = N * sizeof(float);

    // allocate space for device copies of a, b, c
    cudaMalloc((void **)&d_a, size);
    cudaMalloc((void **)&d_b, size);
    cudaMalloc((void **)&d_c, size);

    // allocate space for host copies of a, b, c and setup input values
    a = (float *)malloc(size);
    b = (float *)malloc(size);
    c = (float *)malloc(size);

    for (int i = 0; i < N; i++)
    {
        a[i] = i;
        b[i] = i * i;
    }

    // copy inputs to device
    cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);

    // launch vecAdd() kernel on GPU
    vecAdd<<<(N + 255) / 256, 256>>>(d_a, d_b, d_c, N);

    cudaDeviceSynchronize();

    // copy result back to host
    cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);

    // verify result
    for (int i = 0; i < N; i++)
    {
        if (a[i] + b[i] != c[i])
        {
            std::cout << "Error: " << a[i] << " + " << b[i] << " != " << c[i] << std::endl;
            break;
        }
    }

    std::cout << "Done!" << std::endl;

    // clean up
    free(a);
    free(b);
    free(c);
    cudaFree(d_a);
    cudaFree(d_b);
    cudaFree(d_c);

    return 0;
}
```
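The launch configuration above sizes the grid with ceiling division: (N + 255) / 256 blocks of 256 threads each, so a final partial block is still launched and the `i < n` guard in the kernel discards the excess threads. The same arithmetic, checked in shell:

```shell
# Ceiling division: with N = 16384 and 256 threads per block,
# (N + 255) / 256 = 64 blocks cover all elements exactly.
N=16384
echo $(( (N + 255) / 256 ))   # prints 64
# A size that is not a multiple of 256 still rounds up:
N=16385
echo $(( (N + 255) / 256 ))   # prints 65
```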

Compile:

```bash
nvcc vectorAdd.cu --cudart shared -o vectorAdd
```
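The `--cudart shared` flag matters: GPGPU-Sim works by supplying its own libcudart.so, which the dynamic linker resolves through the LD_LIBRARY_PATH set by setup_environment, and statically linking the runtime would bypass the simulator. A quick way to see which runtime the binary will load, assuming it has been built in the current directory:

```shell
# Should show libcudart.so resolving into the gpgpu-sim lib directory
# once setup_environment has been sourced in this shell.
ldd vectorAdd | grep libcudart || echo "vectorAdd not built yet"
```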

Copy the simulator configuration files into the working directory:

```bash
cp gpgpu-sim_distribution/configs/tested-cfgs/SM7_QV100/config_volta_islip.icnt ./
cp gpgpu-sim_distribution/configs/tested-cfgs/SM7_QV100/gpgpusim.config ./
```

Run the app (in a shell where setup_environment has been sourced):

```bash
./vectorAdd
```
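A successful simulated run ends with the app's own "Done!" plus a large block of simulator statistics. To pull out the headline counters, a sketch (stat names such as gpu_sim_cycle and gpu_sim_insn are from recent GPGPU-Sim releases; adjust if your build's output differs):

```shell
# Capture the simulator output and extract the per-kernel cycle/instruction
# counters; the full log stays in sim.log for deeper inspection.
./vectorAdd > sim.log 2>&1 || true
grep -E 'gpu_sim_cycle|gpu_sim_insn' sim.log || echo "no stats found (did the simulated run start?)"
```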

The run finishes with GPGPU-Sim's statistics printed to the console.
