Building GPGPU-Sim from scratch on a clean Ubuntu 18.04 machine with CUDA 11.0.1

1. Environment

Ubuntu 18.04, x86_64

CUDA 11.0.1

gpgpu-sim master, commit 90ec3399763d7c8512cfe7dc193473086c38ca38

2. Prerequisites

A reasonably fresh Ubuntu 18.04 install, to meet CUDA 11.0.1's requirements.

Install the following packages:

```bash
sudo apt-get install -y xutils-dev bison zlib1g-dev flex libglu1-mesa-dev \
    doxygen graphviz python-pmw python-ply python-numpy python-matplotlib \
    python-pip libpng-dev
```

3. Install the CUDA Toolkit 11.0.1

Download:

```bash
wget https://developer.download.nvidia.com/compute/cuda/11.0.1/local_installers/cuda_11.0.1_450.36.06_linux.run
```

Install into the directory /home/hanmeimei/cuda/cuda:

```bash
bash cuda_11.0.1_450.36.06_linux.run --silent --toolkit --toolkitpath=/home/hanmeimei/cuda/cuda
```

Set the environment variable that GPGPU-Sim's build expects:

```bash
export CUDA_INSTALL_PATH=/home/hanmeimei/cuda/cuda
```
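The export above only lasts for the current shell. A minimal sketch of setting up the environment so it survives (the paths are this guide's install location; adjust to yours):

```shell
# These lines can also be appended to ~/.bashrc so every new shell picks
# them up. The path is the install location used in this guide.
export CUDA_INSTALL_PATH=/home/hanmeimei/cuda/cuda
export PATH="$CUDA_INSTALL_PATH/bin:$PATH"
echo "CUDA_INSTALL_PATH=$CUDA_INSTALL_PATH"
```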

4. Download and build gpgpu-sim master

```bash
git clone https://github.com/gpgpu-sim/gpgpu-sim_distribution.git
cd gpgpu-sim_distribution/
```

Set up the build environment (the script must be sourced, not executed, and CUDA_INSTALL_PATH must already be set), then build:

```bash
. setup_environment
make -j
```
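After sourcing, `setup_environment` should have populated the simulator's environment variables. A quick sanity check (the variable names are the ones the script exports; the values depend on where the repo was cloned):

```shell
# Sanity check: after `. setup_environment` these should all be non-empty,
# with GPGPUSIM_ROOT pointing at the gpgpu-sim_distribution checkout and
# LD_LIBRARY_PATH including its lib/ directory.
echo "GPGPUSIM_ROOT=$GPGPUSIM_ROOT"
echo "CUDA_INSTALL_PATH=$CUDA_INSTALL_PATH"
echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
```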

5. Compile and run a CUDA app

At this point, nvcc is the one from the toolkit installed above.

vim vectorAdd.cu

```cpp
#include <iostream>
#include <cuda_runtime.h>

#define N 16384

// write kernel function of vector addition
__global__ void vecAdd(float *a, float *b, float *c, int n)
{
    int i = threadIdx.x + blockDim.x * blockIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    float *a, *b, *c;
    float *d_a, *d_b, *d_c;
    int size = N * sizeof(float);

    // allocate space for device copies of a, b, c
    cudaMalloc((void **)&d_a, size);
    cudaMalloc((void **)&d_b, size);
    cudaMalloc((void **)&d_c, size);

    // allocate space for host copies of a, b, c and setup input values
    a = (float *)malloc(size);
    b = (float *)malloc(size);
    c = (float *)malloc(size);

    for (int i = 0; i < N; i++)
    {
        a[i] = i;
        b[i] = i * i;
    }

    // copy inputs to device
    cudaMemcpy(d_a, a, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, size, cudaMemcpyHostToDevice);

    // launch vecAdd() kernel on GPU
    vecAdd<<<(N + 255) / 256, 256>>>(d_a, d_b, d_c, N);

    cudaDeviceSynchronize();

    // copy result back to host
    cudaMemcpy(c, d_c, size, cudaMemcpyDeviceToHost);

    // verify result
    for (int i = 0; i < N; i++)
    {
        if (a[i] + b[i] != c[i])
        {
            std::cout << "Error: " << a[i] << " + " << b[i] << " != " << c[i] << std::endl;
            break;
        }
    }

    std::cout << "Done!" << std::endl;

    // clean up
    free(a);
    free(b);
    free(c);
    cudaFree(d_a);
    cudaFree(d_b);
    cudaFree(d_c);

    return 0;
}
```

Compile. The `--cudart shared` flag links the CUDA runtime dynamically, so GPGPU-Sim's libcudart.so can intercept the runtime calls at load time:

```bash
nvcc vectorAdd.cu --cudart shared -o vectorAdd
```
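Whether the interception will actually happen can be checked with `ldd`. A sketch, assuming the binary was just built in the current directory:

```shell
# Show which libcudart the dynamic linker resolves for the binary.
# With GPGPU-Sim's setup_environment sourced, this should point into
# gpgpu-sim_distribution/lib/..., not into the CUDA toolkit's lib64/.
libcudart=$(ldd ./vectorAdd 2>/dev/null | awk '/libcudart/ {print $3}')
echo "libcudart resolved to: ${libcudart:-<binary or library not found>}"
```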

Copy the simulator configuration files (here, the tested Volta QV100 configuration) into the working directory:

```bash
cp gpgpu-sim_distribution/configs/tested-cfgs/SM7_QV100/config_volta_islip.icnt ./
cp gpgpu-sim_distribution/configs/tested-cfgs/SM7_QV100/gpgpusim.config ./
```

Run the app:

```bash
./vectorAdd
```

When the run finishes, GPGPU-Sim prints its simulation statistics to stdout along with the program's own "Done!" output.
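The end-of-run summary is long. A small sketch for pulling the headline numbers out of a saved log; the stat names (`gpu_tot_sim_cycle`, `gpu_tot_ipc`) are from GPGPU-Sim's summary output, and the sample log here is fabricated for illustration:

```shell
# Extract headline statistics from a GPGPU-Sim run log.
# A tiny stand-in log is generated here so the snippet is self-contained;
# in practice, redirect the app's stdout instead: ./vectorAdd > /tmp/sim.log
cat > /tmp/sim.log <<'EOF'
gpu_tot_sim_cycle = 12345
gpu_tot_sim_insn = 999999
gpu_tot_ipc = 81.0
EOF
grep -E '^gpu_tot_(sim_cycle|ipc) ' /tmp/sim.log
```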
