Open jwang891 opened 3 years ago
Have you solved this problem? I'm running DeepBench on the simulator and hit the same issue. I look forward to hearing from you. E-mail: 564623804@qq.com
Hi, I hit the same problem while running cuSPARSE code. Have you solved it? Thanks in advance!
I think this is the same as #263 -- see my answer there. Also, the Accel-Sim repo already has support for DeepBench (https://github.com/accel-sim/gpu-app-collection/tree/release/src/cuda/DeepBench), so you shouldn't need to recreate anything. If you want to do it in standalone GPGPU-Sim, you'd have to do what I mentioned in #263.
Matt
I am using Accel-Sim now, but I hit the same problem. I would really like to know how to solve it.
Hi All,
I am trying to use GPGPU-Sim 4.0 to run the DeepBench benchmarks: https://github.com/gpgpu-sim/gpgpu-sim_distribution https://github.com/baidu-research/DeepBench
I followed the instructions in the first link to build DeepBench's NVIDIA gemm_bench and conv_bench benchmarks.
Since DeepBench uses cuDNN and cuBLAS, I changed the link flags as follows when compiling the benchmarks:
• -L$(CUDA_PATH)/lib64 -lcublas to -L$(CUDA_PATH)/lib64 -lcublas_static
• -L$(CUDNN_PATH)/lib64 -lcudnn to -L$(CUDNN_PATH)/lib64 -lcudnn_static
However, after switching to static linking, some libraries were missing at link time. To get through compilation, I added the necessary libraries manually, for example -lculibos -lpthread -ldl.
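Putting the pieces together, a hypothetical link line for gemm_bench might look like the sketch below (the actual DeepBench Makefile targets and the exact set of extra libraries depend on your CUDA/cuDNN installation; this is illustrative only):

```shell
# Illustrative sketch, not the real DeepBench Makefile rule.
# Link cuBLAS/cuDNN statically and add the support libraries the
# static archives need (-lculibos -lpthread -ldl, as noted above).
nvcc gemm_bench.cu -o gemm_bench \
    -L"$CUDA_PATH"/lib64 -lcublas_static -lculibos \
    -L"$CUDNN_PATH"/lib64 -lcudnn_static \
    -lpthread -ldl
```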
After adding these libraries, the benchmarks compile successfully. However, when running the compiled binaries with GPGPU-Sim, I run into several issues, and I wonder whether they are caused by the compilation. (When I run the same binaries without GPGPU-Sim, they execute successfully on a real GPU card, an RTX 2080.)
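One way to sanity-check that static linking actually took effect is to inspect the binary's remaining shared-library dependencies with ldd. Here is a small hypothetical helper (the name check_static and the messages are my own, not from DeepBench or GPGPU-Sim):

```shell
# check_static: report whether a binary still depends on the *shared*
# cuBLAS/cuDNN libraries (i.e. static linking did not take effect).
check_static() {
    if ldd "$1" 2>/dev/null | grep -qE 'libcublas\.so|libcudnn\.so'; then
        echo "dynamic cuBLAS/cuDNN found: relink with the static libraries"
    else
        echo "ok: no shared cuBLAS/cuDNN dependencies"
    fi
}

# Example use: check_static ./gemm_bench
```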
For gemm_bench, the run does not complete. After destroying the streams of kernel 2, at the "END-of-Interconnect-DETAILS" step, GPGPU-Sim 4.0 reports "Segmentation fault (core dumped)". I do not know whether there is a memory issue when running gemm_bench.
For conv_bench, while parsing conv_bench.111.sm_70.ptx, it reports the error: "GPGPU-Sim PTX: instruction assembly for function '_Z15persistRNN_initIdEvPdi'... conv_bench: ptx_ir.cc:302: void symbol_table::set_label_address(const symbol*, unsigned int): Assertion `i != m_symbols.end()' failed. Aborted (core dumped)".
I have read the paper "Accel-Sim: An Extensible Simulation Framework for Validated GPU Modeling", which shows that DeepBench can be run successfully on GPGPU-Sim 4.0. I would like to seek some guidance on running DeepBench.
Any help is appreciated; many thanks in advance!
Regards
Jianda Wang