marsupialtail / gpu-sparsert


Running autotune_conv_float.sh with nvprof reports an illegal memory access in CUDA-10.2 #4

Closed: xysmlx closed this issue 2 years ago

xysmlx commented 3 years ago

Profiling the sparse convolution with nvprof --profile-child-processes bash autotune_conv_float.sh 512 512 7 filter_bg4.npy reports an illegal memory access when checkCudaErrors(cuEventSynchronize(stop)); is enabled at this line.

==19226== NVPROF is profiling process 19226, command: ./exe
> Using device 0: Tesla V100-PCIE-16GB
> GPU Device has SM 7.0 compute capability0
picked algorithm: 6
Workspace size: 25.002MB
baseline used 0.163843
0.104157
direct used 0.0002848
0
32 1 49
CUDA Driver API error = 0700 from file <sparsednn/driver_conv.cu>.
Difference:  nan
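
For context, the failing call presumably sits in the usual driver-API event-timing pattern. The sketch below is a guess, not the actual driver_conv.cu code (the error macro is modeled on the message in the log above); it illustrates why the failure surfaces at cuEventSynchronize: error 700 is CUDA_ERROR_ILLEGAL_ADDRESS, raised asynchronously by a kernel that touched invalid memory and only reported at the next synchronizing call.

#include <cuda.h>
#include <cstdio>

// Guess at the error-check macro, modeled on the message in the log above;
// the real definition in driver_conv.cu may differ.
#define checkCudaErrors(call)                                           \
  do {                                                                  \
    CUresult _err = (call);                                             \
    if (_err != CUDA_SUCCESS)                                           \
      fprintf(stderr, "CUDA Driver API error = %04d from file <%s>.\n", \
              (int)_err, __FILE__);                                     \
  } while (0)

// Driver-API event timing around a kernel launch. An illegal address in the
// launched kernel is reported asynchronously, so it shows up here at
// cuEventSynchronize(stop) rather than at the launch itself.
float time_region_ms(CUevent start, CUevent stop, CUstream stream) {
  float ms = 0.0f;
  checkCudaErrors(cuEventRecord(stop, stream));
  checkCudaErrors(cuEventSynchronize(stop));   // error 700 surfaces here
  checkCudaErrors(cuEventElapsedTime(&ms, start, stop));
  return ms;
}

Running the same command under cuda-memcheck instead of nvprof may help pinpoint the offending access.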

However, SparseRT works well when I run bash autotune_conv_float.sh 512 512 7 filter_bg4.npy directly.

> Using device 0: Tesla V100-PCIE-16GB
> GPU Device has SM 7.0 compute capability0
picked algorithm: 6
Workspace size: 25.002MB
baseline used 0.164115
0.104157
direct used 0.0002848
0
32 1 49
kernel used 0.0733642
0.104157
Difference: 0.00319038
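
For what it's worth, the Difference line is presumably a host-side comparison between the SparseRT output and the cuDNN baseline, computed somewhere in the autotune flow. A hypothetical check along these lines (not taken from the repo) would print a small value on a good run and nan once the kernel's output buffer is left unwritten or corrupted by the illegal access:

#include <cmath>
#include <cstddef>

// Hypothetical relative-difference check between the baseline output `ref`
// and the SparseRT output `out`; the actual check used by the script may
// differ. NaNs in `out` propagate to the result, matching "Difference: nan".
float relative_difference(const float* ref, const float* out, std::size_t n) {
  double num = 0.0, den = 0.0;
  for (std::size_t i = 0; i < n; ++i) {
    double d = static_cast<double>(ref[i]) - static_cast<double>(out[i]);
    num += d * d;
    den += static_cast<double>(ref[i]) * static_cast<double>(ref[i]);
  }
  return static_cast<float>(std::sqrt(num / den));
}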

Environment: Host: Ubuntu-16.04.5, GCC-5.4.0; Device: Tesla V100-PCIE-16GB, CUDA-10.2, cuDNN-8.0.

Besides, I have also tested nvprof --profile-child-processes bash autotune_conv_float.sh 512 512 7 filter_bg4.npy with CUDA-10.0. It does not report such errors, but it also does not display any information about the mm kernel. Here is the nvprof result with CUDA-10.0.

==35760== NVPROF is profiling process 35760, command: ./exe
> Using device 0: Tesla V100-PCIE-16GB
> GPU Device has SM 7.0 compute capability0
picked algorithm: 6
Workspace size: 25.002MB
baseline used 0.164045
0.104157
direct used 0.0002848
0
32 1 49
kernel used 0.0740016
0.104157
Difference: 0.00319038
==35760== Profiling application: ./exe
==35760== Profiling result:
            Type  Time(%)      Time     Calls       Avg       Min       Max  Name
 GPU activities:   51.27%  3.4392ms         3  1.1464ms  1.6640us  3.4246ms  [CUDA memcpy HtoD]
                   38.11%  2.5568ms        20  127.84us  127.23us  135.78us  volta_scudnn_winograd_128x128_ldg1_ldg4_relu_tile148t_nt_v1
                   10.36%  695.13us        20  34.756us  33.696us  39.392us  void cudnn::winograd::generateWinogradTilesKernel<int=0, float, float>(cudnn::winograd::GenerateWinogradTilesParams<float, float>)
                    0.14%  9.0880us         1  9.0880us  9.0880us  9.0880us  [CUDA memcpy DtoH]
                    0.12%  8.2560us         6  1.3760us  1.2800us  1.7600us  [CUDA memset]
      API calls:   53.34%  2.67304s         8  334.13ms  2.2000us  2.67302s  cudaStreamCreateWithFlags
                   40.65%  2.03705s        40  50.926ms  4.1000us  2.03683s  cudaLaunchKernel
                    5.46%  273.71ms         1  273.71ms  273.71ms  273.71ms  cuCtxCreate
                    0.29%  14.486ms         1  14.486ms  14.486ms  14.486ms  cuModuleLoad
                    0.08%  3.9633ms         4  990.82us  27.899us  3.7999ms  cudaMemcpy
                    0.06%  2.8321ms         2  1.4161ms  7.7000us  2.8244ms  cudaEventSynchronize
                    0.02%  1.1775ms         2  588.73us  578.18us  599.28us  cuDeviceTotalMem
                    0.02%  1.0805ms        12  90.038us  3.8000us  308.79us  cudaMalloc
                    0.02%  1.0800ms         1  1.0800ms  1.0800ms  1.0800ms  cudaHostAlloc
                    0.02%  976.97us         3  325.66us  51.198us  825.67us  cuDeviceGetName
                    0.02%  799.27us       190  4.2060us     100ns  197.19us  cuDeviceGetAttribute
                    0.00%  219.99us         4  54.998us  2.6000us  207.89us  cudaStreamCreateWithPriority
                    0.00%  135.90us         3  45.298us     600ns  134.10us  cudaFree
                    0.00%  93.796us         4  23.449us  9.5000us  58.298us  cudaMemsetAsync
                    0.00%  85.498us        76  1.1240us     100ns  5.1000us  cudaFuncSetAttribute
                    0.00%  60.298us         2  30.149us  23.499us  36.799us  cudaMemset
                    0.00%  29.699us         1  29.699us  29.699us  29.699us  cudaProfilerStart
                    0.00%  19.100us         2  9.5500us  2.1000us  17.000us  cudaGetDevice
                    0.00%  15.599us         4  3.8990us  1.8000us  7.5000us  cudaEventRecord
                    0.00%  15.200us        28     542ns     400ns  2.4000us  cudaEventCreateWithFlags
                    0.00%  14.700us         3  4.9000us  1.9000us  7.2000us  cudaDeviceSynchronize
                    0.00%  12.100us        29     417ns     200ns  1.8000us  cudaDeviceGetAttribute
                    0.00%  9.8000us        66     148ns     100ns  1.1000us  cudaGetLastError
                    0.00%  8.5990us         4  2.1490us     100ns  7.0000us  cuDeviceGet
                    0.00%  8.2000us         1  8.2000us  8.2000us  8.2000us  cuDeviceGetPCIBusId
                    0.00%  5.4000us         2  2.7000us  1.1000us  4.3000us  cuInit
                    0.00%  5.3000us         2  2.6500us     800ns  4.5000us  cudaEventCreate
                    0.00%  3.0000us         1  3.0000us  3.0000us  3.0000us  cudaEventElapsedTime
                    0.00%  2.4000us         5     480ns     200ns  1.3000us  cuDeviceGetCount
                    0.00%  2.3000us         1  2.3000us  2.3000us  2.3000us  cudaHostGetDevicePointer
                    0.00%  2.0000us         1  2.0000us  2.0000us  2.0000us  cuModuleGetFunction
                    0.00%  1.6000us         1  1.6000us  1.6000us  1.6000us  cudaDeviceGetStreamPriorityRange
                    0.00%  1.3990us         1  1.3990us  1.3990us  1.3990us  cudaGetDeviceCount
                    0.00%  1.3000us         1  1.3000us  1.3000us  1.3000us  cuDeviceComputeCapability
                    0.00%     700ns         2     350ns     300ns     400ns  cuDeviceGetUuid
                    0.00%     700ns         1     700ns     700ns     700ns  cuDriverGetVersion
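
Regarding the missing mm kernel information: the generated SparseRT kernel is presumably loaded from a compiled module and launched through the driver API (cuModuleLoad and cuModuleGetFunction do appear in the API trace above, yet no cuLaunchKernel is listed and no non-cuDNN kernel shows up under GPU activities). A rough sketch of that launch path follows, with placeholder module name, argument list, and launch dimensions; only the kernel name mm is taken from the issue text.

#include <cuda.h>

// Sketch of the presumed driver-API launch path for the generated kernel.
// File name, kernel arguments, and grid/block dimensions are placeholders;
// error checking is omitted for brevity.
void launch_generated_kernel(CUdeviceptr d_in, CUdeviceptr d_out) {
  CUmodule module;
  CUfunction mm;
  cuModuleLoad(&module, "kernel.cubin");    // seen as cuModuleLoad above
  cuModuleGetFunction(&mm, module, "mm");   // seen as cuModuleGetFunction above

  void* args[] = { &d_in, &d_out };
  // A kernel launched this way would normally be listed by name under
  // "GPU activities" in the nvprof output; in the CUDA-10.0 trace above
  // only the cuDNN kernels and memory operations appear.
  cuLaunchKernel(mm,
                 32, 1, 49,   // grid dims (placeholder; possibly the "32 1 49" printed above)
                 128, 1, 1,   // block dims (placeholder)
                 0,           // dynamic shared memory bytes
                 0,           // default stream
                 args, nullptr);
  cuCtxSynchronize();
}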
marsupialtail commented 2 years ago

I have no idea. Sorry, I didn't see this issue until now.