zhreshold / mxnet-ssd

MXNet port of SSD: Single Shot MultiBox Object Detector. Reimplementation of https://github.com/weiliu89/caffe/tree/ssd
MIT License
764 stars 337 forks

src/operator/multibox_detection.cu:198: Check failed: (error) == (cudaSuccess) too many resources requested for launch #13

Closed yanghy1966 closed 7 years ago

yanghy1966 commented 7 years ago

Hi!

When I ran the command "python demo.py", it failed with: "src/operator/multibox_detection.cu:198: Check failed: (error) == (cudaSuccess) too many resources requested for launch"

What's wrong with it? Please help me, thank you!

yanghy1966 commented 7 years ago

My GPU is GTX 1080.

matakk commented 7 years ago

Oh! I have the same error. My GPU is a GTX 1070.

CPU mode works for me, though.

zhreshold commented 7 years ago

Seems like a CUDA kernel launch problem. Which version of CUDA are you using?

matakk commented 7 years ago

Ubuntu 16.04, GTX 1070, CUDA 8.0

yanghy1966 commented 7 years ago

Ubuntu 16.04, CUDA 8.0

zhreshold commented 7 years ago

I cannot reproduce the error. Could you guys run deviceQuery from the CUDA toolkit samples and post the result here? @yanghy1966 @matakk

matakk commented 7 years ago

root@u-System-Product-Name:~/nvidia_cuda8_samples/1_Utilities/deviceQuery# ./deviceQuery
./deviceQuery Starting...

 CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 1070"
  CUDA Driver Version / Runtime Version          8.0 / 8.0
  CUDA Capability Major/Minor version number:    6.1
  Total amount of global memory:                 8110 MBytes (8504279040 bytes)
  (15) Multiprocessors, (128) CUDA Cores/MP:     1920 CUDA Cores
  GPU Max Clock rate:                            1759 MHz (1.76 GHz)
  Memory Clock rate:                             4004 Mhz
  Memory Bus Width:                              256-bit
  L2 Cache Size:                                 2097152 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
  Maximum Layered 1D Texture Size, (num) layers  1D=(32768), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(32768, 32768), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z):  (1024, 1024, 64)
  Max dimension size of a grid size (x,y,z):     (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 2 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Domain ID / Bus ID / location ID:   0 / 1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 8.0, CUDA Runtime Version = 8.0, NumDevs = 1, Device0 = GeForce GTX 1070
Result = PASS

zhreshold commented 7 years ago

This is weird:

Total number of registers available per block: 65536
# the ptxas info of the forward kernel
ptxas info    : Used 27 registers, 4 bytes smem, 109 bytes cmem[0], 20 bytes cmem[16]

With mshadow's default of 1024 threads per block, 27 * 1024 = 27648 << 65536, so the available registers should be far more than enough.
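
For anyone who wants to sanity-check this on their own card, here is a small standalone sketch (not part of mxnet-ssd; the kernel below is only a placeholder for the real forward kernel) that compares a kernel's compiled register count against the device's per-block register budget, i.e. the same arithmetic as above:

// register_check.cu -- standalone sketch; build with: nvcc register_check.cu -o register_check
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel; the real forward kernel lives in multibox_detection.cu.
__global__ void DummyKernel(float *out) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  out[i] = static_cast<float>(i) * 0.5f;
}

int main() {
  cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, 0);

  cudaFuncAttributes attr;
  cudaFuncGetAttributes(&attr, DummyKernel);

  const int threads_per_block = 1024;  // mshadow's cuda::kMaxThreadsPerBlock
  const int regs_needed = attr.numRegs * threads_per_block;

  printf("registers per thread   : %d\n", attr.numRegs);
  printf("registers needed/block : %d\n", regs_needed);
  printf("registers avail/block  : %d\n", prop.regsPerBlock);
  printf("this launch %s fit (register-wise)\n",
         regs_needed <= prop.regsPerBlock ? "should" : "will NOT");
  return 0;
}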

Could you try modifying this line: https://github.com/zhreshold/mxnet/blob/ae6b072ca18f24631fb9e81d65f3f36c90c68fa7/src/operator/multibox_detection.cu#L189

// modify this
const int num_threads = cuda::kMaxThreadsPerBlock;
// to this
const int num_threads = cuda::kMaxThreadsPerBlock / 2;  // or an even smaller number like 256

Recompile and see what happens?
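
If a hard-coded divisor feels too fragile, an alternative (only a sketch, not what the operator currently does) is to let the CUDA runtime pick a block size that is guaranteed to launch, via cudaOccupancyMaxPotentialBlockSize; DetectionKernel below is a stand-in name, not the real kernel:

// Sketch: derive the block size from the device/kernel instead of hard-coding it.
#include <cuda_runtime.h>

__global__ void DetectionKernel(float *out, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = 0.0f;
}

void LaunchDetection(float *out, int n) {
  int min_grid_size = 0;
  int block_size = 0;
  // Ask the runtime for a block size that maximizes occupancy for this
  // kernel on the current device and is guaranteed to be launchable.
  cudaOccupancyMaxPotentialBlockSize(&min_grid_size, &block_size,
                                     DetectionKernel, 0, 0);
  const int grid_size = (n + block_size - 1) / block_size;
  DetectionKernel<<<grid_size, block_size>>>(out, n);
}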

zhreshold commented 7 years ago

I've tested on even a low-end graphics card and it works well. Try updating the NVIDIA driver first: an old driver may cause problems since Pascal cards are quite new, and there might be a compatibility issue.

ysh329 commented 7 years ago

I ran into this issue too! Trying the suggestions...

Primus-zhao commented 7 years ago

@zhreshold @yanghy1966 @matakk @ysh329 I guess you set debug to 1 in config.mk? I think this mode requires more resources than normal mode. Two solutions:

  1. If you don't need gdb for debugging, disable the debug option in config.mk (see the sketch after this list).
  2. If debugging is necessary, set num_threads in the .cu file to a smaller value (dividing by 10 works for me). Hope it helps!
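
For reference, a minimal sketch of the config.mk change from option 1; the exact debug flags the Makefile adds for nvcc may differ between MXNet versions, so verify against your own copy:

# in config.mk
# DEBUG = 1 builds device code with debugging flags and no optimization,
# which can inflate per-thread register usage enough to break the launch.
DEBUG = 0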

Primus-zhao commented 7 years ago

By the way, my GPU is a GTX 1080 too.

ysh329 commented 7 years ago

@Primus-zhao You can refer to this related issue: "SSD example CUDA error: too many resources requested for launch" (dmlc/mxnet #5170): https://github.com/dmlc/mxnet/issues/5170

Primus-zhao commented 7 years ago

@ysh329 Thanks! I have checked that. In fact, I just used the original example code from mxnet and it seems to work fine.

liangfu commented 7 years ago

Note that I had exactly the same problem, but after I recompiled OpenCV with CUDA disabled and recompiled MXNet with CMAKE_BUILD_TYPE=Release, everything worked fine again. Hope this helps!