prabindh / darknet

Convolutional Neural Networks
http://pjreddie.com/darknet/

Getting a bad error when testing an image using the C++ wrapper with GPU enabled #36

Closed: vg123 closed this issue 7 years ago

vg123 commented 7 years ago

I am using a p2.xlarge EC2 instance for testing. In the darknet Makefile: GPU=1, CUDNN=1, OPENCV=1, DEBUG=1.

In the arapaho Makefile:

# Makefile to build the test-wrapper for Arapaho
# Undefine GPU, CUDNN if darknet was built without these defined. These 2 flags have to match darknet flags.
# https://github.com/prabindh/darknet  #add  -DGPU -DCUDNN after arapaho.cpp and before -D_DEBUG

arapaho: clean
        g++ test.cpp arapaho.cpp -DGPU -DCUDNN -D_DEBUG -I../src/ -I/usr/local/cuda/include/ -L./ -ldarknet-cpp-shared -L/usr/local/lib  -lopencv_stitching  -lopencv_superres  -lopencv_videostab   -lopencv_calib3d -lopencv_features2d -lopencv_objdetect -lopencv_highgui  -lopencv_photo  -lopencv_video -lopencv_ml -lopencv_imgproc -lopencv_flann   -lopencv_core  -L/usr/local/cuda/lib64 -lcuda -lcudart -lcublas -lcurand -std=c++11 -o arapaho.out

clean:
        rm -rf ./arapaho.out
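As the comment in the Makefile stresses, the -DGPU and -DCUDNN flags passed here have to match the flags darknet itself was built with. One way to make a mismatch impossible is to define them once in a shared fragment and include it from both Makefiles. This is only a sketch assuming a hypothetical common.mk, not the repository's actual layout:

```make
# common.mk (hypothetical shared fragment, included by BOTH the darknet
# Makefile and the arapaho Makefile so the two builds can never disagree)
COMMON_DEFS := -DGPU -DCUDNN

# in each Makefile:
include ../common.mk
CFLAGS   += $(COMMON_DEFS)
CXXFLAGS += $(COMMON_DEFS)
```

With this in place, turning GPU support on or off is a one-line change in common.mk rather than an edit that must be repeated, consistently, in two places.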

Error

ubuntu@ip-10-0-0-226:~/darknet-cpp/darknet/arapaho$ make arapaho 
rm -rf ./arapaho.out
g++ test.cpp arapaho.cpp -DGPU -DCUDNN -D_DEBUG -I../src/ -I/usr/local/cuda/include/ -L./ -ldarknet-cpp-shared -L/usr/local/lib  -lopencv_stitching  -lopencv_superres  -lopencv_videostab   -lopencv_calib3d -lopencv_features2d -lopencv_objdetect -lopencv_highgui  -lopencv_photo  -lopencv_video -lopencv_ml -lopencv_imgproc -lopencv_flann   -lopencv_core  -L/usr/local/cuda/lib64 -lcuda -lcudart -lcublas -lcurand -std=c++11 -o arapaho.out
ubuntu@ip-10-0-0-226:~/darknet-cpp/darknet/arapaho$ ./arapaho.out 
layer     filters    size              input                output
    0 conv     32  3 x 3 / 1   416 x 416 x   3   ->   416 x 416 x  32
    1 max          2 x 2 / 2   416 x 416 x  32   ->   208 x 208 x  32
    2 conv     64  3 x 3 / 1   208 x 208 x  32   ->   208 x 208 x  64
    3 max          2 x 2 / 2   208 x 208 x  64   ->   104 x 104 x  64
    4 conv    128  3 x 3 / 1   104 x 104 x  64   ->   104 x 104 x 128
    5 conv     64  1 x 1 / 1   104 x 104 x 128   ->   104 x 104 x  64
    6 conv    128  3 x 3 / 1   104 x 104 x  64   ->   104 x 104 x 128
    7 max          2 x 2 / 2   104 x 104 x 128   ->    52 x  52 x 128
    8 conv    256  3 x 3 / 1    52 x  52 x 128   ->    52 x  52 x 256
    9 conv    128  1 x 1 / 1    52 x  52 x 256   ->    52 x  52 x 128
   10 conv    256  3 x 3 / 1    52 x  52 x 128   ->    52 x  52 x 256
   11 max          2 x 2 / 2    52 x  52 x 256   ->    26 x  26 x 256
   12 conv    512  3 x 3 / 1    26 x  26 x 256   ->    26 x  26 x 512
   13 conv    256  1 x 1 / 1    26 x  26 x 512   ->    26 x  26 x 256
   14 conv    512  3 x 3 / 1    26 x  26 x 256   ->    26 x  26 x 512
   15 conv    256  1 x 1 / 1    26 x  26 x 512   ->    26 x  26 x 256
   16 conv    512  3 x 3 / 1    26 x  26 x 256   ->    26 x  26 x 512
   17 max          2 x 2 / 2    26 x  26 x 512   ->    13 x  13 x 512
   18 conv   1024  3 x 3 / 1    13 x  13 x 512   ->    13 x  13 x1024
   19 conv    512  1 x 1 / 1    13 x  13 x1024   ->    13 x  13 x 512
   20 conv   1024  3 x 3 / 1    13 x  13 x 512   ->    13 x  13 x1024
   21 conv    512  1 x 1 / 1    13 x  13 x1024   ->    13 x  13 x 512
   22 conv   1024  3 x 3 / 1    13 x  13 x 512   ->    13 x  13 x1024
   23 conv   1024  3 x 3 / 1    13 x  13 x1024   ->    13 x  13 x1024
   24 conv   1024  3 x 3 / 1    13 x  13 x1024   ->    13 x  13 x1024
   25 route  16
   26 reorg              / 2    26 x  26 x 512   ->    13 x  13 x2048
   27 route  26 24
   28 conv   1024  3 x 3 / 1    13 x  13 x3072   ->    13 x  13 x1024
   29 conv    210  1 x 1 / 1    13 x  13 x1024   ->    13 x  13 x 210
   30 detection
Setup: net.n = 31
net.layers[0].batch = 8
Loading weights from input.weights...mj = 0, mn = 1, *(net->seen) = 448000
load_convolutional_weights: l.n*l.c*l.size*l.size = 864
load_convolutional_weights: l.n*l.c*l.size*l.size = 18432
load_convolutional_weights: l.n*l.c*l.size*l.size = 73728
load_convolutional_weights: l.n*l.c*l.size*l.size = 8192
load_convolutional_weights: l.n*l.c*l.size*l.size = 73728
load_convolutional_weights: l.n*l.c*l.size*l.size = 294912
load_convolutional_weights: l.n*l.c*l.size*l.size = 32768
load_convolutional_weights: l.n*l.c*l.size*l.size = 294912
load_convolutional_weights: l.n*l.c*l.size*l.size = 1179648
load_convolutional_weights: l.n*l.c*l.size*l.size = 131072
load_convolutional_weights: l.n*l.c*l.size*l.size = 1179648
load_convolutional_weights: l.n*l.c*l.size*l.size = 131072
load_convolutional_weights: l.n*l.c*l.size*l.size = 1179648
load_convolutional_weights: l.n*l.c*l.size*l.size = 4718592
load_convolutional_weights: l.n*l.c*l.size*l.size = 524288
load_convolutional_weights: l.n*l.c*l.size*l.size = 4718592
load_convolutional_weights: l.n*l.c*l.size*l.size = 524288
load_convolutional_weights: l.n*l.c*l.size*l.size = 4718592
load_convolutional_weights: l.n*l.c*l.size*l.size = 9437184
load_convolutional_weights: l.n*l.c*l.size*l.size = 9437184
load_convolutional_weights: l.n*l.c*l.size*l.size = 28311552
load_convolutional_weights: l.n*l.c*l.size*l.size = 215040
Done!
Setup: layers = -1121619809, -1158304637, -1116303048
Warning: Read classes from cfg (1012195420) > maxClasses (2)
Image expected w,h = [416][416]!
Error allocating boxes/probs, (nil)/(nil) !
Setup failed!
ubuntu@ip-10-0-0-226:~/darknet-cpp/darknet/arapaho$ 

Initially it gave me the warning "libdc1394 error: Failed to initialize libdc1394", but after running sudo ln /dev/null /dev/raw1394 the warning went away.

Please tell me how I can resolve this.

There is no error when I test it on the CPU.

prabindh commented 7 years ago

What was the issue, and what fixed it? Thanks

visha-l commented 7 years ago

The issue is the one I posted in my first comment, and it got fixed by trial and error. I removed all my code and reinstalled it, which still gave the same error. I then ran sudo apt-get install libdc1394-22-dev libdc1394-22 libdc1394-utils and also made some small changes (I don't know exactly what they were), which finally solved my problem. I am sorry, this won't be very helpful.