henuwpf / deep-learning-faces

Automatically exported from code.google.com/p/deep-learning-faces
GNU General Public License v3.0

Assertion Failed: trns_high not always >= trns_low #4

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
I have successfully compiled it on Ubuntu 14.04 (64-bit) with CUDA 6.5.
When I run "script_face_exp.m", I get the following errors:

--------------------------------------------------------------------
~~FERMI~~
 {Input} (-1)-->layer{0} Convdata:   nFilters:1 nIJ_grid:48 48, dropout:0.000 
{Hidn} (0)-->layer{1}  ImageMirror:    nVisChannels:1 nVisIJ:[48 48],
Error using mexcuConvNNoo
Assertion Failed: trns_high not always >= trns_low

Error in myclassify_conv_nn_softmax (line 174)
        [~, model.theta] = mexcuConvNNoo( single(ww), params, model.callback_name);

Error in fe_cv_48 (line 143)
    [model] = myclassify_conv_nn_softmax(model);

Error in script_face_exp (line 72)
[cv_average,cv_models]=fe_cv_48(hp.nSPLIT,hp.randseeds,hp.normalseeds,hp);

----------------------------------------------------------------
My GPU info is below (output of ./deviceQuery):
Detected 1 CUDA Capable device(s)

Device 0: "GeForce GTX 645"
  CUDA Driver Version / Runtime Version          6.5 / 6.5
  CUDA Capability Major/Minor version number:    3.0
  Total amount of global memory:                 1024 MBytes (1073414144 bytes)
  ( 3) Multiprocessors, (192) CUDA Cores/MP:     576 CUDA Cores
  GPU Clock rate:                                824 MHz (0.82 GHz)
  Memory Clock rate:                             2000 Mhz
  Memory Bus Width:                              128-bit
  L2 Cache Size:                                 262144 bytes
  Maximum Texture Dimension Size (x,y,z)         1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
  Maximum Layered 1D Texture Size, (num) layers  1D=(16384), 2048 layers
  Maximum Layered 2D Texture Size, (num) layers  2D=(16384, 16384), 2048 layers
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       49152 bytes
  Total number of registers available per block: 65536
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  2048
  Maximum number of threads per block:           1024
  Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
  Max dimension size of a grid size    (x,y,z): (2147483647, 65535, 65535)
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             512 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     Yes
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      Yes
  Device PCI Bus ID / PCI location ID:           1 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GeForce GTX 645
Result = PASS
----------------------------------------------------------------

Thanks!

best,
-Lei

Original issue reported on code.google.com by qule...@gmail.com on 15 Dec 2014 at 1:08

GoogleCodeExporter commented 8 years ago
Has anyone solved this problem yet?

Original comment by johnnyco...@gmail.com on 17 Apr 2015 at 1:13

GoogleCodeExporter commented 8 years ago
I also have the same problem. Has anyone solved it?

In addition, I have read the code around this issue. trns_low and trns_high are
loaded onto the GPU, and both parameters come from the 3rd layer, named
'convxyrs'.

The network configuration comes from the file 'net_config_basic42.m'. With the
default setup, trns_low is single([21 21 -pi/4 0.8]) and trns_high is
single([27 27 pi/4 1.2]).

Then, in 'cu_jitters.h' at lines 168 and 169, trns_low and trns_high are compared
element by element; the check passes if each element of trns_high is greater than
or equal to the corresponding element of trns_low. Obviously this condition is
met here, yet the error is still raised. I have no idea why.
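
For anyone debugging this, below is a minimal standalone sketch of what the failing check appears to do: build an element-wise trns_high >= trns_low mask on the GPU and verify that all 4 entries pass, matching the clASSERT(Sum2DInplace(transform_range) == 4, ...) quoted in a later comment. The kernel and variable names here are guesses, not the actual cu_jitters.h code; only the range values come from net_config_basic42.m. Compile it with nvcc and run it on the same GPU; if it prints anything other than 4, the device-side comparison or read-back is suspect rather than the configuration values.

// Standalone repro sketch: element-wise trns_high >= trns_low check on the GPU.
// Kernel and variable names are hypothetical; only the values come from
// net_config_basic42.m. Build with: nvcc -o range_check range_check.cu
#include <cstdio>
#include <cuda_runtime.h>

__global__ void ge_mask(const float* hi, const float* lo, float* mask, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) mask[i] = (hi[i] >= lo[i]) ? 1.0f : 0.0f;
}

int main()
{
    const int n = 4;
    const float pi = 3.14159265f;
    float h_lo[n] = {21.0f, 21.0f, -pi / 4.0f, 0.8f};   // trns_low defaults
    float h_hi[n] = {27.0f, 27.0f,  pi / 4.0f, 1.2f};   // trns_high defaults

    float *d_lo, *d_hi, *d_mask;
    cudaMalloc((void**)&d_lo,   n * sizeof(float));
    cudaMalloc((void**)&d_hi,   n * sizeof(float));
    cudaMalloc((void**)&d_mask, n * sizeof(float));
    cudaMemcpy(d_lo, h_lo, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_hi, h_hi, n * sizeof(float), cudaMemcpyHostToDevice);

    ge_mask<<<1, n>>>(d_hi, d_lo, d_mask, n);
    cudaDeviceSynchronize();

    float h_mask[n];
    cudaMemcpy(h_mask, d_mask, n * sizeof(float), cudaMemcpyDeviceToHost);

    // The library's assertion expects this sum to be 4 (all elements pass).
    float sum = 0.0f;
    for (int i = 0; i < n; ++i) sum += h_mask[i];
    printf("GPU mask sum = %.0f (expected 4)\n", sum);

    cudaFree(d_lo);
    cudaFree(d_hi);
    cudaFree(d_mask);
    return 0;
}

On these default values the sum should trivially be 4, so this is only meant to check whether the same comparison pattern behaves correctly on this particular GPU / CUDA 6.5 setup.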

Original comment by sundongh...@gmail.com on 8 Jul 2015 at 3:15

GoogleCodeExporter commented 8 years ago
[deleted comment]
GoogleCodeExporter commented 8 years ago
I just commented out line 168 in ./cuda_ut/modules/conv/cu_jitters.h, and then it works OK.

//clASSERT(Sum2DInplace(transform_range) == 4, "trns_high not always >= trns_low");

Original comment by caijinzh...@gmail.com on 10 Aug 2015 at 5:05