layumi / 2016_person_re-ID

TOMM2017 A Discriminatively Learned CNN Embedding for Person Re-identification
https://dl.acm.org/citation.cfm?id=3159171
MIT License

undefined function or variable 'dagnn.Square' #10

sde123 opened this issue 7 years ago

sde123 commented 7 years ago

@layumi Hello, I am running demo_heatmap.m, but I got an error:

undefined function or variable 'dagnn.Square'

I have installed matconvnet_beta23 with MATLAB R2014a. Could you please tell me what is wrong? Thank you.

layumi commented 7 years ago

Hi @sde123, I added some layers to MatConvNet and included them in this repo. In fact, you do not need to install the original MatConvNet; all the necessary files are included here. You can just download the repo and run it. More information can be found in the README.
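
For concreteness, a minimal sketch of that setup, assuming the repo has been cloned locally (the clone path below is a placeholder):

% Start MATLAB and set up the MatConvNet copy bundled in this repo,
% which contains the extra layers such as dagnn.Square.
cd('2016_person_re-ID');        % path to your clone (an assumption)
run('matlab/vl_setupnn.m');     % puts the bundled matlab/ dir on the path
which dagnn.Square              % should now resolve inside this repo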

sde123 commented 7 years ago

@layumi Hello, thank you. But when I run gpu_compile.m I get an error:

/home/dai/code/person_reidentification/5/Untitled Folder/2016_person_re-ID-master/matlab/src/bits/impl/bilinearsampler_gpu.cu(247): warning: variable "backward" was declared but never referenced
          detected during instantiation of "vl::ErrorCode vl::impl::bilinearsampler<vl::VLDT_GPU, type>::forward(vl::Context &, type *, const type *, const type *, size_t, size_t, size_t, size_t, size_t, size_t, size_t) [with type=float]" 
(364): here

/home/dai/code/person_reidentification/5/Untitled Folder/2016_person_re-ID-master/matlab/src/bits/impl/bilinearsampler_gpu.cu(247): warning: variable "backward" was declared but never referenced

Could you please tell me what is wrong? I am on Ubuntu 14.04 with MATLAB R2014a.

layumi commented 7 years ago

I haven't encountered this error before. Could you provide the whole log?

sde123 commented 7 years ago

@layumi Thank you. When I run train_id_net_res_2stream.m, because I only have one GPU, I added opts.gpus = 1 to cnn_train_dag.m, but I got this error:

train: epoch 01:   1/127:Error using  + 
Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If
the problem persists, reset the GPU by calling 'gpuDevice(1)'.

Error in dagnn.Sum/forward (line 15)
        outputs{1} = outputs{1} + inputs{k} ;

Error in dagnn.Layer/forwardAdvanced (line 85)
      outputs = obj.forward(inputs, {net.params(par).value}) ;

Error in dagnn.DagNN/eval (line 91)
  obj.layers(l).block.forwardAdvanced(obj.layers(l)) ;

Error in cnn_train_dag>processEpoch (line 223)
      net.eval(inputs, params.derOutputs, 'holdOn', s < params.numSubBatches) ;

Error in cnn_train_dag (line 91)
    [net, state] = processEpoch(net, state, params, 'train',opts) ;

Error in train_id_net_res_2stream (line 34)
[net,info] = cnn_train_dag(net, imdb, @getBatch,opts) ;

Could you please tell me how to solve it? Thank you.

dinggd commented 7 years ago

Your GPU is out of memory; you can try reducing the batch size, as sketched below.
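
A minimal sketch of that change, assuming the options follow the usual cnn_train_dag conventions used in this repo (the starting batch size value is an assumption):

% In train_id_net_res_2stream.m (or wherever opts is built), shrink
% the batch until the forward pass fits in GPU memory.
opts.batchSize = 16;     % e.g. halve: 32 -> 16 -> 8
opts.gpus = 1;           % single-GPU training
[net, info] = cnn_train_dag(net, imdb, @getBatch, opts);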

layumi commented 7 years ago

Thank you @gddingcs. Setting net.conserveMemory = true; also helps. (I have turned it on in the code.) So @sde123, you can try a smaller batch size first.
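
For reference, a sketch of the two knobs together (the batch size value is an assumption):

% conserveMemory is a dagnn.DagNN property: when true, intermediate
% variables are freed during the forward pass once consumed.
net.conserveMemory = true;   % already enabled in this repo's code
opts.batchSize = 8;          % combine with a smaller batch if needed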