BVLC / caffe

Caffe: a fast open framework for deep learning.
http://caffe.berkeleyvision.org/

problem in layers configuration for fine-tuning flickr notebook #2802

Closed arasharchor closed 9 years ago

arasharchor commented 9 years ago

Hi,

I started working with CNNs and Caffe a while ago, so I'm a newbie. I ran into a problem and first asked about it in the caffe-users Google group:

https://groups.google.com/forum/#!topic/caffe-users/AYxtz135gS0

I managed to work around it, but I suspect it may be a bug, so I'm filing it here. I'm trying to run the fine-tuning notebook provided in the examples folder.

Running the exact same notebook, I get the error KeyError: 'loss'

on this line of code: train_loss[it] = solver.net.blobs['loss'].data

Inspecting the blob keys with:
solver.net.blobs.keys()

I see the following: ['data', 'label', 'conv1', 'pool1', 'norm1', 'conv2', 'pool2', 'norm2', 'conv3', 'conv4', 'conv5', 'pool5', 'fc6', 'fc7', 'fc8_flickr', '(automatic)']

The (automatic) entry after fc8_flickr is strange, because no such layer appears in either deploy.prototxt or train_val.prototxt.
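As a stopgap, the blob lookup can be made defensive instead of hard-coding 'loss'. This is a hypothetical sketch (find_loss_key is not part of pycaffe); a plain OrderedDict stands in for solver.net.blobs, which pycaffe also exposes as an ordered mapping of blob name to blob:

```python
from collections import OrderedDict

# Stand-in for solver.net.blobs; the keys mirror the list reported above.
blobs = OrderedDict.fromkeys([
    'data', 'label', 'conv1', 'pool1', 'norm1', 'conv2', 'pool2', 'norm2',
    'conv3', 'conv4', 'conv5', 'pool5', 'fc6', 'fc7', 'fc8_flickr',
    '(automatic)',
])

def find_loss_key(blob_keys, preferred='loss'):
    """Return the preferred loss key if present; otherwise fall back to the
    network's last blob, which is where the loss output usually ends up."""
    keys = list(blob_keys)
    return preferred if preferred in keys else keys[-1]

print(find_loss_key(blobs.keys()))  # here: '(automatic)'
```

This only works around the symptom; the real fix is a prototxt whose loss layer produces a properly named top blob.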

If I change 'loss' to '(automatic)', the error goes away, but the results differ from the notebook's. I get: Accuracy for fine-tuning: 0.250000002235, Accuracy for training from scratch: 0.0399999994785, and these iteration outputs:

iter 0, finetune_loss=3.820410, scratch_loss=3.389356
iter 10, finetune_loss=3.713553, scratch_loss=19.500618
iter 20, finetune_loss=3.163069, scratch_loss=2.997965
iter 30, finetune_loss=2.911142, scratch_loss=3.048755
iter 40, finetune_loss=2.649367, scratch_loss=3.131553
iter 50, finetune_loss=2.413907, scratch_loss=3.171020
iter 60, finetune_loss=2.922716, scratch_loss=2.992927
iter 70, finetune_loss=2.501048, scratch_loss=3.010788
iter 80, finetune_loss=1.944367, scratch_loss=3.025702
iter 90, finetune_loss=2.152704, scratch_loss=2.970715
iter 100, finetune_loss=2.071161, scratch_loss=3.039460
iter 110, finetune_loss=1.738475, scratch_loss=2.996681
iter 120, finetune_loss=1.874542, scratch_loss=2.987170
iter 130, finetune_loss=1.809107, scratch_loss=3.016882
iter 140, finetune_loss=2.044318, scratch_loss=2.983100
iter 150, finetune_loss=2.178490, scratch_loss=3.020104
iter 160, finetune_loss=1.958706, scratch_loss=2.972355
iter 170, finetune_loss=1.757214, scratch_loss=2.990436
iter 180, finetune_loss=1.293990, scratch_loss=3.015960
iter 190, finetune_loss=1.825832, scratch_loss=3.031480
done

What is the problem in this case?

Best!

arasharchor commented 9 years ago

The problem is solved.

It was caused by an incompletely downloaded caffe-master. After re-downloading it and replacing train_val.prototxt, the error is gone and there is no (automatic) entry in blobs.keys() anymore.
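That would explain the symptom: as far as I can tell, Caffe names a top blob "(automatic)" when a layer declares no top, so the truncated prototxt had probably lost the loss layer's named top. A sketch of what the final layer of a complete train_val.prototxt looks like, with the bottoms assumed from the blob list above (the file shipped in caffe-master is authoritative):

```protobuf
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8_flickr"
  bottom: "label"
  top: "loss"   # this named top is what makes blobs['loss'] resolvable
}
```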