skaae closed this issue 5 years ago.
Thanks, forgot to make that one public. Should be good now.
There's an example of preprocessing (for VGG_S) here, but your preprocess function looks correct to me. Are you having problems?
Adding something like that to the modelzoo files seems like a good idea to me.
I haven't gotten that far yet. I'll try it out, and if it works I'll add a preprocessing function for VGG. Any clue why they use BGR?
The model setup is nice and so far very easy to use :)
Apparently it's because Caffe uses OpenCV which uses BGR. We could swap it in the weights, but I think that could confuse people who looked at the Caffe Model Zoo page.
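In case it helps anyone reading along: the RGB/BGR swap mentioned above is just a reversal of the channel axis. A minimal numpy sketch (the array shapes here are illustrative, not taken from any particular model):

```python
import numpy as np

# Hypothetical example: an RGB image as a (height, width, 3) uint8 array.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0] = 255  # fill the red channel

# Reversing the last axis converts RGB to BGR (the same slice converts back).
bgr = rgb[:, :, ::-1]
```

The same `[..., ::-1]` trick works on a whole batch as long as the channel axis is last.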
I have a follow up question then. Have all of the models in the model zoo here been trained on BGR or just VGG? Do you think this will remain a standard?
I ran into an issue when trying to unpickle VGG19 normalized https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg19_normalized.pkl in python 3 - no problems in python 2. Is that a compatibility problem with pickle itself?
> I ran into an issue when trying to unpickle VGG19 normalized
What issue? What's the output you're getting? Probably best to add a separate ticket, too.
In the MNIST example there is some trickery to make the dataset (pickle file) load properly in Python 3. I guess it's because Python 3 assumes utf-8 encoding unless otherwise instructed: https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py#L34-L45 Maybe the issue you're seeing is related to that.
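A minimal sketch of that pattern, wrapped in a helper so it works under both Python versions (the function name `load_params` is mine, not from the MNIST example; the key point is that the encoding hint goes to `pickle.load`, not to `open`):

```python
import pickle
import sys

def load_params(filename):
    """Load a Python-2-era pickle in either Python version."""
    with open(filename, 'rb') as f:  # binary mode is required
        if sys.version_info[0] >= 3:
            # Python 3 needs an encoding hint to decode Python 2 str objects;
            # 'latin1' maps bytes 0-255 one-to-one, so numpy arrays survive.
            return pickle.load(f, encoding='latin1')
        return pickle.load(f)
```

With a model zoo file you would then do e.g. `load_params('vgg19_normalized.pkl')['param values']`.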
Here were my attempts, before I decided to use numpy's save and load in order to get the data into python 3.
values = pickle.load(open('vgg19_normalized.pkl'))['param values']
output:
UnicodeDecodeError Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 values = pickle.load(open('vgg19_normalized.pkl'))['param values']
/home/python3/python3_install/lib/python3.4/codecs.py in decode(self, input, final)
317 # decode input (taking the buffer into account)
318 data = self.buffer + input
--> 319 (result, consumed) = self._buffer_decode(data, self.errors, final)
320 # keep undecoded input until the next call
321 self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
using the mnist trickery:
values = pickle.load(open('vgg19_normalized.pkl', encoding='latin-1'))['param values']
output:
TypeError Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 values = pickle.load(open('vgg19_normalized.pkl', encoding='latin-1'))['param values']
TypeError: 'str' does not support the buffer interface
Reading as bytes:
values = pickle.load(open('vgg19_normalized.pkl', 'rb'))['param values']
output:
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-10> in <module>()
----> 1 values = pickle.load(open('vgg19_normalized.pkl', 'rb'))['param values']
UnicodeDecodeError: 'ascii' codec can't decode byte 0xbc in position 1: ordinal not in range(128)
Am I doing something wrong?
> Am I doing something wrong?
Yes, you didn't correctly copy the MNIST trickery: the `encoding` argument belongs to `pickle.load`, not to `open`, and the file must be opened in binary mode. The full and truthful MNIST trickery would be:

```python
with open('vgg19_normalized.pkl', 'rb') as f:
    values = pickle.load(f, encoding='latin-1')['param values']
```

I.e., pass `'rb'` to `open`, and `encoding='latin-1'` to `pickle.load`.
In the discussion of #21, we've pondered converting the models to numpy's `.npz` format. That would avoid the compatibility and security issues around pickles (and maybe even reduce download traffic when saved with `np.savez_compressed()`).
Could/should we provide a layer for converting from RGB (bhwc) to BGR (bchw), similar to
https://github.com/nicholas-leonard/dpnn/blob/master/Convert.lua?
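For reference, here is a plain-numpy sketch of what such a layer would compute (the function name is hypothetical; a real Lasagne layer would do the equivalent with Theano's `dimshuffle` on symbolic tensors):

```python
import numpy as np

def rgb_bhwc_to_bgr_bchw(x):
    """Convert a batch from RGB, shape (batch, height, width, channel),
    to BGR, shape (batch, channel, height, width)."""
    # ::-1 on the last axis flips RGB -> BGR; transpose reorders the axes.
    return x[..., ::-1].transpose(0, 3, 1, 2)
```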
@ebenolson any ideas on how could we use a pre-trained ImageNet model to fine-tune a dataset of images with 4-channels? (RGB + NIR)
Would using PCA to transform the 4 channels into 3 work? Are there any other ideas?
> Would using PCA to transform the 4 channels into 3 work?

I doubt it. The PCA components would probably be too different from RGB.
> Are there any other ideas?
You could try extending the first-layer filter tensor to have 4 input channels, initialize the additional filters randomly with a carefully-chosen scale and then slowly train. If you fear this would destroy existing weights or not give the extra information enough consideration, you could try to add a separate branch and fuse the additional infrared information later. Good luck!
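The first suggestion, extending the filter tensor, could be sketched like this. It assumes the Theano/Lasagne filter layout `(num_filters, in_channels, height, width)`; the helper name and the choice of matching the pretrained filters' standard deviation are my own, not an established recipe:

```python
import numpy as np

def extend_input_channels(W, extra=1, scale=None):
    """Append randomly initialized input channels to a conv filter tensor.

    W is assumed to have shape (num_filters, in_channels, height, width).
    """
    if scale is None:
        # Match the scale of the pretrained filters so the new channel
        # neither dominates nor vanishes when fine-tuning starts.
        scale = W.std()
    new = np.random.normal(0.0, scale,
                           (W.shape[0], extra) + W.shape[2:]).astype(W.dtype)
    return np.concatenate([W, new], axis=1)
```

The pretrained RGB weights stay untouched in the first three channel slices, so only the new NIR channel starts from scratch.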
Hi! I'm trying the ImageNet VGG16 example in Recipes, and I get the same error as henridwyer
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-23-ac5b1f011bed> in <module>()
1 import pickle
2
----> 3 model = pickle.load(open('vgg_cnn_s.pkl'), encoding='latin-1')
4 CLASSES = model['synset words']
5 MEAN_IMAGE = model['mean image']
/usr/lib/python3.6/codecs.py in decode(self, input, final)
319 # decode input (taking the buffer into account)
320 data = self.buffer + input
--> 321 (result, consumed) = self._buffer_decode(data, self.errors, final)
322 # keep undecoded input until the next call
323 self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
This happens both with and without the `encoding='latin-1'` parameter to `pickle.load()`. Am I doing something wrong? Can you also please post the correct md5sum of the pickle file, so that I can rule out any problems with the download? Thanks!
For Python 3, you need to use:
model = pickle.load(open('vgg_cnn_s.pkl', 'rb'), encoding='latin-1')
Just in case: `md5sum vgg_cnn_s.pkl` = ed689def9ce11256d2faf3fa62371750
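If you'd rather check the digest from Python than shell out to `md5sum`, a small stdlib-only helper does the same thing (the function name is mine; it reads in chunks so large model files don't need to fit in memory at once):

```python
import hashlib

def md5sum(filename, chunk_size=1 << 20):
    """Compute the md5 hex digest of a file, reading it in 1 MiB chunks."""
    h = hashlib.md5()
    with open(filename, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()
```

E.g. compare `md5sum('vgg_cnn_s.pkl')` against the digest posted above.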
Thank you, Eben! That solved it.
Regarding https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg19.pkl: sorry to ask again, but how do I log in and download the vgg16.pkl and vgg19.pkl models?
They seem to be gone, I also need them to run a third-party network. Does anybody have a copy? EDIT: found them at Academic Torrents: http://academictorrents.com/details/854efbd8e2c085e8e0e5fb2d254ed0e21da6008e
Hi! I am facing an error with this "wget https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg19.pkl". The link isn't working anymore.
> The link isn't working anymore.
Yes, unfortunately it was getting too costly, see #115. The post before links to academictorrents.com, does that work for you? We're happy to have them hosted somewhere else and update the URLs, if you know a good solution. (Zenodo?)
Hi! I downloaded vgg19_normalized.pkl from an alternative link, but I am getting this error: "mismatch: got 32 values to set 38 parameters"
@priyanshuvarsh Did you find a fix for your problem? I am now getting the same error as you. Can you please tell me how to fix it?
@davidtellez - Using that Academic Torrents link gives the error pointed out by @priyanshuvarsh. Did you get any such error? If possible, can you share your vgg19 pickle file? I need it urgently for my project.
I'm trying to use the VGG-16 net with pretrained weights.
The link https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg16.pkl does not seem to be public?
@ebenolson : I can download the file if I log in with the information you gave me.
I'm not sure how I should preprocess my data to make the model work. I looked at the preprocessing description in the repo.
I guess I should do something like (not tested):
Maybe we should add preprocess functions to the modelzoo?
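For anyone landing here later, a minimal sketch of the kind of preprocessing discussed earlier in this thread (RGB to BGR, mean subtraction, channels first). The mean values below are the commonly cited BGR means for the VGG ImageNet models; check them against the `'mean value'` entry stored in the pickle you actually use, and note the image is assumed to be already resized/cropped to 224x224:

```python
import numpy as np

# Commonly cited BGR mean pixel values for the VGG ImageNet models
# (verify against the mean stored in your model pickle).
MEAN_BGR = np.array([103.939, 116.779, 123.68], dtype='float32')

def preprocess(im):
    """Prepare a (224, 224, 3) RGB uint8 image for a VGG-style network:
    RGB -> BGR, subtract the per-channel mean, move channels first,
    and add a batch axis."""
    im = im[:, :, ::-1].astype('float32')     # RGB -> BGR
    im -= MEAN_BGR                            # per-channel mean subtraction
    return im.transpose(2, 0, 1)[np.newaxis]  # -> (1, 3, 224, 224)
```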