hangbomb opened this issue 8 years ago
Did you download the models?
This isn't causing your current problem, but you should also use `-print_iter 1` instead of `-print_iter -1`.
Thank you for your advice.
Yes, I had downloaded the models before running neural-style.
I also tried `-print_iter 1`, but I'm still getting this:
```
Couldn't load models/VGG_ILSVRC_19_layers.caffemodel
/home/XXX/torch/install/bin/luajit: neural_style.lua:59: attempt to index a nil value
stack traceback:
	neural_style.lua:59: in function 'main'
	neural_style.lua:437: in main chunk
	[C]: in function 'dofile'
	XXX/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
	[C]: at 0x00406670
```
Is there any possibility that my laptop is the problem?
I also have this problem. I've downloaded all the model files, and I'm also running on a laptop (with `-gpu -1`).
```
Couldn't load models/VGG_ILSVRC_19_layers.caffemodel
/home/spinda/workbench/torch/install/bin/luajit: neural_style.lua:60: attempt to index a nil value
stack traceback:
	neural_style.lua:60: in function 'main'
	neural_style.lua:441: in main chunk
	[C]: in function 'dofile'
	...ench/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
	[C]: at 0x00405ea0
```
It seems that `loadcaffe_wrap.load(params.proto_file, params.model_file, params.backend)` returns `nil` for some reason.
That error message comes from the depths of the loadcaffe library; loadcaffe_wrap calls it on line 17:

```lua
C.loadBinary(handle, prototxt_name, binary_name)
```
The error message comes from the end of the `loadBinary` function in loadcaffe.cpp, inside the loadcaffe library:
```cpp
void loadBinary(void** handle, const char* prototxt_name, const char* binary_name)
{
  caffe::NetParameter* netparam = new caffe::NetParameter();
  ReadProtoFromTextFile(prototxt_name, netparam);
  bool success = ReadProtoFromBinaryFile(binary_name, netparam);
  if(success)
  {
    std::cout << "Successfully loaded " << binary_name << std::endl;
    handle[1] = netparam;
  }
  else
    std::cout << "Couldn't load " << binary_name << std::endl;
}
```
You probably want to check the md5sums of your model files.

(Note that personally I'm using the normalised files. The md5sums are:

```
$ md5sum models/*
ccbbdda59210208be39f8974f5b5765e  models/VGG_ILSVRC_19_layers_deploy.prototxt
6adcfbc93e8f6762e6421515940526f4  models/vgg_normalised.caffemodel
```

)
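For anyone unsure how to run that comparison: `md5sum -c` verifies files against a recorded checksum list and exits nonzero on a mismatch. A minimal self-contained sketch (it uses a throwaway stand-in file, not the real models, so it runs anywhere):

```shell
# Sketch: verifying a download with `md5sum -c`. The sample file below is a
# stand-in created on the spot; in practice you would list the real
# .prototxt/.caffemodel files and their published hashes.
demo=/tmp/md5demo
mkdir -p "$demo"
printf 'pretend this is a model file\n' > "$demo/sample.caffemodel"

# Record the expected checksum in "md5sum -c" format: "<hash>  <path>"
md5sum "$demo/sample.caffemodel" > "$demo/checksums.md5"

# Verify: prints "<path>: OK" and exits 0 when the file matches
md5sum -c "$demo/checksums.md5"

# Simulate a corrupted download: the check now fails with a nonzero exit
printf 'truncated' > "$demo/sample.caffemodel"
md5sum -c "$demo/checksums.md5" || echo "mismatch - re-download the file"
```

Comparing against a checksum file published alongside the models catches exactly the kind of silent truncation discussed in this thread.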
I can verify that these md5sums match the model files I'm using.
OK, I'm on Ubuntu 14.04 too. My loadcaffe git log is:
```
$ git log -n 10 --oneline
63fe14e Merge pull request #24 from chirag64/patch-1
a1781e2 Removed unnecessary comma character
6632354 make inn and cunn deps optional
bb2469c Merge pull request #21 from fmassa/pool_pad_fix
a30c883 Fix pooling when pad != 0
51a2ff8 more verbose print in case of unknown module
4762b38 fixed a bug with ceil average max pooling
5b17397 Update README.md
e39087f Merge pull request #17 from szagoruyko/cpu
7aff9ec cpu support
```
(Looks like I'm using a version from around 8th September.)
I was using `loadcaffe` straight from `luarocks install loadcaffe`. Removing that and switching to `63fe14e` from git unfortunately didn't make the problem go away.
A minimal test case producing the issue is:

```lua
require 'loadcaffe'
loadcaffe.load('models/VGG_ILSVRC_19_layers_deploy.prototxt','models/VGG_ILSVRC_19_layers.caffemodel','nn-cpu')
```
Hmmm, you know what, I get the same error :-P And yet... I can use neural-style... Perhaps there is some gap in my analysis above...
That didn't make any difference, unfortunately. Still getting the same error.
Works ok for me:

```lua
require 'loadcaffe'
loadcaffe.load('models/VGG_ILSVRC_19_layers_deploy.prototxt','models/vgg_normalised.caffemodel','nn')
```
This works for me too:

```lua
local ffi = require 'ffi'
require 'loadcaffe'

local C = loadcaffe.C
local backend = 'nn-cpu'
local prototxt_name = 'models/VGG_ILSVRC_19_layers_deploy.prototxt'
local binary_name = 'models/vgg_normalised.caffemodel'

local handle = ffi.new('void*[1]')
-- loads caffe model in memory and keeps handle to it in ffi
local old_val = handle[1]
C.loadBinary(handle, prototxt_name, binary_name)
```
Ah ha! I was using the default files, rather than the normali(z|s)ed ones. Running neural-style with the custom file options seems to have started things working for me:

```
th neural_style.lua -gpu -1 -proto_file models/VGG_ILSVRC_19_layers_deploy.prototxt -model_file models/vgg_normalised.caffemodel -style_image style.jpg -content_image content.jpg
```
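Since a missing or empty model file only surfaces much later as the nil-index error, a pre-flight check before launching can fail fast instead. A sketch, using a hypothetical `check_files` helper (not part of neural-style) and throwaway stand-in files so it runs anywhere:

```shell
# Sketch: fail fast if a model file is missing or empty, rather than letting
# loadcaffe return nil deep inside neural_style.lua. `check_files` is a
# hypothetical helper, not part of neural-style.
check_files() {
    for f in "$@"; do
        if [ ! -s "$f" ]; then
            echo "missing or empty: $f" >&2
            return 1
        fi
    done
    echo "all files present"
}

# Demonstrate with throwaway files (stand-ins for the real models)
mkdir -p /tmp/nsdemo/models
printf 'x' > /tmp/nsdemo/models/VGG_ILSVRC_19_layers_deploy.prototxt
printf 'x' > /tmp/nsdemo/models/vgg_normalised.caffemodel

check_files /tmp/nsdemo/models/VGG_ILSVRC_19_layers_deploy.prototxt \
            /tmp/nsdemo/models/vgg_normalised.caffemodel \
  && echo "ok to launch neural_style"
```

In practice you would point `check_files` at the real files under `models/` before invoking `th neural_style.lua ...` (note this only catches missing/empty files, not a corrupt download — the md5sum check above handles that).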
Thanks!
Cool :-)
As it turns out, my copy of the model file was corrupted after all (see szagoruyko/loadcaffe#31). Thanks for all the help.
Good catch! Glad you figured it out.
For ease of access, note that the correct md5sum is:

```
b5c644beabd7cf06bdd9065cfd674c97  VGG_ILSVRC_19_layers.caffemodel
```
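A one-off way to test a single file against that published hash (the sample file below is a stand-in so the snippet runs anywhere; point `file` at your real .caffemodel):

```shell
# Sketch: compare the computed md5 of a download against the published value.
# The expected hash is the one quoted above; the sample file is a stand-in,
# so this run reports a mismatch.
expected=b5c644beabd7cf06bdd9065cfd674c97   # VGG_ILSVRC_19_layers.caffemodel
file=/tmp/hashdemo.caffemodel
printf 'not the real model\n' > "$file"

actual=$(md5sum "$file" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "model file OK"
else
    echo "model file corrupt or incomplete - re-download it"
fi
```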
I just had the exact same issue – a corrupt .caffemodel file. I only found this page after I'd already uninstalled/reinstalled Torch and all the extra packages I'd accumulated over time... Google always leads you to the right results :-)
Hi,
I installed neural-style and have been trying to run

```
th neural_style.lua -gpu -1 -print_iter -1
```

on Ubuntu 14.04, but I keep getting the following error. Is there any solution?