jcjohnson / neural-style

Torch implementation of neural style algorithm
MIT License
18.31k stars 2.7k forks

Problem. Not working. #249

Open codesoon opened 8 years ago

codesoon commented 8 years ago

DigitalOcean: Ubuntu Server 16.04 x64, 512 MB/1 CPU 20 GB SSD Disk.

Using the installation guide: https://github.com/jcjohnson/neural-style/blob/master/INSTALL.md

```
th neural_style.lua -gpu -1 -print_iter 1
```

```
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
/home/neural/torch/install/bin/luajit: C++ exception
```

```
th neural_style.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg
```

```
/home/neural/torch/install/bin/luajit: /home/neural/torch/install/share/lua/5.1/trepl/init.lua:384: module 'cutorch' not found:No LuaRocks module found for cutorch
	no field package.preload['cutorch']
	no file '/home/neural/.luarocks/share/lua/5.1/cutorch.lua'
	no file '/home/neural/.luarocks/share/lua/5.1/cutorch/init.lua'
	no file '/home/neural/torch/install/share/lua/5.1/cutorch.lua'
	no file '/home/neural/torch/install/share/lua/5.1/cutorch/init.lua'
	no file './cutorch.lua'
	no file '/home/neural/torch/install/share/luajit-2.1.0-beta1/cutorch.lua'
	no file '/usr/local/share/lua/5.1/cutorch.lua'
	no file '/usr/local/share/lua/5.1/cutorch/init.lua'
	no file '/home/neural/.luarocks/lib/lua/5.1/cutorch.so'
	no file '/home/neural/torch/install/lib/lua/5.1/cutorch.so'
	no file '/home/neural/torch/install/lib/cutorch.so'
	no file './cutorch.so'
	no file '/usr/local/lib/lua/5.1/cutorch.so'
	no file '/usr/local/lib/lua/5.1/loadall.so'
stack traceback:
	[C]: in function 'error'
	/home/neural/torch/install/share/lua/5.1/trepl/init.lua:384: in function 'require'
	neural_style.lua:51: in function 'main'
	neural_style.lua:515: in main chunk
	[C]: in function 'dofile'
	...ural/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
	[C]: at 0x00406670
```

```
th neural_style.lua -gpu -1 -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg
```

```
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
/home/neural/torch/install/bin/luajit: C++ exception
```

Runescaped commented 8 years ago

I've never seen that C++ exception, but the answer for the second command can be found in the FAQ, namely:

"Problem: Running without a GPU gives an error message complaining about cutorch not found

Solution: Pass the flag -gpu -1 when running in CPU-only mode"

EDIT: Looks like someone had a very similar issue...

erickrawczyk commented 8 years ago

The C++ exception happens when you run out of memory. You'll need way more than a 512 MB droplet. You can also pass the `-optimizer adam` flag to reduce memory usage.
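For what it's worth, another standard way to cut memory is shrinking the output with `-image_size` (a documented `neural_style.lua` flag, default 512); the paths below just reuse the example images from the commands above, so adjust to taste:

```
th neural_style.lua -gpu -1 -optimizer adam -image_size 256 \
  -style_image examples/inputs/picasso_selfport1907.jpg \
  -content_image examples/inputs/brad_pitt.jpg
```

Memory scales roughly with the number of pixels, so halving `-image_size` cuts activation memory to about a quarter.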

codesoon commented 8 years ago

OK. I updated the droplet to 1024 MB RAM (+ 512 MB swap).

```
th neural_style.lua -gpu -1 -model_file models/vgg_normalised.caffemodel -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg
```

```
Successfully loaded models/vgg_normalised.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
Setting up style layer 2 : relu1_1
Setting up style layer 7 : relu2_1
Setting up style layer 12 : relu3_1
Killed
```

```
th neural_style.lua -gpu -1 -model_file models/vgg_normalised.caffemodel -optimizer adam -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg
```

```
Successfully loaded models/vgg_normalised.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
Setting up style layer 2 : relu1_1
Setting up style layer 7 : relu2_1
Setting up style layer 12 : relu3_1
Setting up style layer 21 : relu4_1
Killed
```

What more can be done?
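As a rough sanity check (my own back-of-envelope math, not from the repo), the conv layer shapes printed in the log above are enough to estimate the model's baseline footprint, which shows why small droplets get `Killed`:

```python
# Sum the convolution weight counts that neural_style.lua prints
# ("out_channels in_channels kH kW", every kernel is 3x3).
# This counts ONLY conv weights: activations, gradients, and the
# optimizer's buffers add much more on top, and the .caffemodel
# file itself was 574671192 bytes when read.
conv_shapes = [
    (64, 3), (64, 64),                                 # conv1_x
    (128, 64), (128, 128),                             # conv2_x
    (256, 128), (256, 256), (256, 256), (256, 256),    # conv3_x
    (512, 256), (512, 512), (512, 512), (512, 512),    # conv4_x
    (512, 512), (512, 512), (512, 512), (512, 512),    # conv5_x
]

total_weights = sum(out_c * in_c * 3 * 3 for out_c, in_c in conv_shapes)
megabytes = total_weights * 4 / 1024 / 1024  # float32 = 4 bytes each

print(total_weights)        # 20018880  (~20M conv weights)
print(round(megabytes, 1))  # 76.4      (MB just for the conv weights)
```

So the weights alone eat a sizable chunk of a 512 MB or even 1 GB droplet before any actual optimization work starts.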

erickrawczyk commented 8 years ago

A gig still might not be enough, but honestly I'm not sure. I'm working on a deploy script (neural-style-droplet), and I usually run it on a 16 GB droplet.

codesoon commented 8 years ago

I increased the swap to 4 gigabytes. The script works now, but it runs for a very long time; I didn't wait for it to finish.
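For anyone following along, a swap file like the one described above can be created on Ubuntu with the usual recipe (requires root and enough free disk; the 4G size matches what was used here, but is otherwise illustrative):

```
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
```

Note that running the optimizer out of swap is far slower than RAM, which is consistent with the long runtimes reported below.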

erickrawczyk commented 8 years ago

Yeah, the lack of GPUs is the biggest downside to using DO droplets. Even with 16 GB, it takes several minutes to complete.

I'm considering trying AWS GPU Instances to speed up processing.

codesoon commented 8 years ago

OK, thanks. I'll go with Amazon.