[Open] Kamil-Pissaro opened this issue 8 years ago
What GPU are you using?
Hi! Well, I use this: "Advanced Micro Devices, Inc. [AMD/ATI] RV670 [Radeon HD 3690/3850]", so the GPU core must be RV670, and it has only 512 MB. That's why I try to use -gpu -1.
512MB is not enough memory; you'll have to stick with CPU mode.
I may be wrong, but when I set the -gpu -1 flag, does that mean I am already running in CPU mode? Thank you for the answer!
Yes, -gpu -1 makes it run in CPU mode.
caezar@caezar-GA-MA770T-UD3:~/neural-style$ th neural_style.lua -gpu -1 -print_iter 1
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
fc6: 1 1 25088 4096
Segmentation fault
same problem
Guys, you need to try installing neural-style on a 64-bit OS. It helped solve this problem for me.
So neural-style does not run on a 32-bit arch?
I tried installing Torch and neural-style on a Jetson TK1 yesterday. Torch installed OK and torch.test() passed, except for the abs() test. Neural-style, however, resulted in a segmentation fault when collectgarbage() was run. I verified this by commenting out the explicit calls to collectgarbage(), one by one, and finally I was able to run 20+ iterations before the fault.
I guess this is due to the 32-bit architecture. I have seen the same fault on a small Ubuntu machine as well as on a Raspberry Pi. On them I ran a small Torch script setting up a VGG16, forwarding an image, reading the output and printing out the labels in clear text. The segfault hit only when exiting the script, again probably when running collectgarbage().
So this is a Torch issue, and there is nothing neural-style can do about it as far as I understand. The Torch developers probably know about it, but they probably don't care about 32-bit architecture. It does exclude using Torch on cheaper embedded hardware, though.
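For context on why 32-bit hits this wall: a 32-bit process can address at most 2^32 bytes of virtual memory, of which only part is available to user space, and LuaJIT additionally keeps its GC-managed objects in the low portion of the address space (a budget commonly quoted in the 1-2 GiB range). The numbers below are a rough back-of-the-envelope sketch, not measurements; only the model size (574,671,192 bytes) comes from the log in this thread, and the 2x working-set factor is an assumption purely for illustration.

```python
# Rough arithmetic on why a ~550 MB model is a tight fit in a
# 32-bit / LuaJIT process. Only model_bytes comes from the thread's
# log; the other figures are illustrative assumptions.

GIB = 1024 ** 3

address_space_32bit = 2 ** 32      # 4 GiB total virtual address space
usable_user_space = 3 * GIB        # typical 3/1 user/kernel split on Linux

model_bytes = 574_671_192          # bytes read, per the libprotobuf warning
# Assume activations plus working buffers need at least as much again
# (a loose guess, for illustration only).
working_set = model_bytes * 2

luajit_gc_budget = 1 * GIB         # low end of the commonly quoted range

print(f"32-bit user space:     {usable_user_space / GIB:.1f} GiB")
print(f"Estimated working set: {working_set / GIB:.2f} GiB")
print("Exceeds a 1 GiB LuaJIT GC budget?", working_set > luajit_gc_budget)
```

Under these assumptions the estimated working set already exceeds a 1 GiB LuaJIT budget, which is consistent with the Lua 5.2 rebuild suggested below in this thread getting further than LuaJIT does.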
So we should open an issue in the Torch repo. I am a pretty poor developer; no money for new hardware, guys. @htoyryla, are you willing to open a new issue? It's cool that you have reproducible code 👍
I don't have the code at hand any more; that was months ago. Yesterday I ran neural-style as it is.
However, I am not very eager to pursue the issue; it is not a major problem for me, and it would require some effort. I anticipate that the Torch people are not so eager to help with this unless someone finds the solution first.
Hmm :( However, as long as this limitation exists, neural-style should update the requirements in README.md to state that a 64-bit CPU architecture is needed.
For those who want to run neural-style on 32-bit machines: (re)compile Torch with Lua 5.2 instead of LuaJIT, as described here: http://torch.ch/docs/getting-started.html
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch
bash install-deps
./clean.sh
export TORCH_LUA_VERSION=LUA52
./install.sh
For me, with LuaJIT it always crashes with a "segmentation fault" error, but with Lua 5.2 it loads smaller models ("nin_imagenet_conv.caffemodel" works) and gives "$ Torch: not enough memory" errors for larger models (2 GB on a virtual machine seems to be not enough for "VGG_ILSVRC_19_layers.caffemodel").
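The "not enough memory" errors line up with the model's actual size. Summing the layer shapes printed in the load log earlier in this thread (float32 weights) almost exactly reproduces the 574,671,192 bytes reported by the libprotobuf warning, which is why the model alone eats a large chunk of a 2 GB VM before any activations are allocated. Note the fc7/fc8 shapes are not shown in the truncated log; the standard VGG-19 sizes (4096x4096 and 4096x1000) are assumed below.

```python
# Weight counts reconstructed from the layer shapes printed when
# neural_style.lua loads VGG_ILSVRC_19_layers.caffemodel.
# (out_channels, in_channels) per conv layer; all kernels are 3x3.

conv = {
    "conv1_1": (64, 3),    "conv1_2": (64, 64),
    "conv2_1": (128, 64),  "conv2_2": (128, 128),
    "conv3_1": (256, 128), "conv3_2": (256, 256),
    "conv3_3": (256, 256), "conv3_4": (256, 256),
    "conv4_1": (512, 256), "conv4_2": (512, 512),
    "conv4_3": (512, 512), "conv4_4": (512, 512),
    "conv5_1": (512, 512), "conv5_2": (512, 512),
    "conv5_3": (512, 512), "conv5_4": (512, 512),
}

params = sum(out_c * in_c * 3 * 3 for out_c, in_c in conv.values())
params += 25088 * 4096   # fc6, from the log
params += 4096 * 4096    # fc7 (assumed standard VGG-19, not in the log)
params += 4096 * 1000    # fc8 (assumed standard VGG-19, not in the log)

weight_bytes = params * 4  # float32
print(f"{params:,} weights ≈ {weight_bytes / 1e6:.1f} MB")
```

This gives about 143.7 million weights, roughly 574.6 MB; the small remaining gap to the reported 574,671,192 bytes is consistent with bias terms and protobuf framing overhead.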
I'm trying to test the program, but it gives the error: Segmentation fault (core dumped). What could be the problem?
elena-hospodin@elenahospodin-System-Product-Name:~/neural-style$ th neural_style.lua -style_image /home/elena-hospodin/neural-style/examples/inputs/picasso_selfport1907.jpg -content_image /home/elena-hospodin/neural-style/examples/inputs/brad_pitt.jpg -gpu -1 -image_size 64 -optimizer adam
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message. If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
Segmentation fault (core dumped)
elena-hospodin@elenahospodin-System-Product-Name:~/neural-style$