ProGamerGov opened this issue 8 years ago
I had this same error; it's trying to use more GPU memory than you have available. You should be able to fix this by either lowering the size of the image, or potentially by installing and using cudnn, which seems to be a slightly more memory-efficient backend.
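For example (the size value here is just an illustration):

th neural_style.lua -gpu 0 -image_size 256
th neural_style.lua -gpu 0 -backend cudnn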
I have signed up for Nvidia's cuDNN program; now I have to wait and hope my application is accepted. My hope was to be able to use my GPU to process images at higher quality than my CPU, which can handle sizes up to around 960.
I see. Yeah, GPU will process much faster but you're limited by your GPU's memory, which is likely going to be significantly less than your system memory.
As far as Nvidia's developer program, I was accepted in less than 12 hours, so you should be fine.
I am using a GTX 660. I'm not sure whether it can create higher-quality images faster than my i7 CPU can.
My 740M is significantly faster than my quad core i7, for what that's worth, but those are both mobile chipsets.
Can I install cuDNN via the link provided in this guide? http://christopher5106.github.io/big/data/2015/07/16/deep-learning-install-caffe-cudnn-cuda-for-digits-python-on-ubuntu-14-04.html
https://s3-eu-west-1.amazonaws.com/christopherbourez/public/cudnn-6.5-linux-x64-v2.tgz
Or do I need to wait for my developer program application to be accepted?
I don't know, I haven't tried that link. I will say I was using cudnn 7.0, but not sure how important that is or isn't.
The INSTALL.md file in this GitHub project talks about cuDNN 6.5: https://github.com/jcjohnson/neural-style/blob/master/INSTALL.md
Can I update to cuDNN 7.0 from cuDNN 6.5 once I have received access to cuDNN 7.0?
Ah yeah, you're totally right, I missed that. I'm not sure if you can upgrade or not; my bet would be that you can just replace those files in /usr/local/cuda and then uninstall/reinstall through luarocks. But I don't know for sure.
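Something along these lines, as a sketch (the archive name below is just an example, and the layout inside the tarball varies between cuDNN releases, so adjust the paths accordingly):

tar xzvf cudnn-7.0-linux-x64-v4.0-prod.tgz   # example archive name
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp cuda/lib64/libcudnn* /usr/local/cuda/lib64/
luarocks remove cudnn    # drop the old Torch bindings
luarocks install cudnn   # rebuild them against the new library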
Doing some research on installing and upgrading cuDNN:
Install Cuda 7.0 and Cudnn 6.5, in addition to your existing Cuda 7.5 and Cudnn 7.0 libraries. They are the officially supported versions. And it is okay to have multiple Cuda toolkits with different paths on your system. This is the recommended way.
https://github.com/tensorflow/tensorflow/issues/54
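If you do keep multiple toolkits side by side, switching between them usually comes down to environment variables (the paths below assume the conventional /usr/local/cuda-X.Y install locations):

export PATH=/usr/local/cuda-7.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH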
Though you're running the latest 7.0 cuDNN without any issues? Since I have had to set up my dual-boot Windows and Ubuntu far too many times already due to errors, I don't think I should risk messing anything up. On the flip side, I am impatient, and it would only take a few hours to fix a screw-up.
I downloaded the most recent version of cuDNN from Nvidia's website; I believe I have both CUDA and cuDNN 7.0.
@jdrusso How did you install cuDNN v4? Over in https://github.com/jcjohnson/neural-style/issues/154, I am receiving errors about it not being able to find the files:
user@user-XPS-8500:~/neural-style$ th neural_style.lua -gpu 0 -backend cudnn
nil
/home/user/torch/install/bin/luajit: /home/user/torch/install/share/lua/5.1/trepl/init.lua:384: /home/user/torch/install/share/lua/5.1/trepl/init.lua:384: /home/user/torch/install/share/lua/5.1/cudnn/ffi.lua:1279: 'libcudnn (R4) not found in library path.
Please install CuDNN from https://developer.nvidia.com/cuDNN
Then make sure files named as libcudnn.so.4 or libcudnn.4.dylib are placed in your library load path (for example /usr/local/lib , or manually add a path to LD_LIBRARY_PATH)
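For reference, the workaround the error message suggests looks like this, assuming libcudnn.so.4 was copied to /usr/local/cuda/lib64:

export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
th neural_style.lua -gpu 0 -backend cudnn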
Hello,
Whatever setup I use, I have the same problem (on an iMac: 3.4GHz Intel Core i5 / 16GB DDR3 / NVIDIA GeForce GTX 775M 2GB):

th neural_style.lua -gpu 0 -backend cudnn
th neural_style.lua -style_image style.jpg -content_image portrait.jpg
th neural_style.lua -style_image style.jpg -content_image portrait.jpg -gpu 0
th neural_style.lua -style_image style.jpg -content_image portrait.jpg -image_size 256
th neural_style.lua -style_image style.jpg -content_image portrait.jpg -gpu 0 -backend cudnn
th neural_style.lua -style_image style.jpg -content_image portrait.jpg -gpu 0 -backend cudnn -image_size 128
th neural_style.lua -style_image style.jpg -content_image portrait.jpg -gpu 0 -backend cudnn -image_size 64
th neural_style.lua -style_image style.jpg -content_image portrait.jpg -image_size 64
th neural_style.lua -style_image style.jpg -content_image portrait.jpg -backend cudnn -image_size 64
th neural_style.lua -print_iter 1 -optimizer adam -normalize_gradients -learning_rate 1e1
th neural_style.lua -gpu 0 -print_iter 1
th neural_style.lua -gpu 0 -print_iter 1 -backend cudnn
All those commands lead to:
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:537] Reading dangerously large protocol message. If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
fc6: 1 1 25088 4096
fc7: 1 1 4096 4096
fc8: 1 1 4096 1000
THCudaCheck FAIL file=/tmp/luarocks_cutorch-scm-1-4411/cutorch/lib/THC/generic/THCStorage.cu line=40 error=2 : out of memory
/Users/olivier/torch/install/bin/luajit: /Users/olivier/torch/install/share/lua/5.1/nn/utils.lua:11: cuda runtime error (2) : out of memory at /tmp/luarocks_cutorch-scm-1-4411/cutorch/lib/THC/generic/THCStorage.cu:40
stack traceback:
  [C]: in function 'resize'
  /Users/olivier/torch/install/share/lua/5.1/nn/utils.lua:11: in function 'torch_Storage_type'
  /Users/olivier/torch/install/share/lua/5.1/nn/utils.lua:57: in function 'recursiveType'
  /Users/olivier/torch/install/share/lua/5.1/nn/Module.lua:123: in function 'type'
  /Users/olivier/torch/install/share/lua/5.1/nn/utils.lua:45: in function 'recursiveType'
  /Users/olivier/torch/install/share/lua/5.1/nn/utils.lua:41: in function 'recursiveType'
  /Users/olivier/torch/install/share/lua/5.1/nn/Module.lua:123: in function 'cuda'
  ...vier/torch/install/share/lua/5.1/loadcaffe/loadcaffe.lua:46: in function 'load'
  neural_style.lua:73: in function 'main'
  neural_style.lua:500: in main chunk
  [C]: in function 'dofile'
  ...vier/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
  [C]: at 0x010820ebd0
I am quite surprised that even with an image size of 64 it leads to an out-of-memory error.
@ANteiKA I'm getting the exact same error. Did you find any workarounds?
Still the same error... I wonder if this can run on such an iMac :( So I am currently trying to use a cloud GPU (Rescale), but I am still at the very first steps (learning their cloud solution...).
Same error here with an Nvidia GTX 960.
Today I tested it on a K20m, running with cudnn, and I get out of memory as soon as I set a size higher than the default one.
In case you have not seen this (posted yesterday under another issue), here are my results showing how much memory is needed for various image sizes and configurations using the CPU:
I made quick tests on smaller image sizes (CPU only), with the following results. Note that I did not wait through all the iterations, only long enough to see that the memory increase settled down.

Using VGG19, default images, default layers (image size: resident memory):

320: 1.7G (peaks at 2.8G during startup)
480: 3.3G
640: 5.7G
720: 7.1G
800: 8.8G
960: 12G

Testing at 800px, changing style scale to 2: 8.8G => 20G
Testing at 800px, dropping all but one style layer: 8.8G => 7.6G

Testing with VGG16 (yes, sixteen) with the FC layers removed, using the default layers:

800px: 7.9G (from 8.8G with VGG19)

Testing with nin-imagenet-conv, 1 content layer and 3 style layers:

800px: 1.8G
640px: 1.2G

I don't know how these CPU-based results correlate with GPU, but from this it appears that running VGG19 with 2G of RAM would fail because of the memory peak during initialization. Running with nin-imagenet should be possible; there seems to be no memory peak, but rather a steady increase up to the values given. But nin-imagenet-conv is no direct replacement: it produces different-looking results and requires tweaking the settings.

These results all use L-BFGS. I found (to my surprise) that the effect of ADAM on memory usage was quite small. It seems to me that the smaller networks with L-BFGS manage lower memory usage better than VGG19 with ADAM.

Hannu
http://liipetti.net/erratic

As far as I could see, the memory reduction was not so much due to the removal of the FC layers but because VGG16 has fewer convolutional layers.
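For reference, the style-scale and single-style-layer tests above correspond to invocations along these lines (CPU mode; relu4_1 as the single remaining layer is just an example):

th neural_style.lua -gpu -1 -image_size 800 -style_scale 2
th neural_style.lua -gpu -1 -image_size 800 -style_layers relu4_1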
This issue is different from the one where the image size matters. I've tried setting image_size to very low values like 10 and got the same error. I can't even execute the 'test' command from the manual: th neural_style.lua -gpu 0 -print_iter 1. In all cases the error is the same as in @ANteiKA's comment. Sorry for my bad English :(
If you look at my memory test results, you'll find that there is no point in going to very low image sizes, because there is a memory usage peak of around 3G at the beginning anyway, probably due to the initialization of the neural network.
So you will need at least 3G of memory to run even at the lowest image sizes.
Have you tried running on the CPU? Alternatively, you could try using nin-imagenet-conv, which runs with a much smaller memory footprint.
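A nin-imagenet-conv run also needs the model files and its own layer names; a minimal sketch (the layer picks are just one possible choice):

th neural_style.lua -gpu -1 -model_file models/nin_imagenet_conv.caffemodel -proto_file models/train_val.prototxt -content_layers relu7 -style_layers relu1,relu3,relu5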
Hannu
Here, the command
th neural_style.lua -gpu -1 -print_iter 1 -image_size 10
gives the error (which actually makes sense, as the image would be smaller than the kernel window used to slide over it):
In 34 module of nn.Sequential: /home/hannu/torch/install/share/lua/5.1/nn/THNN.lua:109: bad argument #2 to 'v' (input image smaller than kernel size at /home/hannu/torch/extra/nn/lib/THNN/generic/SpatialMaxPooling.c:106)
Using image_size 50 works:
th neural_style.lua -gpu -1 -print_iter 1 -image_size 50
and shows the following memory usage in top (with CPU; the sixth column is the resident memory):
9895 hannu 20 0 3865080 2,091g 9708 R 56,7 8,9 0:01.71 luajit
9895 hannu 20 0 861048 350856 11220 R 706,6 1,4 0:22.94 luajit
9895 hannu 20 0 861048 350856 11220 R 796,1 1,4 0:46.88 luajit
9895 hannu 20 0 861048 351148 11464 R 797,4 1,4 1:10.84 luajit
9895 hannu 20 0 861048 351148 11464 R 793,3 1,4 1:34.67 luajit
i.e. there is an initial memory peak a little above 2G even at the smallest possible image sizes. So it seems that using neural-style with VGG19 and only 2GB of memory is hopeless. The alternatives are running on the CPU (assuming there is more memory there) or switching to nin-imagenet-conv instead of VGG19.
Hannu
For anyone struggling with problems caused by memory limitations, I have made a short guide on how to start using nin-imagenet-conv. See here: http://liipetti.net/erratic/2016/03/21/using-nin-imagenet-conv-in-neural-style/
Thanks a lot for your guide! I will give nin-imagenet a try, to compare its results with VGG19. I've made it work on the CPU now, and even if it's slow (20 min for 10 iterations on a 1024px picture...), it works! :)
How much RAM do you have? On my i7, 8 cores, 24GB RAM, one iteration (VGG19, 1024px) takes about 10 seconds but it really needs most of the RAM.
Hannu
I had this working on some images last week, but when I went to use it again I got this error too, even though I have 4GB of memory on my 980M. I don't understand it at all.
Edit: htoyryla's guide worked, but I don't understand why I needed it
Just tried:
th ~/neural-style/neural_style.lua -gpu -0 -original_colors 1 -print_iter 100 -save_iter 100 -num_iterations 1000 -model_file ~/neural-style/models/nin_imagenet_conv.caffemodel -proto_file ~/neural-style/models/train_val.prototxt -content_layers relu7 -style_layers relu1,relu3,relu5,relu7,relu9 -image_size 512 -style_image /home/stephan/neural-style/examples/inputs/shipwreck.jpg -content_image /home/stephan/Bilder/test.jpg
If I increase the image size, I get an out-of-memory error. Any chance to work around "out of memory" and produce larger results?
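Two options that are supposed to reduce memory usage, per neural-style's own docs, are ADAM instead of L-BFGS and the cudnn backend (the size value below is illustrative, not a guaranteed fix):

th neural_style.lua -gpu 0 -backend cudnn -optimizer adam -image_size 512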
nvidia-smi
Sometimes a process may occupy too much GPU memory; you can try to kill that process. I have seen /usr/lib/xorg/Xorg occupy around 1800 MB of GPU memory.
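For example (the PID is whatever nvidia-smi lists for the offending process; note that killing Xorg will also end your graphical session):

nvidia-smi       # per-process GPU memory usage is listed at the bottom
sudo kill <PID>  # stop the process to free its GPU memory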
I'm encountering this on Mac OS with a GTX 980 Ti (12GB of VRAM), so I definitely have enough memory. I've tried using the CPU, and that works, but GPU still gives me this memory error. I'm using a Hackintosh, if that matters