Open M4lchik opened 7 years ago
I think it consumes a lot of GPU memory. I had the same issue on a GeForce GTX 980. I managed to "fix" it simply by resizing the images (300x227 works for me).
Well, I heard it consumes 3 GB of GPU memory at start, and my computer should be able to handle 4 GB, I think. Also, there's no point in processing such tiny pictures... at least for me (I want to use it for graphic purposes).
Hello everyone, can someone help enlighten me on coding basics, as I'm new here? Materials are limited but I will try with Scholar. I just want to give back and contribute.
@M4lchik You will need to decrease the image size; 700px images are by no means "tiny" for this kind of processing. It takes a lot of memory to load the images onto the GPU in a format which Torch can process.
I can generate up to 600px images with a 6 GB card, and that uses ~3.5 GB VRAM; the OS needs 0.5–1.5 GB in the background. So with 4 GB you should get up to ~500px (if everything in the background is closed).
If you really want to use it for graphic purposes (I'm a graphic designer BTW 🙃) you will have to either:
1) Get a better GPU – like the GTX 1080 Ti with 11 GB VRAM (should be enough for >2k images)
2) Use the resizing trick I've mentioned here (see the sketch below)
3) Use a cloud service to get access to a machine with 12 or 16 GB.
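For option 2, a minimal resizing sketch using ImageMagick (an assumption on my part – any image tool works; the file names and paths below are placeholders, and the segmentation masks must be resized to the same dimensions as their images):

```
# Hypothetical file names – adjust to your own inputs.
# Fit each image (and its matching mask) within 500x500, keeping the aspect ratio,
# before running the .lua scripts.
convert examples/input/in1.png         -resize 500x500 examples/input/in1_small.png
convert examples/style/tar1.png        -resize 500x500 examples/style/tar1_small.png
convert examples/segmentation/in1.png  -resize 500x500 examples/segmentation/in1_small.png
convert examples/segmentation/tar1.png -resize 500x500 examples/segmentation/tar1_small.png
```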
I would also use this fork; it allows for bigger image sizes (but don't expect too much!) – martinbenson has reworked the code quite a bit: https://github.com/martinbenson/deep-photo-styletransfer
It also gets updated nearly every day with added features (just look at the commits), so be sure to follow it!
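A quick sketch for grabbing the fork and keeping up with its commits (plain git, nothing specific to the fork assumed):

```
git clone https://github.com/martinbenson/deep-photo-styletransfer.git
cd deep-photo-styletransfer
# later on, pull in the near-daily updates and skim what changed
git pull
git log --oneline -10
```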
That works! At least the neural network part, with a 350px picture... I definitely can't afford an $800 GPU, so I would rather go with the second option, thanks! I don't quite understand the third one. I mean, that can't be free, right?
As for Martin Benson's fork, I would rather use the original one, simply because I've spent a week trying every day to make it work – it would be sad if all those efforts were for nothing!
Also, with the second process, I had an error with .mat vs .csv, so, after reading this answer, I tried to convert, but the second process hits, once again, an "out of memory" issue. Too tired to think about this for now.
But you should definitely try out Benson's fork – it doesn't hurt to have a second option available.
Using the Amazon servers is of course not free (except for testing lower-end machines). I haven't tried renting a GPU instance yet; my account still has to be activated. But as soon as I get the confirmation from Amazon I will try to install the Docker version from Martin Benson on a machine with a 12 GB GPU. I wonder how long it takes on a high-end machine to generate larger images (every iteration must take way longer – and then multiply that by 2 x 1000 iterations ...). And how much it will cost :-).
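A rough back-of-the-envelope sketch for estimating that cost – every number here is a made-up placeholder (seconds per iteration and the hourly rate depend entirely on the instance type and image size):

```
# Hypothetical numbers: 2 s/iteration, 2 x 1000 iterations, $0.90/hour rental rate.
SECONDS_PER_ITER=2
ITERATIONS=2000
RATE_PER_HOUR=0.90
# total seconds -> hours -> dollars for one image at those assumed numbers (~$1)
echo "scale=2; $SECONDS_PER_ITER * $ITERATIONS / 3600 * $RATE_PER_HOUR" | bc
```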
The first .lua script needs less VRAM compared to the second one. For example: I can create 650px images with the neural style code, but I get the "out of memory" with the second one. When I reduce the size to 550–600px the second script also completes. But while it runs I only have 300–400 MB VRAM left.
Do you have a program to monitor your free GPU memory? Unfortunately I have no idea how Ubuntu handles this, but unlike on OSX you can use Nvidia's `nvidia-smi` tool to check your GPU stats:
https://developer.nvidia.com/nvidia-system-management-interface
http://developer.download.nvidia.com/compute/DCGM/docs/nvidia-smi-367.38.pdf
It should have been installed along with Nvidia's CUDA Toolkit. Otherwise you can download it from Nvidia: https://developer.nvidia.com/cuda-downloads
This one looks easier: https://github.com/wookayin/gpustat (but it also needs `nvidia-smi`!). Just install it with `sudo pip install gpustat` and then check your memory with `gpustat`.
Try checking how much of your 4 GB VRAM you can effectively use – I'm sure the OS needs a large chunk for managing the main screen & graphics processes.
Then run `watch --color -n1.0 gpustat` while processing the .lua scripts and take note of how much VRAM is available. Hope that helps!
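If you'd rather keep a record than watch it live, `nvidia-smi` can also log memory usage to a file (a small sketch; the log file name is arbitrary):

```
# Sample used/total VRAM once per second and append it to a log file
nvidia-smi --query-gpu=memory.used,memory.total --format=csv -l 1 >> vram.log
```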
@M4lchik @Parapsnick @subzerofun
I have a solution for lowering the GPU usage by up to 2 GB (from my testing; even less memory may be consumed at higher image sizes in comparison), which will in turn allow for the creation of larger images.
My proposed solution can be found here: https://github.com/martinbenson/deep-photo-styletransfer/issues/22
martinbenson's fork now uses the latest version of Neural-Style, which is far more memory efficient (see the graph posted in my comment above this one): https://github.com/martinbenson/deep-photo-styletransfer
On top of that, the Multi-GPU related parameters are now usable as well.
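A hedged sketch of what that can look like – this assumes the fork exposes the same flags as upstream neural-style (`-gpu` taking a comma-separated device list and `-multigpu_strategy` picking the layers where the network is split); check the fork's README for the exact script names and options:

```
# Hypothetical invocation: split the network across two GPUs.
# -multigpu_strategy lists the layer indices at which work is handed to the next GPU.
th neural_style.lua -content_image in.png -style_image style.png \
  -image_size 1000 -gpu 0,1 -multigpu_strategy 5 -backend cudnn
```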
@M4lchik Hi, I ran into the same out-of-memory problem. I am using Ubuntu 16.04 with a GTX 1080. You mentioned that resizing the images works well. Could you please tell me where to configure this in luanfujun's source code? Or do I just have to add code myself?
"4 GB of GDDR5 memory" should mean I can support the 3 GB of GPU needed, right? If not, is there a way to decrease the GPU needed?
I think my drivers etc. are up to date, and I already went through a lot of steps to fix bugs, like the library path, the version of CUDA... And I used the optional -backend cudnn and -cudnn_autotune flags.
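For reference, a minimal sketch of a run with those flags plus a reduced image size (assuming jcjohnson's neural_style.lua options; the file names are placeholders, and the deep-photo scripts take additional segmentation arguments):

```
# cuDNN backend + autotuning lowers VRAM use; -image_size caps the output size;
# -optimizer adam needs less memory than the default L-BFGS.
th neural_style.lua -content_image in.png -style_image style.png \
  -image_size 500 -backend cudnn -cudnn_autotune -optimizer adam
```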
Also: each time I restart my terminal, I have to write `export CUDNN_PATH=/usr/local/cuda/lib64/libcudnn.so.5`. I'm new to Ubuntu – does someone know how to set this path once and for all?
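A common way to do that (a sketch, assuming bash as the login shell) is to append the export line to ~/.bashrc so every new terminal picks it up:

```
# Add the variable to ~/.bashrc once, then reload it for the current session
echo 'export CUDNN_PATH=/usr/local/cuda/lib64/libcudnn.so.5' >> ~/.bashrc
source ~/.bashrc
```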
And, unlike what this comment suggested, I don't run Lua 5.2 but an earlier version (5.1, I think).
This might be a duplicate thread; I'm ready to merge it with the thread above if necessary.