DenisSergeevitch opened this issue 7 years ago
If no one picks this up, I am willing to spend some time over the weekend and dockerise it :)
There's one in my fork:
https://github.com/martinbenson/deep-photo-styletransfer/blob/master/docker/dockerfile
Hey @martinbenson, I think your Dockerfile needs sudo installed at step 3, i.e.:
RUN apt-get update && apt-get install --assume-yes git libprotobuf-dev libopenblas-dev liblapack-dev protobuf-compiler wget python3-pip sudo
I've tried Dockerising on my fork too but it's a shoddy attempt so I sent you a PR
@martinbenson Thanks for the docker file! I know this maybe isn't the right place to ask for this, but could you add a link for a tutorial on how to install your docker file on AWS?
I'm not that familiar with Linux, so I thought a Docker install would be the easiest way to get this repo running on AWS. I want to try it on a p2.xlarge instance:
| Instance Name | GPU Count | vCPU Count | Memory | Parallel Processing Cores | GPU Memory | Network Performance |
|---|---|---|---|---|---|---|
| p2.xlarge | 1 | 4 | 61 GiB | 2,496 | 12 GiB | High |
https://aws.amazon.com/de/blogs/aws/new-p2-instance-type-for-amazon-ec2-up-to-16-gpus/
My GTX 780 has 6 GB VRAM and the highest resolution I can get is 550 × ~400 px, which needs around 4.5 GB of free VRAM.
So having 12 GB available on the p2.xlarge instance would be really great to work with!
I guess it's way more complicated to install this repo (and all the dependencies...) on AWS without a ready-to-deploy Dockerfile.
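Back of the envelope (just my assumption that VRAM use scales roughly linearly with pixel count, ignoring any other bottlenecks), 12 GB would buy roughly 1.6× the edge length over 4.5 GB:

```python
# Rough estimate only: assumes VRAM use scales linearly with pixel count.
def max_resolution(base_w, base_h, base_vram_gb, target_vram_gb):
    """Scale a known-working resolution to a different VRAM budget."""
    scale = (target_vram_gb / base_vram_gb) ** 0.5
    return round(base_w * scale), round(base_h * scale)

# 550x400 fits in ~4.5 GB on my GTX 780; what might 12 GB allow?
print(max_resolution(550, 400, 4.5, 12.0))  # -> (898, 653)
```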
@subzerofun To be honest, moving to 12GB wouldn't help, because there's currently a 2GB bottleneck in luaJIT that prevents using larger files. We need to address that before it would be worthwhile.
Installing the dockerfile on AWS is basically the same as locally: install docker, install nvidia drivers, install nvidia-docker, build the image from the dockerfile, start a container from the image. I don't have time to write up more detail than that though, sorry. Googling will turn up loads of tutorials for each step I'm sure, or maybe someone else can help.
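For anyone following along, those steps might look roughly like this on an Ubuntu AWS instance. This is a sketch only: the package names, driver version, and the image tag `deep-photo` are my assumptions, not from the repo — check the nvidia-docker project page for the current install method.

```shell
# Hedged sketch of the steps above; versions/package names will vary by AMI.
sudo apt-get update && sudo apt-get install -y docker.io     # 1. install Docker
sudo apt-get install -y nvidia-367                           # 2. NVIDIA drivers (era-appropriate guess)
# 3. install nvidia-docker — follow https://github.com/NVIDIA/nvidia-docker
git clone https://github.com/martinbenson/deep-photo-styletransfer.git
cd deep-photo-styletransfer
sudo docker build -t deep-photo -f docker/dockerfile .       # 4. build the image
sudo nvidia-docker run -it deep-photo /bin/bash              # 5. start a container
```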
OK thanks for the info!
Oh, I meant to say: the luaJIT issue is described here https://github.com/martinbenson/deep-photo-styletransfer/issues/3 if anyone has a chance to look into it.
Didn't know I'm not the only one with issues when trying to process larger images. I thought that 6 GB VRAM was simply not enough. The error I get when I try to process images > 550px is `THCStorage.cu (...): out of memory`.
It's unfortunate to hear that more VRAM might resolve the out-of-memory problem, but then lead to another issue...
So just so I understand this correctly: even if I had more VRAM, I would still run into issues because of this luaJIT problem?
I've read the thread and saw the suggestion that using Lua 5.2 might fix the problem – but the issue is still open, isn't it?
So I guess I have to settle for 550px images for now. Or maybe not? :-)
@luanfujun mentioned here that there is an upscaling trick for the final result images – I don't know if you have seen it yet. Here is an example of his upscaled images (3500 × 2340 px!):
https://cloud.githubusercontent.com/assets/3760952/24328667/43702470-11bd-11e7-9b69-9eb771e470ea.png https://cloud.githubusercontent.com/assets/3760952/24328652/efa2ce56-11bc-11e7-96e5-ca8d75c7261e.png
I didn't ask him about the solution, but I think it's the method described here – specifically "NIN Upres": https://github.com/jcjohnson/neural-style/wiki/Techniques-For-Increasing-Image-Quality-Without-Buying-a-Better-GPU
Example: https://imgur.com/a/ALzL7 (Re-sized from 512x384 to 2500x1875 with Neural-Style)
I will try this method with the first example image myself; at first glance the quality looks excellent! Of course, generating larger images with the original code would probably produce much more detailed results (especially the last step with the Laplacian matrix correction).
But the NIN Upres method looks like a good intermediate solution!
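The multi-pass upres idea from that wiki page can be sketched as a loop: render small first, then re-render at increasing sizes, seeding each pass from the upscaled previous output. Assumptions on my part: jcjohnson/neural-style is installed, and the flag names below match its CLI – verify with `th neural_style.lua -h` before relying on them.

```shell
# Hedged sketch of multi-pass upscaling with neural-style.
CONTENT=in.png
STYLE=style.png
OUT=out_256.png
th neural_style.lua -content_image $CONTENT -style_image $STYLE \
   -image_size 256 -output_image $OUT
for SIZE in 512 1024; do
  PREV=$OUT
  OUT=out_${SIZE}.png
  # -init image / -init_image seed this pass from the previous result
  th neural_style.lua -content_image $CONTENT -style_image $STYLE \
     -image_size $SIZE -init image -init_image $PREV \
     -output_image $OUT
done
```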
@subzerofun The memory bottleneck is now fixed in https://github.com/martinbenson/deep-photo-styletransfer. I can run at 700 width with 6GB cards - haven't tried bigger than that yet.
Cool! Just looked at the commits and saw that you've added quite a bit to the .lua files.
Is there something I need to be careful about – a specific lua/luajit version – before I run them? Or is a standard Torch installation OK?
Even multiprocessing for the Matlab files – your version really looks great!
@subzerofun It's a pretty standard install. Using the dockerfile is the easiest way to ensure everything works OK (including cuda, cudnn setup etc).
I already have CUDA 8 + cudnn 5.1 installed, so it should work without the docker install, right?
It would be pretty weird if your fork didn't work, since it has the same dependencies (+ cudnn) as the original repo.
BTW: I just saw that Nvidia has released a new version of cudnn (6.0) a few days ago: https://developer.nvidia.com/rdp/cudnn-download
cuDNN v6.0 (March 23, 2017) is available for both CUDA 8.0 and CUDA 7.5.
It's entirely possible that updating cudnn would break something. Best to just build the docker image. If you do just install manually and from this repo you'll at least need to update the makefile to cuda 8 (already bumped in my fork).
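For reference, that makefile change is typically just pointing the CUDA paths at the 8.0 toolkit. A sketch only – I'm assuming a `CUDA_DIR`-style variable; the actual variable names in this repo's makefile may differ:

```makefile
# Hypothetical fragment – adjust to the repo's actual makefile variables.
CUDA_DIR := /usr/local/cuda-8.0        # was /usr/local/cuda-7.5
NVCC     := $(CUDA_DIR)/bin/nvcc
LDFLAGS  += -L$(CUDA_DIR)/lib64 -lcudart
```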
Sorry, my posting was maybe misleading: I just read about the new cudnn version by accident – I don't plan to update it, because it would probably break dozens of other projects that I've installed.
I'll just try to run the files without Docker, and if I run into trouble I'll use your Dockerfile.
+1
This commit on Neural-Style, may help with allowing for larger image sizes: https://github.com/jcjohnson/neural-style/commit/ea75cbc5ba196055c0ac2fe5d9960efe33cbbf05
Could someone who has successfully run the algorithm please share a Docker container image?