leehomyc / Faster-High-Res-Neural-Inpainting

High-Resolution Image Inpainting using Multi-Scale Neural Patch Synthesis
http://www.harryyang.org/inpainting
MIT License
1.3k stars · 213 forks

Can I run the model without a GPU? #5

Closed NerminSalem closed 7 years ago

NerminSalem commented 7 years ago

Hi, can I use the CPU instead of a GPU for `th run_content_network.lua`?

Thanks

NerminSalem commented 7 years ago

@leehomyc can you help?

leehomyc commented 7 years ago

Hi,

I have not tried running on the CPU, but I assume it will be slower. Have you tried it?

NerminSalem commented 7 years ago

Hi, thanks for replying.

[screenshot from 2017-04-17 23-31-45]

NerminSalem commented 7 years ago

How can I disable the GPU?

coderTazz commented 7 years ago

Hello,

To port to CPU, you can add a `gpu=0` option and then write a section in the code that sets the tensor and network types to float when it is selected. Something like this:

if opt.gpu == 0 then
      -- tensor :float() returns a new tensor, so reassign it
      sampletensor = sampletensor:float()
      -- nn modules convert in place
      net:float()
else
      sampletensor = sampletensor:cuda()
      net:cuda()
end

There may be more details in this implementation that I have missed, but I hope you get the gist.

Hope this helps.
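To make the suggestion above concrete, here is a minimal sketch of how a `-gpu` flag is typically wired up with `torch.CmdLine` in a Torch script. The variable names (`opt`, `input`, `net`) are assumptions following common Torch conventions, not the repo's exact code:

```lua
-- Hypothetical sketch: a -gpu command-line option that switches
-- tensors and networks between CPU (float) and CUDA types.
require 'torch'
require 'nn'

local cmd = torch.CmdLine()
cmd:option('-gpu', 1, '1 to run on the GPU, 0 to run on the CPU')
local opt = cmd:parse(arg or {})

local net = nn.Sequential():add(nn.Linear(10, 2))
local input = torch.rand(10)

if opt.gpu == 0 then
  input = input:float()  -- :float() returns a new tensor, so reassign
  net:float()            -- nn modules convert in place
else
  require 'cutorch'
  require 'cunn'
  input = input:cuda()
  net:cuda()
end

print(net:forward(input))
```

The key detail is the reassignment for tensors: `tensor:float()` returns a converted copy, whereas `net:float()` converts the module in place.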

NerminSalem commented 7 years ago

ok i got it but now i got it i m getting that error libthclnn_searchpath /home/nermin/torch-cl/install/lib/lua/5.1/libTHCLNN.so terminate called after throwing an instance of 'std::runtime_error' while running th run_texture_optimization.lua i m using cpu mode as my laptop doesn't support NIVIDIA and i installed cltorch and clnn and set gpu= -1 can u help?

coderTazz commented 7 years ago

These are the last four options listed in run_texture_optimization.lua:

-- mode: 'speed' or 'memory'. Try 'speed' if you have a GPU with more than 4 GB of memory, and 'memory' otherwise. The 'speed' mode is significantly faster (especially for synthesizing high resolutions) at the cost of higher GPU memory usage.
-- gpu_chunck_size_1: size of the chunks used to split feature maps along the channel dimension. This saves memory when normalizing the matching score in the MRF layers. Use a large value if you have a lot of GPU memory. For reference, we use 256 for a Titan X and 32 for a GeForce GT 750M 2GB.
-- gpu_chunck_size_2: size of the chunks used to split feature maps along the y dimension. This saves memory when normalizing the matching score in the MRF layers. Use a large value if you have a lot of GPU memory. For reference, we use 16 for a Titan X and 2 for a GeForce GT 750M 2GB.
-- backend: use 'cudnn' for CUDA-enabled GPUs or 'clnn' for OpenCL.

If you do not have a GPU, set mode to 'memory' instead of 'speed'. Also, now that I look at it, I am not sure this code was written with a CPU option at all: the last option above (backend) only lists 'cudnn' and 'clnn'. However, if you can use OpenCL, keep the backend option as 'clnn'. Have you tried these two things?
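Based on the option list above, the backend choice would typically be dispatched inside the script roughly like this. This is a hedged sketch, not the repo's actual source; the option names come from the list, but the surrounding code is an assumption:

```lua
-- Hypothetical sketch: parsing the listed options and loading the
-- matching backend packages ('cudnn' for CUDA, 'clnn' for OpenCL).
require 'torch'

local cmd = torch.CmdLine()
cmd:option('-mode', 'speed', "'speed' (>4GB GPU memory) or 'memory'")
cmd:option('-gpu_chunck_size_1', 256, 'channel-dim chunk size in MRF layers')
cmd:option('-gpu_chunck_size_2', 16, 'y-dim chunk size in MRF layers')
cmd:option('-backend', 'cudnn', "'cudnn' for CUDA, 'clnn' for OpenCL")
local opt = cmd:parse(arg or {})

if opt.backend == 'clnn' then
  require 'cltorch'  -- OpenCL tensor library
  require 'clnn'     -- OpenCL nn modules
else
  require 'cutorch'  -- CUDA tensor library
  require 'cudnn'    -- cuDNN-accelerated nn modules
end
```

Note that both branches still require a GPU runtime (CUDA or OpenCL); neither falls back to the CPU.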

NerminSalem commented 7 years ago

Hello, thanks for replying. I did what you mentioned, and it turned out that I have a problem with OpenCL: I don't have an NVIDIA card, but an AMD Radeon.

coderTazz commented 7 years ago

No, it will not run on the Radeon; it does not support CUDA or OpenCL here.

Also, run_texture_optimization.lua calls transfer_CNN_wrapper.lua, which I was going through just now. As far as I can see, this code has not been written for CPUs yet. If you have time, I would suggest either tweaking the code to make that happen, or running it on a cloud service such as Amazon AWS, where you can run Torch code on GPUs. I think it is free for the first few weeks of usage.