NerminSalem closed this issue 7 years ago.
@leehomyc can you help?
Hi,
I have not tried using CPU but I assume it will be slower. Have you tried with CPU?
Hi, thanks for replying.
How can I disable the GPU?
Hello,
To port to CPU, you can set gpu=0 in the options and then add a section in the code where you set the tensor and network types to float. Something like this:
if opt.gpu == 0 then
   -- CPU: convert the tensor and the network to float
   sampletensor:float()
   net:float()
else
   -- GPU: move them to CUDA
   sampletensor:cuda()
   net:cuda()
end
There may be some more details in this implementation that I missed, but I hope you get the gist. Hope this helps.
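To make the idea above a bit more concrete, here is a minimal sketch of the same switch. This is not the repository's actual code: `opt.gpu`, `net`, and `input` are assumed names, and the GPU branch assumes cutorch/cunn are installed.

```lua
-- Sketch only: assumes an options table `opt` with a `gpu` field,
-- a loaded network `net`, and an input tensor `input`.
require 'torch'
require 'nn'

if opt.gpu >= 1 then
   -- GPU path: requires the CUDA packages cutorch and cunn
   require 'cutorch'
   require 'cunn'
   cutorch.setDevice(opt.gpu)
   net = net:cuda()
   input = input:cuda()
else
   -- CPU path: plain float tensors, no CUDA libraries needed
   torch.setdefaulttensortype('torch.FloatTensor')
   net = net:float()
   input = input:float()
end
```

Note that in Torch, `:float()` and `:cuda()` return the converted module/tensor, so reassigning the result (as above) is the safer, idiomatic form.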
OK, I got it, but now I'm getting this error: libthclnn_searchpath /home/nermin/torch-cl/install/lib/lua/5.1/libTHCLNN.so terminate called after throwing an instance of 'std::runtime_error' while running th run_texture_optimization.lua. I'm using CPU mode because my laptop doesn't support NVIDIA; I installed cltorch and clnn and set gpu = -1. Can you help?
These are the last four listed options in run_texture_optimization.lua.
-- mode: speed or memory. Try 'speed' if you have a GPU with more than 4GB of memory, and try 'memory' otherwise. The 'speed' mode is significantly faster (especially for synthesizing high resolutions) at the cost of more GPU memory.
-- gpu_chunck_size_1: Size of the chunks used to split feature maps along the channel dimension. This saves memory when normalizing the matching score in the MRF layers. Use a large value if you have a lot of GPU memory. As a reference, we use 256 for a Titan X and 32 for a GeForce GT750M 2G.
-- gpu_chunck_size_2: Size of the chunks used to split feature maps along the y dimension. This saves memory when normalizing the matching score in the MRF layers. Use a large value if you have a lot of GPU memory. As a reference, we use 16 for a Titan X and 2 for a GeForce GT750M 2G.
-- backend: Use 'cudnn' for CUDA-enabled GPUs or 'clnn' for OpenCL.
If you do not have a GPU, set the mode to memory instead of speed. Also, now that I look at this, I am not sure this code has been written with a CPU option, given the last option above (backend). However, if you can use OpenCL, keep the backend option as clnn. Have you tried these two things?
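As a sketch, the four documented options would typically sit near the top of run_texture_optimization.lua in a Lua table like the one below. Only the four field names and the reference values come from the comments above; the table name and overall shape are assumptions, not the repository's actual code.

```lua
-- Hypothetical options table; field meanings taken from the
-- option descriptions above.
local params = {
   mode = 'memory',          -- 'speed' needs a GPU with > 4GB memory
   gpu_chunck_size_1 = 32,   -- channel-dim chunk size (256 on Titan X)
   gpu_chunck_size_2 = 2,    -- y-dim chunk size (16 on Titan X)
   backend = 'clnn',         -- 'cudnn' for CUDA GPUs, 'clnn' for OpenCL
}
```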
Hello, thanks for replying. I did what you mentioned, and it turned out that I have a problem with OpenCL. I don't have NVIDIA, but I have an AMD Radeon.
No, it will not run on a Radeon, since Radeon does not support CUDA.
Also, run_texture_optimization.lua calls the file transfer_CNN_wrapper.lua. I was going through it just now, and as far as I can see, this code has not been written for CPUs yet. If you have time, I'd suggest tweaking the code to make that happen, or running it on a cloud service such as Amazon AWS, where you can run Torch code on GPUs. I think it is free for the first few weeks of usage.
Hi, can I use the CPU instead of the GPU for th run_content_network.lua?
Thanks.