tommy19970714 opened this issue 3 years ago
I am also interested in running it on CPU only
I found that those two lines actually don't affect inference on CPU. I ran inference on CPU with the following steps:
You can make the following changes (steps 1-3 are in correlation.py, step 4 applies to all files):
1. remove 'import cupy'
2. remove
```python
@cupy.util.memoize(for_each_device=True)
def cupy_launch(strFunction, strKernel):
    return cupy.cuda.compile_with_cache(strKernel).get_function(strFunction)
```
3. remove 'raise NotImplementedError()'
4. change all .cuda() to .to(device) in all files, where device = torch.device("cpu") (see the sketch after this list)
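For reference, here is a minimal sketch of what steps 1-4 could look like at the top of correlation.py. The guarded import and the single shared `device` variable are my own assumptions, not code from the repo:

```python
import torch

# Assumption: guard the cupy import instead of deleting it outright,
# so the same file still works on a machine that has CUDA + cupy.
try:
    import cupy
except ImportError:
    cupy = None

# Pick the device once; on a CPU-only laptop this resolves to "cpu".
device = torch.device("cuda" if torch.cuda.is_available() and cupy is not None else "cpu")

# Then, throughout the repository, replace calls like
#     tensor = tensor.cuda()
# with
#     tensor = tensor.to(device)
```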
In this way I can run inference on my CPU-only laptop.
Hope it helps you.
Thanks for your great work!
I have run your code; the results were very good and the execution speed was quite fast. I actually compared the speeds, as shown below.
Therefore, it has the potential to be used in real time on mobile and other devices. I have successfully run gen_model on CPU, but warp_model was not possible because some parts are not implemented for CPU.
CPU inference is not implemented in the following two places: https://github.com/geyuying/PF-AFN/blob/50f440b2c103b287194cfb67d4d42396cf3905c0/PF-AFN_test/models/correlation/correlation.py#L331
https://github.com/geyuying/PF-AFN/blob/50f440b2c103b287194cfb67d4d42396cf3905c0/PF-AFN_test/models/correlation/correlation.py#L385
Is it possible to support CPU inference for warp_model?
You are using cupy_launch, but perhaps the following issue might be helpful: https://github.com/sniklaus/pytorch-pwc/issues/39
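If removing the cupy path is not enough, one possible workaround is to stand in a plain-PyTorch cost volume for the CUDA kernel when running on CPU. Below is a minimal, unoptimized sketch; the function name, max_displacement value, channel ordering, and normalization are my assumptions and would need to be checked against the kernel in correlation.py:

```python
import torch
import torch.nn.functional as F

def correlation_naive(feat_a, feat_b, max_displacement=4):
    """Naive cost volume: for each displacement (dy, dx) within
    max_displacement, average the per-channel product of feat_a and the
    correspondingly shifted feat_b. Output has (2*md+1)**2 channels."""
    b, c, h, w = feat_a.shape
    pad = max_displacement
    feat_b_padded = F.pad(feat_b, [pad, pad, pad, pad])  # zero-pad H and W
    volumes = []
    for dy in range(2 * pad + 1):
        for dx in range(2 * pad + 1):
            shifted = feat_b_padded[:, :, dy:dy + h, dx:dx + w]
            volumes.append((feat_a * shifted).mean(dim=1, keepdim=True))
    return torch.cat(volumes, dim=1)
```

This is much slower than the fused CUDA kernel, but for CPU-only experiments with warp_model it may be enough to get past the NotImplementedError.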