Closed — donnydonny123 closed this issue 5 years ago
What is the input resolution for the table that you quoted? And which graphics card are you using?
Hi,
In the original paper's conclusion, 1024×436 images are used for the benchmark.
The command I am using is the same as the one provided in https://github.com/sniklaus/pytorch-pwc#usage , which is: python run.py --model default --first ./images/first.png --second ./images/second.png --out ./out.flo
The GPU is a GTX 1080 Ti.
Thank you for your reply.
Interesting find, thank you for bringing this up! Calling moduleNetwork in a for loop using the provided images clocks in at around 130 ms per estimate on my 1080 Ti, which is still much too slow. A note to everyone reading this who might not be aware of the asynchronous nature of GPU computing: make sure to call torch.cuda.synchronize() to get correct timings. Have you tried benchmarking the official release? If so, what timing do you get? Thanks!
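For reference, a minimal timing sketch along the lines described above. The warm-up loop, the synchronization barriers, and the stand-in workload are all assumptions for illustration; in the actual repo one would time moduleNetwork (or the estimate function in run.py) instead of the dummy pooling op used here.

```python
import time
import torch

def benchmark(fn, warmup=3, runs=10):
    # Warm-up so one-time costs (kernel compilation, memory
    # allocation) do not pollute the measurement.
    for _ in range(warmup):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # drain queued kernels before starting the clock
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # wait for the last kernel before stopping the clock
    return (time.perf_counter() - start) / runs * 1000.0  # ms per call

# Hypothetical stand-in for the real call, e.g. moduleNetwork(tenFirst, tenSecond);
# 1024x436 matches the benchmark resolution mentioned above.
x = torch.randn(1, 3, 436, 1024)
print(benchmark(lambda: torch.nn.functional.avg_pool2d(x, 2)))
```

Without the synchronize() calls, the clock stops while kernels are still running, so naive measurements on GPU can be wildly off in either direction.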
Closing the issue due to inactivity, I would still love to hear more about this though!
Hi @sniklaus ,
In the original PWC-Net paper, the reported processing time is about 28 ms. But when I measured the time spent in run.py's estimate function and in moduleNetwork, they took about 330 ms (estimate function) and 173 ms (moduleNetwork), which is too large a gap to ignore. I would like to know what causes this discrepancy. Thanks!