llmpass / RSTT

Official pytorch implementation of paper "RSTT: Real-time Spatial Temporal Transformer for Space-Time Video Super-Resolution"

Is inference time right? #3

Closed gangsterless closed 2 years ago

gangsterless commented 2 years ago

Congratulations, excellent work! But I have a small question. I am a beginner in DL, so this question may be stupid: shouldn't you call torch.cuda.synchronize() before recording the start time? You may refer to this: https://github.com/tarun005/FLAVR/issues/14

In FLAVR, whether torch.cuda.synchronize() is used leads to a significant difference in inference time: the timings reported in their original paper (before using this function) differ from those in their current paper (after using it).
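The issue can be illustrated with a toy asynchronous executor (pure Python, standing in for CUDA's asynchronous kernel launches; the `kernel` function and its 50 ms duration are hypothetical): without waiting for the submitted work to finish, the clock measures only the launch overhead, not the actual compute time.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def kernel():
    # stand-in for a GPU kernel that takes ~50 ms of wall time
    time.sleep(0.05)

executor = ThreadPoolExecutor(max_workers=1)

# Naive timing: submit() returns immediately, like an async CUDA launch,
# so the elapsed time reflects only the cost of launching the work.
start = time.time()
future = executor.submit(kernel)
naive = time.time() - start

# Correct timing: block until the work completes, analogous to calling
# torch.cuda.synchronize() before reading the clock.
future.result()
synced = time.time() - start

print(f'naive = {naive:.4f}s, synced = {synced:.4f}s')
executor.shutdown()
```

The naive measurement is essentially zero, while the synchronized one reflects the kernel's true duration, which is the discrepancy FLAVR's updated timings corrected.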

Could you please report your comparison?

Looking forward to your reply. Thank you a lot!

llmpass commented 2 years ago

Hi, we added calls to torch.cuda.synchronize() before the start and end times and saw no change in the timing:

```python
torch.cuda.synchronize()
start = time.time()
n = 100
for i in range(n):
    with torch.no_grad():
        outputs = model(inputs)
torch.cuda.synchronize()
end = time.time()
print('fps =', n * 7.0 / (end - start))
```

Compared to FLAVR, I think the key difference is that we time 100 inference runs instead of a single one, so the effect of synchronization is limited.
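The amortized-timing argument can be sketched framework-agnostically (a minimal sketch: `model` and `sync` are hypothetical stand-ins for the network forward pass and torch.cuda.synchronize(); the 1 ms per-call cost is an assumption for illustration):

```python
import time

def model(inputs):
    # stand-in for a network forward pass; sleeps ~1 ms to simulate work
    time.sleep(0.001)
    return [x * 2 for x in inputs]

def sync():
    # stand-in for torch.cuda.synchronize(); a no-op on CPU
    pass

inputs = list(range(7))  # 7 output frames per inference, as in the snippet above
n = 100

sync()                   # drain any pending work before starting the clock
start = time.time()
for _ in range(n):
    outputs = model(inputs)
sync()                   # wait for the final batch before stopping the clock
end = time.time()

fps = n * 7.0 / (end - start)
print('fps =', fps)
```

Because the unsynchronized launch overhead is incurred at most once while the loop body runs n times, its relative contribution shrinks as n grows, which is why the measured fps barely changes with or without the synchronize calls.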

gangsterless commented 2 years ago

Thank you for your answer.