Closed cTatu closed 2 years ago
Hi @cTatu,
There are a few options that could be explored to run it in real time on a smaller form-factor device:
The accuracy trade-off each of these options requires has to be evaluated empirically. Hope this helps!
Yes, very helpful, thank you very much!
Hello again,
I have a question about SPyNet. Is it necessary for inference as well, or just for training?
Thank you
Hi,
It's required only for training. If you do not wish to load the SPyNet weights before inference, you can change these two lines:

to

`spynet_pretrained=None,`
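A minimal sketch of how this could look at inference time, assuming an mmediting-style `BasicVSRNet` whose constructor accepts `spynet_pretrained` (the import path and checkpoint filename below are assumptions for illustration, not taken from this thread):

```python
import torch

# Assumed import path: mmediting 0.x exposes the BasicVSR backbone here;
# adjust to the module that actually defines the model in this repo.
from mmedit.models.backbones import BasicVSRNet

# Passing spynet_pretrained=None skips loading SPyNet weights, which is fine
# for inference: the flow-estimator parameters are already contained in the
# trained checkpoint restored below.
model = BasicVSRNet(mid_channels=64, num_blocks=30, spynet_pretrained=None)

# Hypothetical checkpoint path -- replace with your own trained weights.
# strict=False tolerates key-prefix differences left by training wrappers.
ckpt = torch.load('basicvsr_checkpoint.pth', map_location='cpu')
model.load_state_dict(ckpt.get('state_dict', ckpt), strict=False)
model.eval()

# Dummy low-resolution clip: (batch, num_frames, channels, height, width).
with torch.no_grad():
    lqs = torch.rand(1, 5, 3, 64, 64)
    out = model(lqs)  # 4x upscaled output, e.g. (1, 5, 3, 256, 256)
print(out.shape)
```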
Hi,
In the paper you mention that inference was run on an Nvidia RTX GPU and that the latency wasn't great. Do you think there are tweaks that could be made to run inference in real time on a low-power device like a smartphone or a laptop with a small iGPU?