Closed FangJingYunner closed 2 years ago
Hi, this is great work! I have some questions: What GPU was used for this work, and what is its inference time? Can it be deployed on an embedded platform? Looking forward to your reply!

Hi @FangJingYunner,
Thanks for your interest in our work! The inference times and GPU details are provided in Tab. S4 on page 12 of the arXiv paper. The network architecture is quite heavy and not optimized for platforms with limited resources, so it probably cannot be deployed out-of-the-box on embedded platforms.