Closed faizan1234567 closed 1 year ago
Dear Author, thanks for the amazing work. Is there any data on inference time? Does your model support real-time inference on video?

Hi, the inference time should mainly depend on the backbone used; the attention head presented in the paper shouldn't add much computational overhead (although I don't have precise numbers on this). If HRNet32 is too heavy in terms of computation speed for your inference hardware, maybe you should try a smaller backbone.
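Since no official latency numbers are available, one way to get rough figures on your own hardware is to time repeated forward passes. A minimal stdlib-only sketch (the lambda below is a stand-in for your actual model call, which is an assumption here):

```python
import time

def benchmark(model_fn, inputs, warmup=10, iters=100):
    """Time repeated calls to model_fn and return mean latency in ms."""
    for _ in range(warmup):      # warm-up runs (caches, lazy initialization)
        model_fn(inputs)
    start = time.perf_counter()
    for _ in range(iters):
        model_fn(inputs)
    elapsed = time.perf_counter() - start
    return elapsed / iters * 1000.0  # mean latency per call, in ms

# Trivial stand-in "model"; replace with your real forward pass,
# e.g. a call wrapped in torch.no_grad() on a fixed-size input.
latency_ms = benchmark(lambda x: [v * 2 for v in x], list(range(1000)))
print(f"mean latency: {latency_ms:.3f} ms")
```

Note that if you benchmark on a GPU with PyTorch, you should call `torch.cuda.synchronize()` before reading the clock, since CUDA kernel launches are asynchronous and the timer would otherwise stop before the work finishes.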