Thanks for open-sourcing this project.
The README mentions differences in inference performance across batch sizes, but in the inference code I only see the input images being collected into a batch variable; I don't see a multi-batch implementation for the subsequent inference and result decoding. Is there any plan to support this in the future?
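To make the request concrete, here is a minimal sketch of the pattern I mean: run the forward pass once over the whole batch, then decode each result per image. The names `model`, `decode_one`, and `infer_batch` are placeholders for illustration, not this repo's actual API.

```python
# Hypothetical sketch of end-to-end batched inference.
# `model` and `decode_one` are stand-ins, not the repo's real functions.

def model(batch):
    # stand-in for a network forward pass: one output list per input image
    return [[x * 2 for x in image] for image in batch]

def decode_one(logits):
    # stand-in for decoding a single image's model output
    return max(logits)

def infer_batch(images):
    """Run the forward pass once on the whole batch, then decode per image."""
    outputs = model(images)                   # single batched forward pass
    return [decode_one(o) for o in outputs]   # per-image decoding

results = infer_batch([[1, 2, 3], [4, 5, 6]])
```

Currently only the input-preparation half of this seems to exist; the batched forward pass and decoding loop are what I'm asking about.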