jsaric opened 3 years ago
Thanks for the effort to make post-processing faster! In our original implementation we wrote a CUDA kernel (but it was in TensorFlow) to do the post-processing, so it is super fast. Are you interested in giving it a try?
For this PR, I will take a look when I have time.
I think I will try to do it, but I can't make any promises.
As far as I understand, you are not allowed to share any details about your TensorFlow implementation? I was wondering if you could say which stages of post-processing were done with native TensorFlow ops and which were part of the CUDA kernel. For example, is it possible to speed up the parts handled by find_instance_center and group_pixels with a CUDA kernel?
IC, only the merge_semantic_and_instance function is written as a CUDA kernel in the original TF implementation. All other methods (find_instance_center, group_pixels) are implemented with existing TF ops, pretty much like the current PyTorch version.
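For readers following along: group_pixels is the step that assigns every pixel to the instance center its predicted offset points at. This is a minimal NumPy sketch of that idea, not the repository's actual code; the array shapes and the (y, x) coordinate convention are assumptions.

```python
import numpy as np

def group_pixels(centers, offsets):
    """Assign each pixel to the nearest predicted instance center.

    centers: (K, 2) array of (y, x) center coordinates.
    offsets: (2, H, W) array of per-pixel (dy, dx) offsets pointing
             toward the pixel's instance center.
    Returns an (H, W) instance-id map with ids in 1..K.
    """
    _, height, width = offsets.shape
    # Coordinate grid of shape (2, H, W): coords[0] = y, coords[1] = x.
    y, x = np.meshgrid(np.arange(height), np.arange(width), indexing="ij")
    coords = np.stack([y, x]).astype(np.float32)
    # Where each pixel claims its center is.
    ctr_loc = (coords + offsets).reshape(2, -1).T      # (H*W, 2)
    # Distance from each pixel's predicted center to every real center.
    dist = np.linalg.norm(centers[:, None, :] - ctr_loc[None, :, :], axis=2)
    # 1-based instance ids (0 is reserved for "no instance").
    return dist.argmin(axis=0).reshape(height, width) + 1
```

This is fully vectorized, so it maps directly onto stock TF or PyTorch ops, which matches the comment above that no custom kernel was needed for this stage.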
I managed to write the merge_semantic_and_instance function in CUDA with CuPy.
I created a new pull request. Check it out.
You can close this one, and we can move the discussion to the new one.
I created a vectorized version of the post-processing for panoptic segmentation. The changes are mostly in the merge_semantic_and_instance function. In my experiments, post-processing time dropped by roughly a factor of three.
Please check it out, and let me know if you have any ideas on how to make it faster.
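To illustrate the kind of vectorization involved (this is not the PR's code): merge_semantic_and_instance combines the semantic map and the instance-id map into panoptic ids, giving each instance the majority semantic label of its pixels. The per-instance majority vote can be done without a Python loop via a single bincount over joint (instance, class) indices. The label_divisor encoding and the shapes below are assumptions based on the usual panoptic id scheme.

```python
import numpy as np

def merge_semantic_and_instance(semantic, instance, thing_ids, label_divisor=1000):
    """Loop-free merge of semantic and instance maps into a panoptic map
    encoded as semantic_label * label_divisor + instance_id.

    semantic:  (H, W) int array of semantic class ids.
    instance:  (H, W) int array of instance ids (0 = no instance).
    thing_ids: iterable of class ids treated as countable "things".
    """
    # Stuff pixels keep their semantic label with instance id 0.
    panoptic = semantic * label_divisor
    is_thing = np.isin(semantic, list(thing_ids)) & (instance > 0)
    sem_t = semantic[is_thing]
    inst_t = instance[is_thing]
    num_classes = int(semantic.max()) + 1
    # Joint histogram over (instance, class) pairs in one bincount call.
    pair = inst_t * num_classes + sem_t
    hist = np.bincount(pair, minlength=(int(inst_t.max()) + 1) * num_classes)
    hist = hist.reshape(-1, num_classes)   # rows: instance ids, cols: classes
    majority = hist.argmax(axis=1)         # majority class per instance id
    panoptic[is_thing] = majority[inst_t] * label_divisor + inst_t
    return panoptic
```

Replacing the per-instance loop with one bincount/argmax pass is the sort of change that typically accounts for the reported speedup, since every step above translates to a single batched tensor op.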