bowenc0221 / panoptic-deeplab

This is Pytorch re-implementation of our CVPR 2020 paper "Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation" (https://arxiv.org/abs/1911.10194)
Apache License 2.0

Faster postprocessing #67

Open jsaric opened 3 years ago

jsaric commented 3 years ago

I created a vectorized version of the postprocessing for panoptic segmentation. The changes are mostly in the merge_semantic_and_instance function.

In my experiments, this reduced the postprocessing time by roughly a factor of 3.

Please check it out, and let me know if you have any ideas for making it even faster.
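The vectorized idea can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: it computes each instance's majority semantic label with a single `bincount` and encodes panoptic ids as `semantic_label * label_divisor + instance_id`, ignoring the stuff-area and thing-class filtering that the real postprocessing also performs.

```python
import torch

def merge_semantic_and_instance(sem_seg, ins_seg, label_divisor=1000):
    # Sketch only: majority-vote merge without per-instance Python loops.
    sem = sem_seg.flatten().long()
    ins = ins_seg.flatten().long()
    num_sem = int(sem.max()) + 1
    num_ins = int(ins.max()) + 1
    # 2-D histogram via one bincount: counts[instance_id, semantic_label]
    counts = torch.bincount(
        ins * num_sem + sem, minlength=num_ins * num_sem
    ).view(num_ins, num_sem)
    # Majority semantic label per instance id
    majority = counts.argmax(dim=1)
    # Instance pixels (id > 0) get an encoded panoptic id;
    # remaining pixels keep their semantic label.
    pan = torch.where(ins > 0, majority[ins] * label_divisor + ins, sem)
    return pan.view(sem_seg.shape)
```

The key trick is collapsing the per-instance voting loop into one flattened `bincount`, so the cost no longer scales with the number of detected instances.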

bowenc0221 commented 3 years ago

Thanks for the effort to make post-processing faster! In our original implementation (which was in TensorFlow), we wrote a CUDA kernel for the post-processing, so it is very fast. Would you be interested in giving that a try?

For this PR, I will take a look when I have time.

jsaric commented 3 years ago

I think I will try, but I can't make any promises.

As far as I understand, you are not allowed to share details of your TensorFlow implementation? If you can say: which stages of the postprocessing were done with native TensorFlow ops, and which were part of the CUDA kernel? For example, is it possible to speed up the parts in find_instance_center and group_pixels with a CUDA kernel?

bowenc0221 commented 3 years ago

I see. Only the merge_semantic_and_instance function is written as a CUDA kernel in the original TF implementation. All the other methods (find_instance_center, group_pixels) are implemented with existing TF ops, much like the current PyTorch version.

jsaric commented 3 years ago

I managed to write the merge_semantic_and_instance function in CUDA with CuPy and created a new pull request. Please check it out.

You can close this one, and we can move the discussion to the new one.
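For context, a CuPy-based merge step might look roughly like the following. This is a hypothetical sketch, not the kernel from the PR, and it requires a CUDA GPU to run; it assumes one thread per pixel and that each instance already carries a consistent semantic label, so the majority-vote step is omitted.

```python
import cupy as cp

# Hypothetical raw CUDA kernel: one thread per pixel.
_merge_kernel = cp.RawKernel(r'''
extern "C" __global__
void merge(const int* sem, const int* ins, int* pan,
           const int label_divisor, const int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i >= n) return;
    int id = ins[i];
    // Instance pixels: encode semantic label and instance id into one
    // panoptic id; remaining pixels keep the semantic label.
    pan[i] = (id > 0) ? sem[i] * label_divisor + id : sem[i];
}
''', 'merge')

def merge_semantic_and_instance_gpu(sem_seg, ins_seg, label_divisor=1000):
    # sem_seg, ins_seg: int32 CuPy arrays of the same shape.
    n = sem_seg.size
    pan = cp.empty_like(sem_seg)
    threads = 256
    blocks = (n + threads - 1) // threads
    _merge_kernel((blocks,), (threads,),
                  (sem_seg.ravel(), ins_seg.ravel(), pan.ravel(),
                   cp.int32(label_divisor), cp.int32(n)))
    return pan
```

The appeal of the raw-kernel route is that the whole merge becomes a single elementwise pass over the image, with no intermediate tensors allocated between ops.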