google-research / deeplab2

DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a unified and state-of-the-art TensorFlow codebase for dense pixel labeling tasks.
Apache License 2.0

[ViP-DeepLab] Add wSTQ in numpy for PVPS dataset. #152

Closed meijieru closed 1 year ago

meijieru commented 1 year ago

This PR adds a wSTQ implementation in NumPy. It also adds a unit test to guarantee compatibility between the TF and NumPy implementations, and fixes a dtype error in the TF implementation.
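The diff itself isn't shown in this thread, but the core change a weighted STQ needs is a confusion matrix in which each pixel contributes its per-pixel weight rather than a unit count. A minimal NumPy sketch of that idea (the function name and signature are illustrative, not the PR's actual API):

```python
import numpy as np

def weighted_confusion_matrix(gt, pred, weights, num_classes):
    """Accumulates a confusion matrix where each pixel contributes its
    per-pixel weight instead of a unit count (illustrative sketch; the
    PR's actual function names and signatures may differ)."""
    # Flatten each (gt, pred) pair into a single linear index per pixel.
    idx = gt.reshape(-1) * num_classes + pred.reshape(-1)
    # bincount with float weights sums the weights for each class pair.
    flat = np.bincount(idx, weights=weights.reshape(-1),
                       minlength=num_classes * num_classes)
    return flat.reshape(num_classes, num_classes)

gt = np.array([[0, 1], [1, 1]])
pred = np.array([[0, 1], [0, 1]])
w = np.array([[1.0, 0.5], [0.25, 1.0]])
cm = weighted_confusion_matrix(gt, pred, w, num_classes=2)
# With all-ones weights this reduces to the ordinary confusion matrix,
# which is one way a parity unit test can compare the TF and NumPy paths.
```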

google-cla[bot] commented 1 year ago

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

markweberdev commented 1 year ago

Hi @meijieru,

Thanks for your work, I appreciate it! I ran some quick performance tests to see the difference when using the PR's code. On my local machine, I observed a runtime increase of a bit more than 70%. I would attribute this mainly to how the confusion matrix is computed.

Given that we run this code in some time-critical environments (like the benchmark server), I would suggest not altering the code directly, but instead adding a weighted STQ subclass that inherits from STQ. This way, we keep the performance of the base case while also supporting the weighted STQ.
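The suggested structure might look like the sketch below. The class and method names are assumptions for illustration only; deeplab2's actual STQ API differs, and `_accumulate` here is just a placeholder for the shared statistics update.

```python
import numpy as np

class STQ:
    """Base metric: the unweighted update keeps its original fast path."""

    def __init__(self):
        self.total_weight = 0.0

    def update_state(self, gt, pred):
        # Unit weight per pixel: equivalent to plain counting, so the
        # base class pays no cost for weight support.
        self._accumulate(gt, pred, np.ones(gt.size))

    def _accumulate(self, gt, pred, weights):
        # Placeholder for the shared accumulation logic; the real class
        # would update the STQ statistics (confusion matrix, IoU terms).
        self.total_weight += float(np.sum(weights))

class WeightedSTQ(STQ):
    """Adds per-pixel weights without touching the base-class fast path."""

    def update_state(self, gt, pred, weights):
        self._accumulate(gt, pred, weights)

metric = WeightedSTQ()
metric.update_state(np.array([0, 1]), np.array([0, 0]),
                    np.array([0.5, 2.0]))
```

With this split, benchmark-server runs keep using the unweighted base class unchanged, while PVPS evaluation opts into `WeightedSTQ`.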

What do you think @aquariusjay ?

aquariusjay commented 1 year ago

@markweberdev Thanks for reviewing the code. The suggestion sounds good.

@meijieru Could you please try to update the code to maintain the original computation speed? It would be great if wSTQ could also be computed efficiently. What do you think?

meijieru commented 1 year ago

> @markweberdev Thanks for reviewing the code. The suggestion sounds good.
>
> @meijieru Could you please try to update the code to maintain the original computation speed? It would be great if wSTQ could be also efficiently computed. What do you think?

Thanks for the suggestions! Will look into it soon.

meijieru commented 1 year ago

Several updates:

  1. Maintained the original computation speed by removing the sparse addition from the original implementation.
  2. Fixed a precision error in the TF wSTQ.
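The thread doesn't say exactly what the precision error was, but a common failure mode when accumulating per-pixel statistics in float32 is that the running sum saturates. This is a hedged illustration of that pitfall, not the PR's actual fix:

```python
import numpy as np

# Past 2**24, float32 can no longer represent every integer, so adding
# small per-pixel contributions to a large float32 accumulator silently
# drops them. Accumulating in float64 avoids this.
f32_total = np.float32(2 ** 24)
lost = (f32_total + np.float32(1.0)) == f32_total  # the increment is lost
f64_total = np.float64(2 ** 24)
kept = (f64_total + np.float64(1.0)) != f64_total  # float64 stays exact
print(lost, kept)
```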

Could you please check again? Thanks.

aquariusjay commented 1 year ago

Looks great on my end. Thanks for updating the code, @meijieru

Please wait for input from @markweberdev.

meijieru commented 1 year ago

Updated, thanks for the suggestions.