google-research / deeplab2

DeepLab2 is a TensorFlow library for deep labeling, aiming to provide a unified and state-of-the-art TensorFlow codebase for dense pixel labeling tasks.
Apache License 2.0

inference speed much slower than paper #143

Open 111368001 opened 1 year ago

111368001 commented 1 year ago

The FPS in the paper is 22.8 (0.0438 s/im).

But when I test DeepLab_COCO_Demo.ipynb on Colab, the result seems very slow. Why? Any explanation?

[screenshot of the Colab timing result]

Also, I tested this demo on a local machine to compare with MaskFormer and Mask2Former; resnet50_kmax_deeplab_coco_train is much slower there too, even though the paper says it should be fast.

yucornetto commented 1 year ago

Thanks for your interest in our work.

It seems to me that you did not use a GPU for inference. The FPS numbers in the paper were measured on a V100 GPU, as in MaskFormer/Mask2Former :)

Please let me know if you have any more questions.

111368001 commented 1 year ago

> Thanks for your interest in our work.
>
> It seems to me that you did not use GPU for inference. The FPS numbers in the paper are reported with a V100 GPU, as in MaskFormer/Mask2Former :)
>
> Please let me know if you have any more questions

I do use a Colab GPU.

And this is the speed comparison between MaskFormer (R101), Mask2Former (R101), and kMaX-DeepLab (R50), shown left to right in the screenshots. According to the paper, the per-image time (seconds) should rank kmax_deeplab < MaskFormer < Mask2Former, but my test result is MaskFormer < Mask2Former < kmax_deeplab.

yucornetto commented 1 year ago

Thanks for the clarification.

Please double-check whether the GPU is indeed in use (e.g., check its power/memory utilization to see if it is really being used). As far as I know, the provided model is exported for CPU mode.
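A quick way to verify this from code (a sketch, not part of the original thread; it uses TensorFlow's standard `tf.config.list_physical_devices` API and degrades gracefully if TF is not installed):

```python
def visible_gpus():
    """Return the GPU devices TensorFlow can see.

    An empty list means inference will silently fall back to CPU;
    None means TensorFlow is not installed in this environment.
    """
    try:
        import tensorflow as tf
    except ImportError:
        return None
    return tf.config.list_physical_devices("GPU")
```

If this returns an empty list, TensorFlow will run on CPU even in a "GPU runtime", which would explain the slow timings. On Colab, running `!nvidia-smi` in a cell also shows live GPU memory/power utilization while the model runs.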

If you would like to run the model in GPU mode, please use the model file directly instead of the exported model. Let me know if you have any more questions.

111368001 commented 1 year ago

> Thanks for the clarification.
>
> Please double-check if GPU is indeed in use (e.g., you may check the power/mem util of the GPU, to see if it is really used). Because as far as I know, the provided model is exported for CPU mode.
>
> If you would like to run the model on GPU mode, please directly use the model file, instead of the exported model. Let me know if you have any more questions.

Is it using the GPU?

I need to check the inference speed first, and it's strange that the exported model used by the demo can only run in CPU mode. It seems unreasonable that the demo can't run on the GPU, since all the kMaX results in the paper were obtained on a GPU.

The checkpoints provided in model_zoo.md only contain .data and .index files, with no *.meta file, so I can't load the model directly. I'm on Windows, and it seems deeplab2 doesn't provide an installation.md for Windows.

First, I need to verify that the pre-trained model performs better than other methods, as described in the paper; after that I would apply to use a Linux server for further experiments.

---

Can you provide the kmax_resnet50_coco_train and kmax_resnet50_cityscapes_train exported models (GPU)?

111368001 commented 1 year ago

> If you would like to run the model on GPU mode, please directly use the model file, instead of the exported model. Let me know if you have any more questions.

Regarding using the model file directly: is there any example?

yucornetto commented 1 year ago

```python
forward_pass = tf.function(
    model.call,
    input_signature=[tf.TensorSpec(shape=input_shape)],
    jit_compile=True)

# Some forward passes as warm-up.
forward_pass(input_tensor)[common.PRED_PANOPTIC_KEY]

# Measure the time of the code below.
forward_pass(input_tensor)[common.PRED_PANOPTIC_KEY]
```
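To turn that into a repeatable measurement, a minimal timing harness could look like the following (a sketch: `forward_pass`, `input_tensor`, and `common.PRED_PANOPTIC_KEY` are assumed to come from the snippet above; the warm-up calls absorb `tf.function` tracing and XLA compilation so they don't pollute the timing):

```python
import time

def benchmark(fn, warmup=3, runs=10):
    """Average per-call latency of `fn` in seconds.

    Warm-up calls run first (graph tracing / XLA compilation happens
    there), then the steady-state calls are timed and averaged.
    """
    for _ in range(warmup):
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Hypothetical usage against the forward pass above:
# latency = benchmark(
#     lambda: forward_pass(input_tensor)[common.PRED_PANOPTIC_KEY])
# print(f"{latency * 1000:.1f} ms/image ({1 / latency:.1f} FPS)")
```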

111368001 commented 1 year ago

> If you would like to run the model on GPU mode, please directly use the model file, instead of the exported model. Let me know if you have any more questions.

  1. How do I use the model file directly, with the GPU, to get a result picture like the demo's?
  2. I see there is a file called export_model.py. Can I use it to export the kMaX models provided in model_zoo.md? Any usage example?

PritiDrishtic commented 1 year ago

Hello, I am having trouble exporting the model to use the GPU. If anyone has exported a model for GPU use, please let me know.