Open 111368001 opened 1 year ago
Thanks for your interest in our work.
It seems to me that you did not use GPU for inference. The FPS numbers in the paper are reported with a V100 GPU, as in MaskFormer/Mask2Former :)
Please let me know if you have any more questions
I do use a Colab GPU.
And this is the speed comparison between MaskFormer (R101), Mask2Former (R101), and kmax_deeplab (R50), shown in the picture (left to right). According to the paper, the per-image time should be kmax_deeplab < MaskFormer < Mask2Former (seconds), but my test result is MaskFormer < Mask2Former < kmax_deeplab (seconds).
Thanks for the clarification.
Please double-check if GPU is indeed in use (e.g., you may check the power/mem util of the GPU, to see if it is really used). Because as far as I know, the provided model is exported for CPU mode.
If you would like to run the model on GPU mode, please directly use the model file, instead of the exported model. Let me know if you have any more questions.
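One quick way to do the power/memory check suggested above is to query the NVIDIA driver from the notebook itself. A minimal sketch using only the Python standard library (the function name `gpu_visible` is just for illustration, not part of deeplab2):

```python
import shutil
import subprocess

def gpu_visible():
    """Return nvidia-smi's utilization report if an NVIDIA driver is
    visible, else None.

    A rough proxy: if nvidia-smi is absent or fails, TensorFlow is almost
    certainly falling back to CPU, which would explain the slow FPS.
    """
    if shutil.which("nvidia-smi") is None:
        return None
    try:
        return subprocess.check_output(
            ["nvidia-smi",
             "--query-gpu=utilization.gpu,memory.used",
             "--format=csv"],
            text=True)
    except subprocess.CalledProcessError:
        return None

report = gpu_visible()
print(report if report else "no NVIDIA driver visible; likely running on CPU")
```

If utilization stays near 0% while the demo is running, the model is not actually executing on the GPU.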
Is it using GPU?
I need to check the inference speed first. It's strange that the exported model used by the demo can only run in CPU mode; it seems unreasonable that the demo cannot run on the GPU, since all the results in the kMaX paper were obtained with a GPU.
The checkpoints provided in model_zoo.md only contain .data and .index files, with no *.meta file, so I can't load the model directly. I'm on Windows, and deeplab2 doesn't seem to provide an installation.md for Windows.
First, I need to verify the pre-trained model and confirm that it performs better than other methods, as described in the paper; then I will apply for a Linux server for further experiments.
====
Can you provide the kmax_resnet50_coco_train and kmax_resnet50_cityscapes_train exported models (GPU)?
> If you would like to run the model on GPU mode, please directly use the model file, instead of the exported model. Let me know if you have any more questions.
About directly using the model file, is there any example?
```python
forward_pass = tf.function(
    model.call,
    input_signature=[tf.TensorSpec(shape=input_shape)],
    jit_compile=True)
# Some forward passes as warm-up.
forward_pass(input_tensor)[common.PRED_PANOPTIC_KEY]
# Measure the time of the line below.
forward_pass(input_tensor)[common.PRED_PANOPTIC_KEY]
```
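The snippet above warms up the compiled function before timing it, so that `tf.function` tracing/compilation cost is excluded. The same pattern can be sketched in plain Python with a dummy callable standing in for the model (`measure_fps`, `warmup`, and `iters` are illustrative names, not deeplab2 API):

```python
import time

def measure_fps(forward_pass, input_tensor, warmup=5, iters=20):
    """FPS of forward_pass, excluding warm-up (tracing/compilation) runs."""
    for _ in range(warmup):
        forward_pass(input_tensor)  # first calls may include tf.function tracing
    start = time.perf_counter()
    for _ in range(iters):
        forward_pass(input_tensor)
    elapsed = time.perf_counter() - start
    return iters / elapsed

# Dummy model for illustration. With TF on a GPU, fetch the output
# (e.g. call .numpy()) inside the loop so the timing includes device sync,
# since GPU execution is asynchronous.
print(measure_fps(lambda x: x, None))
```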
Hello, I am having trouble exporting the model for GPU use. If anyone has exported a GPU model, please let me know.
The FPS in the paper is 22.8 (0.0438 s / im),
but when I test DeepLab_COCO_Demo.ipynb on Colab, this is the result. It seems very slow; why? Any explanation?
Also, I tested this demo on a local machine to compare with MaskFormer and Mask2Former, and resnet50_kmax_deeplab_coco_train is much slower there too, although according to the paper it should be faster.
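For what it's worth, the two numbers quoted from the paper are consistent with each other, since per-image latency is just the reciprocal of FPS:

```python
fps = 22.8
latency = 1.0 / fps       # seconds per image
print(round(latency, 4))  # 0.0439, i.e. the paper's 0.0438 s/im up to rounding
```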