Hello @ranftlr
Thank you for your work and for sharing it here. I have two questions:
When running inference with run_segmentation.py, the number of classes in DPTSegmentationModel is set to 150, whereas ADE20K has 2693 classes (see the call quoted below). Question: were the DPT-Hybrid weights on GitHub trained on the full ADE20K dataset, or were they fine-tuned on another dataset? If they were trained on the full ADE20K dataset, why is the number of classes limited to 150?
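For reference, this is roughly the instantiation in run_segmentation.py that I am asking about (paraphrased from memory; the checkpoint filename is a placeholder and the argument names are my understanding of the repo, not verbatim):

```python
from dpt.models import DPTSegmentationModel

# Placeholder: path to the downloaded DPT-Hybrid ADE20K checkpoint.
model_path = "weights/dpt_hybrid-ade20k.pt"

model = DPTSegmentationModel(
    150,                       # hard-coded number of output classes
    path=model_path,           # pretrained weights
    backbone="vitb_rn50_384",  # DPT-Hybrid backbone
)
model.eval()
```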
In the same script, you suggest BN=True for inference on a single image, which is unusual (I have never used batch normalization with a single image). When I set it to False, I get a runtime error. Question: does BN=True degrade the inference result?
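To make the second question concrete, this is a sketch of what I observe (under my assumptions: that the relevant flag is the `use_bn` keyword set inside DPTSegmentationModel, and that 480x480 is the script's default input size):

```python
import torch

# Continuing from the model built above: in eval() mode the BatchNorm layers use
# the running statistics stored in the checkpoint rather than statistics of my
# single image, so the forward pass itself runs.
with torch.no_grad():
    out = model(torch.randn(1, 3, 480, 480))  # dummy input, just to illustrate the call

# What I mean by "setting BN to False": changing `kwargs["use_bn"] = True` to False
# inside DPTSegmentationModel (dpt/models.py, as far as I can tell) and then loading
# the same pretrained checkpoint raises a runtime error for me, presumably a
# state_dict mismatch on the BatchNorm parameters.
```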
Thank you in advance.