jamycheung / Trans4PASS

Repository of Trans4PASS (accepted to CVPR2022)
Apache License 2.0
72 stars 17 forks

Replicate Figure 9 on DensePASS dataset #8

Closed Shanmy closed 1 year ago

Shanmy commented 1 year ago

Hi! Thanks for your great work -- it's fantastic! I'm trying to replicate the results in your Figure 9 on outdoor panoramic images from the DensePASS dataset. Specifically, I found that the best mIoU is achieved by Trans4PASS+ small, so I'm using this testing command: `python tools/eval_dp.py --config-file configs/cityscapes/trans4pass_plus_small_512x512.yaml`. However, the resulting segmentation I got is slightly worse than what you showed in Figure 9.

For example, this is the shown result:

Screen Shot 2023-03-26 at 5 00 24 PM

And this is what I got (sorry for the different color palette!): 386_pred

Though similar, there are some noticeable minor differences. I wonder if I'm not using the right configuration/model? Could you point me to the correct setup to replicate these results? Thanks!

Shanmy commented 1 year ago

Another question: I'm new to 360° panorama images, so let me know if I'm wrong. I thought those images should always have a 2:1 aspect ratio, since they span 180° in elevation and 360° in azimuth. Why do all the images in DensePASS have an aspect ratio of 2048:400? Thanks!

elnino9ykl commented 1 year ago

Hi

Regarding 2048:400: this resolution corresponds to real panoramic cameras (360°×70° field of view) used for autonomous driving, like those in DS-PASS: https://github.com/elnino9ykl/DS-PASS
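
As a quick sanity check, the 2048:400 resolution is consistent with that camera's field of view. Assuming a uniform angular resolution across the 360° horizontal sweep, the implied vertical coverage works out to roughly 70°:

```python
# Vertical FoV implied by DensePASS's 2048x400 images, assuming the
# 2048-px width covers the full 360° azimuth at uniform angular resolution.
width_px, height_px = 2048, 400
deg_per_px = 360 / width_px              # ~0.176 degrees per pixel
vertical_fov = height_px * deg_per_px    # ~70.3 degrees
print(vertical_fov)
```

So a 2:1 aspect ratio only applies to full-sphere (360°×180°) equirectangular panoramas; a 360°×70° camera naturally yields the narrower 2048:400 strip.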

Our method can generalize well to 2048:1024 panoramas according to recent studies. https://sat2density.github.io

jamycheung commented 1 year ago

> Hi! Thanks for your great work -- it's fantastic! I'm trying to replicate the results in your Figure 9 on outdoor panoramic images from the DensePASS dataset. Specifically, I found that the best mIoU is achieved by Trans4PASS+ small, so I'm using this testing command: `python tools/eval_dp.py --config-file configs/cityscapes/trans4pass_plus_small_512x512.yaml`. However, the resulting segmentation I got is slightly worse than what you showed in Figure 9.
>
> For example, this is the shown result: Screen Shot 2023-03-26 at 5 00 24 PM
>
> And this is what I got (sorry for the different color palette!): 386_pred
>
> Though similar, there are some noticeable minor differences. I wonder if I'm not using the right configuration/model? Could you point me to the correct setup to replicate these results? Thanks!

Hi, thanks for asking. You could try different configuration files and model sizes to generate the visualization you are interested in, since models can perform slightly differently on a specific sample image. But I think you are right: the Small version generally achieves better mIoU.

steven30currry commented 8 months ago

Hello! I use the command `python tools/eval_dp.py --config-file configs/cityscapes/trans4pass_plus_small_512x512.yaml` and would like to ask how the visualization should be done. I notice that the network output contains both negative and positive values, but I cannot find any rule in the code for mapping the network output to labels.
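
For what it's worth, the usual way to turn raw segmentation logits into a label map is a per-pixel argmax, then a palette lookup for visualization. This is a minimal sketch, not the repository's own visualization code; the `[num_classes, H, W]` logits shape and the `palette` lookup table are assumptions:

```python
import numpy as np

def logits_to_label_map(logits: np.ndarray) -> np.ndarray:
    # logits: raw network output of shape [num_classes, H, W].
    # Values can be negative or positive; the predicted class per pixel
    # is simply the channel with the highest score, so no softmax is needed.
    return np.argmax(logits, axis=0).astype(np.uint8)

def colorize(label_map: np.ndarray, palette: np.ndarray) -> np.ndarray:
    # palette: [num_classes, 3] uint8 RGB lookup table (e.g. the
    # Cityscapes colors), indexed by class id to give an [H, W, 3] image.
    return palette[label_map]
```

Since argmax is monotonic in the logits, applying softmax first would give the same label map; it only matters if you also want per-pixel confidences.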