[Closed] Shanmy closed this issue 1 year ago.
Another question: I'm new to 360° panorama images, so let me know if I'm wrong. I thought those images should always have a 2:1 aspect ratio, since they span 180° in elevation and 360° in azimuth. Why do all the images in DensePASS have an aspect ratio of 2048:400? Thanks!
Hi,
Regarding 2048:400: this resolution corresponds to real panoramic cameras for autonomous driving with a 360°x70° field of view, like those in DS-PASS: https://github.com/elnino9ykl/DS-PASS
According to recent studies, our method also generalizes well to 2048:1024 panoramas: https://sat2density.github.io
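As a quick sanity check on the aspect ratio question: in an equirectangular projection, pixels map linearly to angles, so the width:height ratio should match the horizontal:vertical field-of-view ratio. A minimal sketch (the FOV and resolution values are taken from the thread above):

```python
# Equirectangular projection: width/height should equal h_fov/v_fov.
h_fov, v_fov = 360, 70      # camera field of view in degrees (360x70)
width, height = 2048, 400   # DensePASS image resolution

print(width / height)       # 5.12
print(h_fov / v_fov)        # ~5.14, close to 5.12, so a 2048x400 image
                            # is consistent with a 360x70 camera, while a
                            # full 360x180 sphere would need 2:1
```

This is why DensePASS images are not 2:1: the cameras only cover 70° vertically, not the full 180°.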
Hi! Thanks for your great work -- it's fantastic! I'm trying to replicate the results in your Figure 9 on outdoor panoramic images from the DensePASS dataset. Specifically, I found that the best mIoU is achieved by Trans4PASS+ Small, so I'm using this testing command:
python tools/eval_dp.py --config-file configs/cityscapes/trans4pass_plus_small_512x512.yaml
However, the resulting segmentation I got is slightly worse than what you showed in Figure 9. For example, this is the shown result:
And this is what I got (sorry for the different color palette!): ![386_pred](https://user-images.githubusercontent.com/31761400/227813200-d801bf84-4bb8-4d9a-b951-2e9ab78bb45c.png)
Though similar, there are some noticeable minor differences. I wonder if I'm not using the right configuration/model? Could you point me to the correct setup to replicate these results? Thanks!
Hi, thanks for asking. You could try different configuration files and model sizes to generate the visualization of interest, since models may perform slightly differently on specific sample images. But I think you are right: the Small version generally achieves the best mIoU.
Hello! I'm using the command "python tools/eval_dp.py --config-file configs/cityscapes/trans4pass_plus_small_512x512.yaml" and would like to ask how to visualize the results. I noticed that the network output contains both negative and positive values, but I couldn't find any rules in the code for mapping the network output to labels.
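On the negative/positive values: the network outputs unnormalized per-class scores (logits), so the usual mapping to labels is an argmax over the class dimension, followed by a per-class color palette for visualization. A minimal sketch, assuming the output is an array of shape (num_classes, H, W); the random logits and palette here are placeholders, not the repo's actual data:

```python
import numpy as np

# Hypothetical logits from the network: shape (num_classes, H, W).
# Values can be negative or positive -- they are unnormalized scores,
# so the predicted label is simply the argmax over the class axis.
rng = np.random.default_rng(0)
num_classes, H, W = 19, 4, 6            # 19 = Cityscapes train classes
logits = rng.standard_normal((num_classes, H, W))

label_map = np.argmax(logits, axis=0)   # (H, W) integer class ids

# Colorize with a (placeholder) per-class palette for visualization.
palette = rng.integers(0, 256, size=(num_classes, 3), dtype=np.uint8)
color_image = palette[label_map]        # (H, W, 3) RGB image

print(label_map.shape, color_image.shape)
```

For results comparable to the paper's figures, you would swap the placeholder palette for the standard Cityscapes color palette.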