Hi,
For pre-training, we indeed use 256x256 images (both for habitat and real image pairs) from which we extract 224x224 crops.
What we find the most important for downstream tasks is to both train and test at the same resolution, even if different from pre-training. This is why we use a tiling-based approach for stereo/flow at test time. While relative positional embedding helps, it is not enough to generalize to any resolution at test time.
Overall, especially once real image pairs are included, the pre-training should be effective irrespective of the focals or resolution of the downstream tasks. Pre-training at higher resolution is likely to be better but it would be slow (DINOv2 actually pre-trained at 224x224 first before doing a second step at a larger resolution, and a similar strategy could be used here if needed).
Best Philippe
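To make the tiling-based inference mentioned above concrete, here is a minimal sketch. It assumes a hypothetical `model(crop1, crop2)` that returns a per-pixel prediction at crop resolution; the tile size, overlap, and the simple averaging of overlapping tiles are illustrative choices, not the actual CroCo stereo/flow settings.

```python
import torch


def tiled_inference(model, img1, img2, tile=224, overlap=32):
    """Run a pairwise model tile by tile and average overlapping predictions.

    Illustrative only: `model` is assumed to take two [B, 3, tile, tile]
    crops and return a [B, C, tile, tile] map (e.g. disparity or flow);
    both images are assumed to be at least `tile` pixels in each dimension.
    """
    B, _, H, W = img1.shape
    stride = tile - overlap
    # top-left corners of the tiles, making sure the image borders are covered
    ys = list(range(0, H - tile + 1, stride))
    xs = list(range(0, W - tile + 1, stride))
    if ys[-1] != H - tile:
        ys.append(H - tile)
    if xs[-1] != W - tile:
        xs.append(W - tile)

    out, weight = None, None
    for y in ys:
        for x in xs:
            pred = model(img1[:, :, y:y + tile, x:x + tile],
                         img2[:, :, y:y + tile, x:x + tile])
            if out is None:
                out = torch.zeros(B, pred.shape[1], H, W, device=pred.device)
                weight = torch.zeros(1, 1, H, W, device=pred.device)
            out[:, :, y:y + tile, x:x + tile] += pred
            weight[:, :, y:y + tile, x:x + tile] += 1.0
    return out / weight  # every pixel is covered by at least one tile
```

One consequence of this scheme is visible in the follow-up question below: each prediction only sees the context within its own tile, so information cannot propagate across tile boundaries.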
Thank you for your answer!
What we find the most important for downstream tasks is to both train and test at the same resolution
Is it possible to use the training scheme in DUST3R, "We randomly select the image aspect ratios for each batch (e.g. 16/9, 4/3, etc), so that at test time our network is familiar with different image shapes", so as to avoid tiling-based inference? The tiling-based approach doesn't work well when the image has large textureless areas, because context information cannot be propagated from textured areas to textureless areas if they fall in different tiles.
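For reference, a minimal sketch of such per-batch aspect-ratio sampling, in the spirit of the DUST3R quote above; the `ASPECT_RATIOS` table, the target resolutions, and the `collate_with_random_shape` helper are hypothetical illustrations, not the actual DUST3R implementation.

```python
import random

import torch
import torchvision.transforms.functional as TF

# Candidate training shapes with a roughly constant pixel budget;
# the exact list and resolutions are assumptions for illustration.
ASPECT_RATIOS = {
    "16:9": (288, 512),
    "4:3":  (336, 448),
    "1:1":  (384, 384),
    "3:4":  (448, 336),
}


def collate_with_random_shape(pairs):
    """Resize every image pair in the batch to one randomly chosen shape.

    `pairs` is assumed to be a list of (img1, img2) tensors of shape [3, H, W].
    Picking the shape per batch (not per sample) keeps the tensors stackable.
    """
    h, w = random.choice(list(ASPECT_RATIOS.values()))
    batch1, batch2 = [], []
    for img1, img2 in pairs:
        # plain resize here; the actual augmentation could crop instead
        batch1.append(TF.resize(img1, [h, w]))
        batch2.append(TF.resize(img2, [h, w]))
    return torch.stack(batch1), torch.stack(batch2)
```

Such a collate function could be passed to a PyTorch `DataLoader` via its `collate_fn` argument, so that each batch ends up with a different but internally consistent shape.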
In my opinion, it should work yes (but we don't plan to launch such experiments on our side).
I see. Thank you!
Hello, I notice that the setting used for Habitat image generation is:
When compared to the images used for downstream fine-tuning, there are two differences:
I wonder whether it would be better to increase the crop size to match the downstream resolution, or whether it doesn't matter thanks to the relative positional embedding?