snavely opened this issue 3 years ago
Thank you for the wonderful dataset! I have noticed significant aliasing in the images in the dataset (especially in the low-resolution data, but also in the high-resolution data). Here are some examples:

High-resolution image: https://github.com/kwea123/BlendedMVS_scenes/blob/master/large/5afacb69ab00705d0cefdd5b.jpg

(Aliasing is noticeable in various places, but is perhaps most noticeable on the race track in the stadium in the background, at the top middle of the image.)

Low-resolution image (5bf26cbbd43923194854b270\blended_images\00000003.jpg):

(Aliasing is evident in the rooftops, as well as in the wires crossing the water.)

Perhaps there is some aliasing from the rendering process (if anti-aliasing is not used when rendering the high-res textures), and perhaps some additional aliasing from downsampling the high-resolution images to low-resolution ones.
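The downsampling half of this hypothesis is easy to reproduce in isolation. Here is a minimal sketch that compares plain decimation (no low-pass prefilter) against area averaging; the input path and the 4x factor are illustrative assumptions, not the dataset's actual pipeline:

```python
import cv2

# Illustrative input: one high-res blended image (path is an assumption).
hi = cv2.imread("5afacb69ab00705d0cefdd5b.jpg")
factor = 4  # illustrative downsampling factor, not the dataset's actual ratio

# Naive decimation: keep every 4th pixel with no low-pass filtering;
# detail finer than the new pixel grid folds back as aliasing artifacts.
naive = hi[::factor, ::factor]

# Area averaging: each output pixel averages a factor-by-factor block,
# acting as a box low-pass filter that suppresses most of the aliasing.
h, w = hi.shape[:2]
area = cv2.resize(hi, (w // factor, h // factor), interpolation=cv2.INTER_AREA)

cv2.imwrite("naive_decimation.jpg", naive)
cv2.imwrite("area_averaged.jpg", area)
```

Viewing the two outputs side by side on a scene with fine structure (rooftops, wires) shows the characteristic jagged edges in the decimated version.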
I'm concerned that this aliasing might hinder learning from this dataset. Would it be possible to look into this issue and potentially rerender the dataset?
Hi @snavely,

Thanks for your suggestion! We use a simple renderer to render the high-res images and generate the low-res ones via image downsampling. I think both steps might cause the aliasing problem.

I am not sure whether I will have time to rerender the whole dataset in the next few months, but I will definitely try to fix this problem in the future. If it is urgent for you, you could consider generating low-res blended images directly from the textured meshes and the low-res input images.
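For anyone needing a stopgap before a rerender: a simpler approximation than the mesh-based regeneration suggested above is to downscale the existing high-res blended images with a resampling filter that low-passes first. A minimal sketch, assuming Pillow, a per-scene `blended_images` folder of high-res JPEGs, and a 768x576 low-res target; the output folder name is hypothetical:

```python
from pathlib import Path
from PIL import Image

# Target size of the regenerated low-res images (768x576 assumed here).
TARGET = (768, 576)

def regenerate_lowres(scene_dir: str) -> None:
    """Downscale high-res blended images with Lanczos filtering.

    Pillow's LANCZOS resampling applies a windowed-sinc low-pass filter
    while resizing, avoiding the aliasing of naive decimation.
    Paths are illustrative; adjust to the actual dataset layout.
    """
    src = Path(scene_dir) / "blended_images"          # high-res inputs
    dst = Path(scene_dir) / "blended_images_lowres"   # regenerated outputs
    dst.mkdir(exist_ok=True)
    for img_path in sorted(src.glob("*.jpg")):
        with Image.open(img_path) as img:
            img.resize(TARGET, resample=Image.LANCZOS).save(dst / img_path.name)

regenerate_lowres("5bf26cbbd43923194854b270")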
Got it. Thank you!