Closed zsh2000 closed 1 year ago
Hi @zsh2000 ,
Thank you for your interest in our work! We have reused the code from IBRNet to avoid differences in implementation. Regarding your question about aspect ratio, I do not think that really matters here since we are cropping and not resizing. I think the current implementation ensures a fixed total area, i.e. 400 × 600, and appropriately chooses the width based on the randomly selected height.
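To illustrate, here is a minimal sketch of the fixed-area cropping logic described above (variable names mirror the snippet discussed in this thread; this is illustrative, not the repo's exact code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Randomly pick a crop height in [250, 750), as in llff.py.
crop_h = int(rng.integers(low=250, high=750))

# Width is chosen so the total area stays fixed at ~400 * 600 = 240000 pixels,
# rather than preserving the original aspect ratio.
crop_w = int(400 * 600 / crop_h)

print(crop_h, crop_w, crop_h * crop_w)  # area is always just under or equal to 240000
```

Because of the integer truncation, the area is at most 240000 and never more than `crop_h` pixels below it.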
Apologies for the delayed response.
Thanks for your reply!
Dear authors,
Thanks for your great work! I have a question about the image cropping operations during training.
Starting from L123 of ./gnt/data_loaders/llff.py https://github.com/VITA-Group/GNT/blob/c1177f4499ec6381d3d2b862f681390646a7c50d/gnt/data_loaders/llff.py#L123 there are some cropping operations during training with the LLFF dataset.
In the default setting, when factor = 4 for the LLFF dataset, the original resolution should be 1008 × 756. I think crop_h = np.random.randint(low=250, high=750) means getting a cropped patch with height within [250, 750]. But crop_w = int(400 * 600 / crop_h) does not give the corresponding width that preserves the original aspect ratio of the patch.
I think there should be something like crop_w = int(600 * crop_h / 400), but in this case the width / height ratio of the cropped patch becomes 3:2, which is different from the original one, which is 4:3. I'm wondering whether this is a bug.
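To make the difference concrete, here is a small comparison (hypothetical, just for illustration) between the fixed-area width in the current code and an aspect-preserving width derived from the original 1008 × 756 resolution:

```python
# For a concrete crop height, compare the two ways of choosing the width.
crop_h = 600

# Current code: fix the total area at 400 * 600 pixels.
crop_w_area = int(400 * 600 / crop_h)       # -> 400, giving a 400 x 600 patch

# Aspect-preserving alternative: keep the original 1008:756 = 4:3 (w:h) ratio.
crop_w_aspect = int(crop_h * 1008 / 756)    # -> 800, giving an 800 x 600 patch

print(crop_w_area, crop_w_aspect)
```

So the two strategies can produce quite different patch shapes for the same height; the current code trades aspect ratio for a constant pixel count per crop.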
Thank you in advance!