Open tymanab opened 3 months ago
I think I'm starting to understand what happens: when the image size (H or W) is odd and the resolution factor is not 1, convert.py's resize operation gives images_x and object_mask a slightly different size compared with the original image size divided by the resolution.
So, how did you solve the problem?
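To make the suspected off-by-one concrete, here is a minimal sketch (the function names and the example size are hypothetical, not from convert.py) showing how two different rounding conventions for the same odd dimension at resolution 2 yield sizes that differ by one pixel, which would then trip a shape check in the cross-entropy loss:

```python
def rounded_dims(h, w, scale):
    # Rounding to the nearest integer, as percentage-based resizers
    # (e.g. ImageMagick-style "-resize 50%") typically do.
    return round(h / scale), round(w / scale)

def floored_dims(h, w, scale):
    # Integer division, as a loader doing h // scale would do.
    return h // scale, w // scale

# A hypothetical odd height, e.g. 1199 x 1600, at resolution 2:
h, w = 1199, 1600
print(rounded_dims(h, w, 2))  # -> (600, 800)
print(floored_dims(h, w, 2))  # -> (599, 800): one pixel shorter
```

If this is indeed the cause, one workaround is to explicitly resize the loaded object mask to the image tensor's actual (H, W) before computing the loss, so both always match regardless of rounding.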
Hi, thanks for sharing this great project. I want to try running it on my own dataset, but training fails when calculating the object cross-entropy loss. The images were prepared following the README.
Here is the error message I got:
I also tried some other datasets, and most of them share the same problem. The LERF-Mask data all works well, but Mip-NeRF 360 scenes (from the original website) such as room and garden also get stuck here. Is there any extra preparation I need to do for my own dataset?