Closed phongnhhn92 closed 4 years ago
Sorry that Matterport is a pain -- we would have released the images, but we weren't allowed to. If the check is failing, it means the images being generated on the fly are not the ones we used, so the numbers you get when you comment out the line are not comparable (and this is what I would expect -- hence the check).
You have two choices:
You could just use the new set of images (but then please run an evaluation on KITTI or the RealEstate dataset to ensure that your version of PyTorch is running the models correctly).
Try to diagnose why you're getting different generated images. My hunch is that you're not loading the dataset splits correctly. The splits are at https://github.com/facebookresearch/synsin/tree/master/data/scene_episodes and they decide which environment to load. If the splits are not found, they are regenerated, so the code would still run but the splits would be wrong. I would check that the right splits are being found in that part of the code. I hope that helps!
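To catch this failure mode early, here is a minimal sanity check you could run before evaluation (this is not the repo's actual code; `splits_present` and the split file name are illustrative assumptions): if the `scene_episodes` directory is empty or missing, the evaluation would silently regenerate different splits instead of using the released ones.

```python
import os
import tempfile

def splits_present(splits_dir):
    """True if the split directory exists and contains at least one file."""
    return os.path.isdir(splits_dir) and len(os.listdir(splits_dir)) > 0

# Demonstration with a temporary directory standing in for
# data/scene_episodes; the file name below is made up.
with tempfile.TemporaryDirectory() as d:
    assert not splits_present(d)                     # empty: splits would be regenerated
    open(os.path.join(d, "mp3d_test_split.npy"), "w").close()
    assert splits_present(d)                         # split file found: safe to evaluate
```

If the check fails, re-clone the `data/scene_episodes` folder from the repo rather than letting the code regenerate the splits.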
I also encountered this error, and I figured out that this was a habitat version issue.
After switching back to these commits:
habitat-sim: d383c2011bf1baab2ce7b3cd40aea573ad2ddf71
habitat-api: e94e6f3953fcfba4c29ee30f65baa52d6cea716e
as mentioned in https://github.com/facebookresearch/synsin/issues/2#issuecomment-605827875, the evaluation now runs as expected.
Hi, I have been trying to replicate your results on the MP3D dataset and I receive this error:
So I guess the problem is in this line https://github.com/facebookresearch/synsin/blob/82ff948f91a779188c467922c8f5144018b40ac8/evaluation/eval.py#L74 where you compare the pose of the first item in the batch with a cached pose from a txt file. In my case, it doesn't seem to match.
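For reference, the failing comparison amounts to something like the following (a simplified sketch, not the repo's actual code; `poses_match`, the tolerance, and the pose layout are illustrative assumptions):

```python
import math

def poses_match(batch_pose, cached_pose, tol=1e-5):
    """Element-wise comparison of a generated pose against a cached one."""
    return len(batch_pose) == len(cached_pose) and all(
        math.isclose(a, b, abs_tol=tol)
        for a, b in zip(batch_pose, cached_pose)
    )

cached = [0.0, 1.5, -2.0, 0.25]   # e.g. one row of the cached pose txt file
assert poses_match([0.0, 1.5, -2.0, 0.25], cached)      # same episode: check passes
assert not poses_match([0.3, 1.5, -2.0, 0.25], cached)  # different episode: check fails
```

When this fails, the on-the-fly episode differs from the one the cached pose was recorded for, so any metrics computed afterwards are not comparable to the paper's numbers.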
The only change I made is editing opts.config, since you hardcoded it https://github.com/facebookresearch/synsin/blob/82ff948f91a779188c467922c8f5144018b40ac8/evaluation/eval.py#L119 to your own path. This is my config:
opts.config = '/home/phong/data/Work/Paper3/Libraries/habitat-api/configs/tasks/pointnav_rgbd.yaml'
When I comment out the checking function https://github.com/facebookresearch/synsin/blob/82ff948f91a779188c467922c8f5144018b40ac8/evaluation/eval.py#L202 I can run the evaluation code, but I get worse results than those reported in both the paper and this repo. Is there any way to fix this?