minosworld / minos

MINOS: Multimodal Indoor Simulator
MIT License

pathNumDoors in episode_states_suncg.csv #38

Open kojimano opened 6 years ago

kojimano commented 6 years ago

What exactly does pathNumDoors in episode_states_suncg.csv specify? This field is used to filter episodes from the scenes, and it is also used by the files under /envs. I initially assumed it removes trivial episodes by requiring each episode to pass through the number of doors given in pathNumDoors, but that is not true. For example, row 33302 of episode_states_suncg.csv has pathNumDoors 1 and pathDist 1.273, yet there are apparently zero doors between the start point and the goal point.

Another problem I have been observing is that in some episodes the agent can never reach the goal from the start. This is because free spaces are not necessarily all connected on the ground-truth map, and since the start and end points are randomly sampled from free space, some bad episodes are generated. I think such episodes could be detected and removed whenever an A* search finds no path given the map, start, and end point; a rough sketch of such a check is below.
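A minimal illustration of that filtering idea, assuming the map can be rasterized into a 2D occupancy grid (the grid format and the `filter_episodes` helper are my own assumptions, not MINOS code): a plain BFS reachability check is enough to reject start/goal pairs that lie in disconnected regions of free space.

```python
from collections import deque

def is_reachable(free, start, goal):
    """BFS over a 2D occupancy grid.

    free  -- 2D array of booleans, True where the cell is traversable
    start -- (row, col) of the sampled start cell
    goal  -- (row, col) of the sampled goal cell
    Returns True if goal is reachable from start through 4-connected free cells.
    """
    rows, cols = len(free), len(free[0])
    seen = {start}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and free[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def filter_episodes(episodes, free):
    """Drop episodes whose start and goal lie in disconnected free-space components."""
    return [ep for ep in episodes if is_reachable(free, ep['start_cell'], ep['goal_cell'])]
```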

kojimano commented 6 years ago

This issue occurs because MINOS has no functionality to enforce a custom goal position when working with pointgoal (the default behavior is to randomly sample the goal position). This also causes the episode conditions in the test set to vary between runs. I did a simple fix, but this should be fixed officially since it is a significant bug. After applying the fix and re-running the evaluation pipeline, the UNREAL agent on pointgoal performed significantly worse than the score reported in the MINOS technical report. (This is quite expected, since there are no longer episodes where the agent can reach the goal simply by following the goal's orientation.)

To summarize my fix:

  1. Fix RoomSimulator.py to handle the pointgoal case (line 101~); a rough sketch of this change follows the list.

  2. Fix SimState.js to handle the input from RoomSimulator.py (line 399~). (Do not forget to rebuild using build.sh.)

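For reference, here is roughly what the Python-side change looks like in spirit. This is only a hypothetical sketch under my own assumptions: the `configure_pointgoal_episode` name, the config dictionary fields, and `sim.configure` are illustrative, not the actual RoomSimulator.py / MINOS API, and the real fix also requires the matching SimState.js change plus a rebuild.

```python
# Hypothetical sketch (field names and sim.configure are assumptions, not the MINOS API):
# if the episode already carries a goal position, forward it to the simulator
# instead of letting the simulator sample a random goal.
def configure_pointgoal_episode(sim, episode):
    config = {
        'start': {'position': episode['start_pos'], 'angle': episode['start_angle']},
    }
    if episode.get('goal_pos') is not None:
        # enforce the pre-sampled goal from episode_states_suncg.csv
        config['goal'] = {'type': 'position', 'position': episode['goal_pos']}
    else:
        # previous behavior: let the simulator sample a goal at random
        config['goal'] = {'type': 'random'}
    sim.configure(config)
```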

kojimano commented 6 years ago

Could anybody clarify whether this bug affected the scores reported in the MINOS technical report? (I assume it did.)

kojimano commented 6 years ago

Could somebody give me a follow-up on this?

nina124 commented 6 years ago

Hi @kojimano, sorry, I cannot say anything about this bug. I just started training UNREAL, but it learned nothing meaningful. Could you share your training experience? I tried the UNREAL baseline with `python3 main.py --env_type indoor --env_name pointgoal_suncg_se --parallel_size 10`, but the agent failed to learn anything meaningful. (The TensorBoard score curve was attached as a screenshot.)

Do you use the default hyperparameter settings (the flags in options.py) when training in this environment? What is the final score? Could you share the learning curve of the score (the TensorBoard result)? And have you trained the other two environments, objectgoal_suncg_mf and roomgoal_mp3d_s? How did they do?

kojimano commented 6 years ago

Hi @nina124. As far as I know, this issue hasn't been officially fixed in MINOS. However, in my experience the UNREAL agent did learn something meaningful with the current MINOS setup. I trained on pointgoal and roomgoal. I have three tips for you:

  1. Train on furnished rooms instead of empty rooms (the UNREAL agent barely uses its vision module in empty rooms).

  2. Try to use the default UNREAL settings (do not change the parallel size, etc.).

  3. Besides watching the scores during training, try to visualize the agent's behavior on the val set. (I am not entirely sure what the score in your graph represents, but presumably the total accumulated reward.)

Let me know if you have further questions.

nina124 commented 6 years ago

Hi @kojimano, thanks for your help! Sorry, I have been occupied with some other work. For now I have only tried the pointgoal task. The score above is the episode reward from the original code; because it includes the -0.1 reward at every step, the total episode reward tends to be less than 0. So I added a TensorBoard summary of the success rate, as sketched below. Following your tips, the UNREAL agent successfully learned some meaningful policies, and I have observed the success rate increase during training. I will try the other tasks. Thanks again.
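For anyone who wants to do the same, here is a minimal sketch of logging such a success-rate summary with the TensorFlow 1.x summary writer. The episode bookkeeping (the `successes` and `episodes` counters and the `log_success_rate` helper) is my own illustration, not code from the MINOS or UNREAL repositories, and the log directory is just a placeholder.

```python
import tensorflow as tf  # TF 1.x style summary API

summary_writer = tf.summary.FileWriter('./log/pointgoal_suncg_se')

def log_success_rate(successes, episodes, global_step):
    """Write a manual scalar summary with the fraction of successful episodes so far."""
    rate = successes / max(episodes, 1)
    summary = tf.Summary(value=[tf.Summary.Value(tag='success_rate', simple_value=rate)])
    summary_writer.add_summary(summary, global_step)
    summary_writer.flush()
```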