Thank you for your excellent work. I have a few questions regarding the current code:
Will the dataset generation code be released? I noticed that many language-goal examples involve rooms, e.g., "the white book on the coffee table in the living room," yet HM3DSem does not provide room annotations. How do you determine which region corresponds to which room?
I may have missed something, but in the episode dataset, each .json.gz file seems to annotate only the agent's starting position and the target object's information; the ground-truth (GT) trajectory (the agent's position at each step) appears to be missing. How is the trajectory obtained for training?
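For reference, this is how I am inspecting the episode files. This is only a sketch that assumes the files are gzip-compressed JSON with a top-level "episodes" list, as in standard Habitat-style datasets; the path and field names are placeholders, not taken from your release:

```python
import gzip
import json

def load_episodes(path):
    """Load a gzip-compressed JSON episode file (Habitat-style format assumed)."""
    with gzip.open(path, "rt") as f:
        return json.load(f)["episodes"]

# Usage (hypothetical path; adjust to the actual dataset layout):
# episodes = load_episodes("datasets/train/content/some_scene.json.gz")
# print(sorted(episodes[0].keys()))  # inspect which fields each episode carries
```

From this kind of inspection I only see start pose and goal fields per episode, which is why I am asking where the per-step trajectory comes from.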
Will all episodes in the .json.gz file be used during training, or are some of them filtered out?