Singh-sid930 opened this issue 3 years ago (status: Open)
The checkpoints trained on Matterport used all 90 scenes, so if you need to test performance on held-out data, those can't be used.
This also may be useful: https://github.com/facebookresearch/habitat-lab/blob/master/habitat_baselines/agents/ppo_agents.py
I am working off of ppo_agents.py, and my act method looks something like this:

```python
def act(self, depth, goal, t):
    batch = {
        "depth": depth.view(1, depth.shape[0], depth.shape[1], depth.shape[2]),
        "pointgoal_with_gps_compass": goal.view(1, -1),
    }
    # A zero mask on the first step resets the recurrent hidden state.
    if t == 0:
        not_done_masks = torch.zeros(1, 1)
        print("for the first step mask is zero")
    else:
        not_done_masks = torch.ones(1, 1)
    _, actions, _, self.hidden_state = self.actor_critic.act(
        batch,
        self.hidden_state,
        self.prev_actions,
        not_done_masks,
        deterministic=True,
    )
    print("actions:", actions)
    self.prev_actions = torch.clone(actions)
    return actions.item()
```
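For reference, the state this method relies on could be initialized as below. This is a sketch, not the actual ppo_agents.py code: the hidden-state layout (num_recurrent_layers, num_envs, hidden_size) follows the habitat-baselines convention, but the sizes 2 and 512 are assumptions and must match whatever checkpoint is actually loaded.

```python
import torch

# Hypothetical initialization for the act() sketch above. A 2-layer
# recurrent policy with hidden size 512 and a single environment is
# assumed; verify these against the loaded checkpoint.
num_recurrent_layers = 2
hidden_size = 512
hidden_state = torch.zeros(num_recurrent_layers, 1, hidden_size)
prev_actions = torch.zeros(1, 1, dtype=torch.long)  # no action taken yet
not_done_masks = torch.zeros(1, 1)                  # zero => episode start
```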
Can you please confirm the inputs:

- `pointgoal_with_gps_compass` is a tensor `[rho, phi]`, where rho is the distance from the robot to the goal in meters and phi is the relative angle from the robot to the goal in radians (clockwise negative, counterclockwise positive, in the egocentric frame).

Also, by setting the `deterministic` flag to false I am getting this error:

```
File "/home/siddharth/habitat-api/habitat_baselines/common/utils.py", line 31, in sample
    return super().sample(sample_shape).unsqueeze(-1)
TypeError: <lambda>() takes 1 positional argument but 2 were given
```

Am I creating the batch input wrong? I am using the gibson-4plus-resnet50.pth model, and my depth sensor height is set at 1.25m.
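For concreteness, the [rho, phi] convention asked about above can be sketched as follows. This is a hypothetical helper, not habitat code; it assumes a 2D world frame with the agent's heading measured counterclockwise from the x-axis, and the counterclockwise-positive sign convention described in the question.

```python
import math

def pointgoal_with_gps_compass(agent_xy, agent_heading, goal_xy):
    # rho: straight-line distance to the goal in meters.
    dx = goal_xy[0] - agent_xy[0]
    dy = goal_xy[1] - agent_xy[1]
    rho = math.hypot(dx, dy)
    # phi: bearing to the goal in the agent's egocentric frame,
    # counterclockwise positive, wrapped to [-pi, pi).
    phi = math.atan2(dy, dx) - agent_heading
    phi = (phi + math.pi) % (2 * math.pi) - math.pi
    return rho, phi
```

Under this convention a goal one meter to the agent's left (counterclockwise) gives phi = +pi/2.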
No idea what is going on there. My guess would be that you are on an old PyTorch version that we don't support. Which version are you using?
I am using version 1.7.0
Odd, I have never seen that error before, so I have no idea where to begin. If the input sizes were wrong, something else would have failed before that point.
In that case I can investigate what is going on. The inputs as I described above are correct then, I believe? Is the habitat config file used for training the agent available somewhere in the repo or elsewhere?
Yeah, that looks correct, though I always forget the direction of phi. The config is here: https://github.com/facebookresearch/habitat-lab/blob/master/habitat_baselines/config/pointnav/ddppo_pointnav.yaml
Hi, sorry for the late response; this is probably my last query. I see that the camera height, the robot's turn angle, etc. are not mentioned in the configuration. The turn angle and forward step seem to be mentioned in the paper. Are the configurations that were used for training available somewhere?
The configuration used for training is what I linked to above. That config overrides some values but mostly just uses the defaults: https://github.com/facebookresearch/habitat-lab/blob/master/habitat/config/default.py
The camera height is 1.25m
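For quick reference, the defaults pointed to above include the simulator's action and sensor geometry. The fragment below is reconstructed from memory of habitat's default.py at the time; the key names and values are assumptions and should be verified against that file:

```yaml
# Assumed defaults from habitat/config/default.py (verify against the file):
SIMULATOR:
  FORWARD_STEP_SIZE: 0.25   # meters per move_forward action
  TURN_ANGLE: 10            # degrees per turn_left / turn_right action
  DEPTH_SENSOR:
    POSITION: [0, 1.25, 0]  # camera height of 1.25 m, as confirmed above
    MIN_DEPTH: 0.0
    MAX_DEPTH: 10.0
    NORMALIZE_DEPTH: True   # depth rescaled to [0, 1]
```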
Hello,

I am trying to test the DDPPO baseline on the Matterport3D dataset. I am using gibson-4plus-resnet50.pth for testing, and to start with I am only trying to make the agent move forward; in the middle there is a wall which it should avoid given the depth information. (I have imported the baselines classes and am working off of them.) Some information about the input being passed into the model would be helpful. Please correct me if any of the following are wrong:

- Depth is a tensor of the correct shape, normalized to [0, 1]
- `pointgoal_with_gps_compass` is a tensor `[rho, phi]`, where rho is the distance from the robot to the goal in meters and phi is the relative angle from the robot to the goal in radians (clockwise negative, counterclockwise positive, in the egocentric frame)

Also, by setting the `deterministic` flag to false I am getting this error:

```
File "/home/siddharth/habitat-api/habitat_baselines/common/utils.py", line 31, in sample
    return super().sample(sample_shape).unsqueeze(-1)
TypeError: <lambda>() takes 1 positional argument but 2 were given
```

Am I creating the batch input wrong? I am using the gibson-4plus-resnet50.pth model, and my depth sensor height is set at 1.25m.

Also, is there a chance there is a difference between the checkpoint trained on Gibson and the one trained on both Gibson and Matterport3D which might be causing trouble?

I meet the same issue:

```
File "/home/siddharth/habitat-api/habitat_baselines/common/utils.py", line 31, in sample
    return super().sample(sample_shape).unsqueeze(-1)
TypeError: <lambda>() takes 1 positional argument but 2 were given
```

Could you tell me how you tackled it in the end? Thanks!
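Regarding the "depth normalized to [0, 1]" point discussed in this thread, the normalization can be sketched as below. This is a hypothetical illustration, not habitat code; the 0.0 m / 10.0 m bounds are assumptions about the default depth-sensor range and should be verified against the config.

```python
import numpy as np

def normalize_depth(depth_m, min_depth=0.0, max_depth=10.0):
    # Clip raw metric depth to the sensor range and rescale to [0, 1],
    # as a NORMALIZE_DEPTH-style sensor would. The bounds are assumptions.
    d = np.clip(np.asarray(depth_m, dtype=np.float32), min_depth, max_depth)
    return (d - min_depth) / (max_depth - min_depth)
```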