devendrachaplot / Neural-SLAM

Pytorch code for ICLR-20 Paper "Learning to Explore using Active Neural SLAM"
http://www.cs.cmu.edu/~dchaplot/projects/neural-slam.html
MIT License

What kind of coordinates are used by Neural-SLAM and Habitat? #63

Open GuoPingPan opened 1 year ago

GuoPingPan commented 1 year ago

I am confused about the coordinate systems used by Neural-SLAM and Habitat. Can you tell me which coordinate frame the agent uses in Neural-SLAM and how it differs from Habitat's? I would also like to know which coordinate frame your real robot used when collecting the noise data, as that would help me better understand the transformations in your work.

Thanks a lot!

GuoPingPan commented 1 year ago

I understand that the code below converts the agent state from the Habitat world frame (front = -z, left = -x, up = y) into the world frame (x, y, o) used here, where front = x and left = y:

def get_sim_location(self):
    # Habitat's agent position is [x, y, z] with -z forward, -x left, +y up.
    agent_state = super().habitat_env.sim.get_agent_state(0)

    # World frame used here: x is Habitat's -z (forward), y is Habitat's -x (left).
    x = -agent_state.position[2]
    y = -agent_state.position[0]
    # Recover the heading o (rotation about the up axis) from the quaternion's Euler angles.
    axis = quaternion.as_euler_angles(agent_state.rotation)[0]
    if (axis % (2 * np.pi)) < 0.1 or (axis % (2 * np.pi)) > 2 * np.pi - 0.1:
        o = quaternion.as_euler_angles(agent_state.rotation)[1]
    else:
        o = 2 * np.pi - quaternion.as_euler_angles(agent_state.rotation)[1]
    # Keep the heading in (-pi, pi].
    if o > np.pi:
        o -= 2 * np.pi
    return x, y, o
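
As a sanity check on my reading of the angle logic, I wrote the small test below (my own code, not from the repo). heading_from_rotation is just the angle part of get_sim_location, and it recovers theta for a pure yaw about Habitat's up (+y) axis:

import numpy as np
import quaternion

def heading_from_rotation(rotation):
    # Same angle logic as in get_sim_location above.
    axis = quaternion.as_euler_angles(rotation)[0]
    if (axis % (2 * np.pi)) < 0.1 or (axis % (2 * np.pi)) > 2 * np.pi - 0.1:
        o = quaternion.as_euler_angles(rotation)[1]
    else:
        o = 2 * np.pi - quaternion.as_euler_angles(rotation)[1]
    if o > np.pi:
        o -= 2 * np.pi
    return o

for theta in [0.5, 2.0, -0.5, -2.0]:
    # Pure rotation by theta about the +y (up) axis: q = cos(theta/2) + sin(theta/2) * j.
    q = np.quaternion(np.cos(theta / 2), 0.0, np.sin(theta / 2), 0.0)
    assert abs(heading_from_rotation(q) - theta) < 1e-6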

and dx, dy, do = pu.get_rel_pose_change(curr_sim_pose, self.last_sim_location) computes the pose change relative to the previous pose, expressed in the agent's egocentric frame.
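
For context, my understanding is that get_rel_pose_change rotates the world-frame displacement into the frame of the previous pose, roughly like this (a sketch of the idea, not necessarily the exact code behind pu):

import numpy as np

def rel_pose_change_sketch(pos2, pos1):
    # pos1 = previous pose (x1, y1, o1), pos2 = current pose (x2, y2, o2),
    # both in the world frame described above, with headings in radians.
    x1, y1, o1 = pos1
    x2, y2, o2 = pos2

    # World-frame displacement, rotated by -o1 so that dx is the forward
    # motion and dy the leftward motion as seen from the previous pose.
    theta = np.arctan2(y2 - y1, x2 - x1) - o1
    dist = np.hypot(x2 - x1, y2 - y1)
    dx = dist * np.cos(theta)
    dy = dist * np.sin(theta)
    do = o2 - o1
    return dx, dy, do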

But I am quite confused about the function get_new_pose:

def get_new_pose(pose, rel_pose_change):
    # pose = (x, y, o) with o in degrees; rel_pose_change = (dx, dy, do) with do in radians.
    x, y, o = pose
    dx, dy, do = rel_pose_change

    # Rotate the egocentric (dx, dy) back into the map frame.
    global_dx = dx * np.sin(np.deg2rad(o)) + dy * np.cos(np.deg2rad(o))
    global_dy = dx * np.cos(np.deg2rad(o)) - dy * np.sin(np.deg2rad(o))
    x += global_dy
    y += global_dx
    # Accumulate the heading in degrees; fold values above 180 back into range.
    o += np.rad2deg(do)
    if o > 180.:
        o -= 360.

    return x, y, o

Why x += global_dy and y += global_dx? I know that x and y are represented in the full-map frame.
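
To check my understanding, I ran a small round-trip test: computing the relative change with my sketch above and feeding it back through get_new_pose does recover the target pose (note the mixed units: get_new_pose treats o as degrees but do as radians). This is my own test, not code from the repo:

import numpy as np

# Two poses in the world frame, with headings in radians.
pose1 = (2.0, 3.0, np.deg2rad(30.0))
pose2 = (2.5, 3.8, np.deg2rad(75.0))

dx, dy, do = rel_pose_change_sketch(pose2, pose1)

# get_new_pose expects the heading of the input pose in degrees.
x, y, o = get_new_pose((pose1[0], pose1[1], np.rad2deg(pose1[2])), (dx, dy, do))

assert np.allclose([x, y, np.deg2rad(o)], pose2)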

What exactly are the axis directions of all the frames you are using? What is the relation between the full_map frame, full_map = torch.zeros(num_scenes, 4, full_w, full_h).float().to(device) (you use w as the vertical dimension and h as the horizontal one, whereas h is usually vertical and w horizontal), and the agent frame (x, y, o)?
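
For concreteness, here is my current guess at how the (x, y) pose maps into full_map indices; the resolution value, the channel index, and the row/column assignment are my assumptions and are exactly what I would like to confirm:

import torch

num_scenes, full_w, full_h = 1, 480, 480
full_map = torch.zeros(num_scenes, 4, full_w, full_h)

resolution_cm = 5.0          # assumed cm per map cell (args.map_resolution?)
x, y, o = 12.0, 12.0, 0.0    # agent pose in meters / degrees in the map frame

# My guess: y selects the first spatial dimension (full_w) and x the second (full_h).
loc_r = int(y * 100.0 / resolution_cm)
loc_c = int(x * 100.0 / resolution_cm)
full_map[0, 2, loc_r, loc_c] = 1.0   # channel 2 = current agent location channel?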

I have been confused and stuck on this for a few days; please help me understand it. @devendrachaplot