As I understand your paper and code, with the Exploration_Environment, at each time step I can get the observation (the image from simulation), the global ground-truth map (480x480 pixels), and the local pose of the robot. Thereby, I can use this information to provide a goal (a pixel on the map) to feed to the local policy for navigation.
I think I'll modify exploration_env.py so that it returns self.explorable_map and self.curr_loc_gt at each time step.
Is it possible for me to do so? Thank you for your consideration.
Yes, that's correct. You can pass the ground-truth map and pose as part of the info dictionary returned by the Environment class in the exploration_env.py file.
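For illustration, here is a minimal sketch of that change. The class skeleton and placeholder step logic below are hypothetical stand-ins, not the actual repo code; only the attribute names (self.explorable_map, self.curr_loc_gt) and the info-dictionary pattern come from the discussion above.

```python
import numpy as np

class Exploration_Env:
    """Hypothetical skeleton of the Environment class in exploration_env.py;
    only the info-dict change at the end of step() is the point here."""

    def __init__(self):
        self.explorable_map = np.zeros((480, 480))  # global ground-truth map
        self.curr_loc_gt = (240.0, 240.0, 0.0)      # ground-truth pose (x, y, theta)
        self.info = {}

    def step(self, action):
        state, rew, done = None, 0.0, False  # placeholder for the existing step logic

        # Expose the ground truth alongside the existing info fields
        self.info['explorable_map'] = self.explorable_map
        self.info['curr_loc_gt'] = self.curr_loc_gt
        return state, rew, done, self.info


# Downstream, the caller can then read both values at every time step:
env = Exploration_Env()
_, _, _, info = env.step(action=0)
gt_map, gt_pose = info['explorable_map'], info['curr_loc_gt']
```

Returning them through the info dictionary keeps the observation space unchanged, so the rest of the training loop and the local policy interface need no modification.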