jkulhanek / robot-visual-navigation

Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning (Official Implementation)
MIT License

question about ROS workspace #11

Closed GongYanfu closed 2 years ago

GongYanfu commented 2 years ago

Hello, I see that the ROS part of your code was committed two years ago, and some of the steps are vague to me, such as the part I circled with a red box in the figure below, especially how to obtain the two weights.pth files. I could not find the weights.pth of the turtlebot-end model in the downloaded models. Could you please update the readme.txt inside the ros folder to make it clearer? Thank you. (screenshot attached)

jkulhanek commented 2 years ago

I have updated the readme.

GongYanfu commented 2 years ago

ok, thank you. I will try it.

GongYanfu commented 2 years ago

Now my robot can move, but I still have a question: is the map_collector package in the ros folder used for collecting images in the real environment? How should we use it? Could you add steps for using it? Thank you.

jkulhanek commented 2 years ago

map_collector should collect a grid-world map automatically, but it relies on precise location information. On my robot, the odometry was not precise enough, and I had to reset the turtlebot often to maintain enough precision.

GongYanfu commented 2 years ago

I'm really sorry to press you, but the current situation is urgent for me. I need to know how you collected the dataset in the real world. Do you still have the code for this part, especially for the grid and positions in the dataset? I'd love to know where they come from. If it only takes the map_collector code, can you tell me how to use it?

jkulhanek commented 2 years ago

I don't remember exactly, but I think I was using the following file: https://github.com/jkulhanek/robot-visual-navigation/blob/master/ros/src/map_collector/src/controller.py and manually running it with different coordinates. I believe I generated the list of coordinates I wanted to collect and then just executed this file.

GongYanfu commented 2 years ago

Can you tell us how to use the map_collector package? Is it used alone, or does it need to be used with other code? Can you give us some hints? Thanks. I have read all your code in the ros folder, but I don't know how you got grid and positions, as well as augmented_images and augmented_depths. I think there is some code for building the dataset that you haven't shown us.

jkulhanek commented 2 years ago

I am sorry, I omitted some scripts needed to build the hdf5 dataset. I have added them now. The map_collector package should generate an info.txt file as well as images and depths folders. Then you should run the https://github.com/jkulhanek/robot-visual-navigation/blob/master/scripts/build_grid_dataset.py and https://github.com/jkulhanek/robot-visual-navigation/blob/master/scripts/compile_grid_dataset.py scripts to get the final dataset.

jkulhanek commented 2 years ago

I consulted my colleague and we came up with the following instructions on how to collect the dataset:

  1. Activate a local environment with all Python dependencies installed (follow the README in the root of this repository).

  2. Copy this repository to the turtlebot.

  3. Run the following in both your local repository and the turtlebot's copy of the repository:

    cd ros
    catkin_make
    source devel/setup.bash
  4. Run the following command (in the background) to start the service that collects the images and stores them to /mnt/data/dataset. You can change this path in https://github.com/jkulhanek/robot-visual-navigation/blob/e2317b9b23f8c1f655770259f2e52dbf97db691d/ros/src/map_collector/src/collector.py#L7

    roslaunch map_collector/launch/start.launch &
  5. Prepare a list of coordinates for the robot to collect. Note that the robot will collect images only in the direction of its movement, so we recommend following a snake-like path through the grid to collect images from all positions. Each position is specified by a --by {x} {y} argument. Run this command as many times as needed until you have collected the entire dataset.

    rosrun map_collector controller.py \
    --by 1 2 \
    --by 2 2 \
    --by 3 2 \
    --goal 4 2
  6. Copy the dataset from /mnt/data/dataset to your local computer, activate a Python environment, and run the following commands from the root of this repository to get the final .hdf5 dataset:

    python scripts/build_grid_dataset.py {path to the dataset folder} 
    python scripts/compile_grid_dataset.py {path to the dataset folder}.hdf5

    The final dataset should be located at {path to the dataset folder}_compiled.hdf5.

Please let me know if it works.
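The snake-like path recommended in step 5 can be generated programmatically instead of typed by hand. The sketch below is a hypothetical helper (not part of the repository): it assumes controller.py accepts the --by/--goal arguments shown above and that the grid is a plain rectangle; the grid dimensions are made up for this example.

```python
# Hypothetical helper for generating a snake-like coordinate list for
# controller.py. The --by/--goal argument names come from the thread;
# the rectangular grid shape is an assumption for illustration.

def snake_path(width, height):
    """Visit every cell of a width x height grid, reversing the
    x-direction on every other row so the robot never backtracks."""
    path = []
    for y in range(height):
        xs = range(width) if y % 2 == 0 else range(width - 1, -1, -1)
        for x in xs:
            path.append((x, y))
    return path

def to_controller_args(path):
    """Format a path as --by arguments, with the last point as --goal."""
    args = ["--by {} {}".format(x, y) for x, y in path[:-1]]
    args.append("--goal {} {}".format(path[-1][0], path[-1][1]))
    return args

if __name__ == "__main__":
    # Print a ready-to-paste rosrun command for a 4 x 3 grid.
    lines = ["rosrun map_collector controller.py"] + to_controller_args(snake_path(4, 3))
    print(" \\\n    ".join(lines))
```

Because the robot only collects images in its direction of movement, the row reversal keeps each pass productive rather than driving back empty.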

GongYanfu commented 2 years ago

Tomorrow I'll try it.

GongYanfu commented 2 years ago

This figure was plotted by visualize.py in map_collector; the red points are missing and the blue points are the actual positions of the robot. I have a question about it: should I plot this figure according to my own real environment? Does that mean I need to change the code of the required_points function for this figure? (screenshot attached)

jkulhanek commented 2 years ago

Yes, you can change it as it suits you.

GongYanfu commented 2 years ago

Can you explain the idea behind generating these points with the required_points function? It's hard for me to understand the steps for getting images and depths in the real environment.

jkulhanek commented 2 years ago

The reason for generating the points is for you to visualize if the robot is correctly placed on the grid. It has nothing to do with collecting the images. I updated the readme with the steps needed to collect the images.
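Since required_points only drives the visualization, adapting it to a new room mostly means describing which grid cells exist there. The snippet below is a hypothetical re-implementation (I have not checked the actual signature in visualize.py); it assumes the function returns a list of (x, y) grid coordinates and models the room as a rectangle with optional obstacle cells removed.

```python
# Hypothetical stand-in for required_points from
# ros/src/map_collector/src/visualize.py. The signature, return type,
# and the rectangle-with-obstacles model are all assumptions.

def required_points(width=5, height=4, obstacles=frozenset()):
    """Return the grid cells the robot should visit: every cell of a
    width x height rectangle except those occupied by obstacles."""
    return [(x, y)
            for y in range(height)
            for x in range(width)
            if (x, y) not in obstacles]
```

Plotting these against the robot's actual positions (as the blue points in the figure) then shows at a glance which cells are still missing.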

GongYanfu commented 2 years ago

Are these points obtained through the required_points function critical to the movement of the robot while acquiring images? When using controller.py to move the robot, should the input position coordinates like --by x y be the coordinates of the generated points in the figure?

jkulhanek commented 2 years ago

The figure is there only to guide you to which points you should collect. If you set the figure correctly, then yes, --by x y should be the coordinates of the points. But the figure is not required.

GongYanfu commented 2 years ago

Does that mean I could also control the robot through the keyboard to reach a certain point in the environment and obtain images there?

jkulhanek commented 2 years ago

You control the robot by running the controller.py script.

jkulhanek commented 2 years ago

Images are collected automatically. You don't need to specify the actual grid points. Just drive along the discrete points you want to collect, and the scripts will take care of aligning the collected images to the correct grid points.

GongYanfu commented 2 years ago

OK, I understand this now. But what is the main idea behind choosing the discrete points? How should I set them? Is there any rule for choosing them? Did you pick the discrete points according to your office layout? Is the outline of the outermost layer of points similar to your office environment?

jkulhanek commented 2 years ago

There is no rule. Set the points as you wish. I don't know your end goal, so I don't know what data you need to collect. In our case, we set the points according to the office layout.

GongYanfu commented 2 years ago

Now I just want to reproduce the code from your paper first, and then try to do experiments.

GongYanfu commented 2 years ago

How did you install deepmind_lab? I have tried to install it, but didn't succeed.

jkulhanek commented 2 years ago

I believe that after adding the scripts to generate the dataset and updating the readme, the issue about the ROS workspace is resolved. Therefore I am closing this issue. If you still have problems, please open a new issue.

@GongYanfu regarding your dmlab inquiry, please refer to the dmlab docs or open an issue in the appropriate repository.

GongYanfu commented 1 year ago

Are these points obtained using the required_points function critical to the movement of the robot while acquiring images? When using controller.py to move the robot, do the input position coordinates use the coordinates of the generated points in the figure?
