Closed GongYanfu closed 2 years ago
I have updated the readme.
ok, thank you. I will try it.
Now my robot can move, but I'm still puzzled: is the map_collector package in the ros folder used for collecting images in a real environment? How should we use it? Can you add steps for using it? Thank you.
map_collector should collect a grid-world map automatically, but it relies on fairly precise location information. On my robot, the odometry was not precise enough, and I had to reset the turtlebot often to keep enough precision.
I'm really sorry, but my current situation is pressing. I need to know how you got the dataset in the real world. Do you still have the code for this part, especially for the grid and positions in the dataset? I'd love to know where they come from. If the map_collector code alone is enough, can you tell me how to use it?
I don't remember exactly, but I think I was using the following file: https://github.com/jkulhanek/robot-visual-navigation/blob/master/ros/src/map_collector/src/controller.py and manually running it with different coordinates. I think I generated the list of coordinates I wanted to collect and then just executed this file.
Can you tell us how to use the map_collector package? Is it used alone, or does it need to be used with other code? Can you give us some hints? Thanks. I have read all your code in the ros folder, but I don't know how you get the grid and positions, as well as the augmented_images and augmented_depths. I think there is some code for building the dataset that you haven't shown us.
I am sorry I omitted some scripts needed to build the hdf5 dataset. I have added the scripts now. The map_collector package should generate an info.txt file as well as images and depths folders. Then you should run the https://github.com/jkulhanek/robot-visual-navigation/blob/master/scripts/build_grid_dataset.py and https://github.com/jkulhanek/robot-visual-navigation/blob/master/scripts/compile_grid_dataset.py scripts to get the final dataset.
I consulted my colleague and we came up with the following instructions on how to collect the dataset:
Activate local environment with all python dependencies installed (follow README in the root of this repository).
Copy this repository to turtlebot.
Run the following in both your local repository and the turtlebot's copy of the repository
cd ros
catkin_make
source devel/setup.bash
Run the following command (in the background); it starts the service that collects the images and stores them to /mnt/data/dataset. You can change this path in https://github.com/jkulhanek/robot-visual-navigation/blob/e2317b9b23f8c1f655770259f2e52dbf97db691d/ros/src/map_collector/src/collector.py#L7
roslaunch map_collector/launch/start.launch &
Prepare a list of coordinates for the robot to collect. Note that the robot will be collecting them only in the direction of its movement, so we recommend doing a snake-like path through the grid to collect the images from all positions. Each position is specified by the --by {x} {y} argument. Run this command as many times as you want until you collect the entire dataset.
rosrun map_collector controller.py \
    --by 1 2 \
    --by 2 2 \
    --by 3 2 \
    --goal 4 2
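The snake-like path recommended above can also be generated programmatically instead of typed by hand. Below is a minimal sketch; the snake_path helper and the grid bounds are hypothetical illustrations, not part of the repository:

```python
def snake_path(x_min, x_max, y_min, y_max):
    """Generate grid coordinates in a snake-like (boustrophedon) order.

    Visiting the grid row by row, alternating direction on each row,
    keeps every step aligned with the robot's direction of movement.
    """
    points = []
    for i, y in enumerate(range(y_min, y_max + 1)):
        xs = list(range(x_min, x_max + 1))
        if i % 2 == 1:
            xs.reverse()  # reverse every other row
        points.extend((x, y) for x in xs)
    return points


if __name__ == "__main__":
    waypoints = snake_path(1, 4, 2, 3)
    # Format the points as `--by x y` arguments for controller.py;
    # the last point becomes `--goal x y`.
    args = ["--by {} {}".format(x, y) for x, y in waypoints[:-1]]
    args.append("--goal {} {}".format(*waypoints[-1]))
    print("rosrun map_collector controller.py \\\n    " + " \\\n    ".join(args))
```

You can paste the printed command directly into the terminal on the turtlebot.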
Copy the dataset from /mnt/data/dataset to your local computer, activate a Python environment, and run the following commands from the root of this repository to get the final .hdf5 dataset:
python scripts/build_grid_dataset.py {path to the dataset folder}
python scripts/compile_grid_dataset.py {path to the dataset folder}.hdf5
The final dataset should be located at {path to the dataset folder}_compiled.hdf5.
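After compiling, it can help to sanity-check the resulting file. A small generic sketch using h5py follows; note that the actual key names inside the compiled file (images, depths, positions, ...) depend on what compile_grid_dataset.py writes, so this only lists whatever top-level datasets are present:

```python
import h5py


def inspect_dataset(path):
    """Print and return the top-level entries of an hdf5 file and their shapes.

    Groups (entries without a shape) are reported as None. The key names
    depend on the script that produced the file; this is only a generic
    sanity check, not a description of the repository's dataset layout.
    """
    with h5py.File(path, "r") as f:
        shapes = {key: getattr(f[key], "shape", None) for key in f.keys()}
    for key, shape in sorted(shapes.items()):
        print(key, shape)
    return shapes


# inspect_dataset("{path to the dataset folder}_compiled.hdf5")
```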
Please let me know if it works.
Tomorrow I'll try it.
This figure is plotted by visualize.py in map_collector; the red points are the missed ones and the blue points are the robot's actual positions. I have a question about it: should I plot this figure according to my own real environment? Does that mean I need to change the code of the required_points function for this figure?
Yes, you can change it as it suits you.
Can you explain the idea behind generating these points with the required_points function? It's hard for me to understand the steps for getting the images and depths in a real environment.
The reason for generating the points is for you to visualize if the robot is correctly placed on the grid. It has nothing to do with collecting the images. I updated the readme with the steps needed to collect the images.
Are the points obtained through the required_points function critical to the movement of the robot while acquiring the images? When using controller.py to move the robot, should the input position coordinates like "--by x y" be the coordinates of the generated points in the figure?
The figure is there only to guide you to which points you should collect. If you set the figure correctly, then yes, --by x y should be the coordinates of the points. But the figure is not required.
Does it mean that I can also control the robot through the keyboard to reach a certain point in the environment to obtain images in the environment?
You control the robot by running the controller.py script.
Images are collected automatically. You don't need to specify the actual grid points. Just run along the discrete points you want to collect and the scripts will take care of aligning the collected images to the correct grid points.
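The alignment described above can be pictured with a small sketch: each continuously-measured odometry position is snapped to its nearest grid point. The snap_to_grid helper and the grid spacing below are hypothetical illustrations; the real alignment is done by the repository's scripts with their own configuration:

```python
def snap_to_grid(x, y, spacing=1.0):
    """Snap a continuous odometry position (x, y) to the nearest grid point.

    `spacing` is the assumed distance between neighboring grid points.
    """
    return (round(x / spacing) * spacing, round(y / spacing) * spacing)


# e.g. an odometry reading of (1.93, 2.12) lands on grid point (2.0, 2.0)
```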
OK, I understand this now. But what's the main idea behind choosing the discrete points? How should I set them? Is there any rule for choosing them? Did you choose those discrete points according to your office layout? Is the outline of the outermost layer of points similar to your office environment?
There is no rule. Set the points as you wish. I don't know your end goal, so I don't know what data you need to collect. In our case, we set the points according to the office layout.
Now I just want to reproduce the code of your paper first. And then try to do experiments.
How do you install deepmind_lab? I have tried to install it, but didn't succeed.
I believe that after adding the scripts to generate the dataset and updating the readme the issue about the ROS workspace is resolved now. Therefore I am closing this issue. If you still have some problems please open a new issue.
@GongYanfu regarding your dmlab inquiry, please refer to the dmlab docs or open an issue in the appropriate repository.
Hello, I see that the ROS part of your code was committed two years ago, and some of the steps are vague to me, such as the part I circled with a red box in the figure below, especially how to obtain the two weights.pth files. I cannot find the weights.pth of the turtlebot-side model among the downloaded models. Could you please update the README inside the ros folder to be clearer, if possible? Thank you.