jkulhanek / robot-visual-navigation

Visual Navigation in Real-World Indoor Environments Using End-to-End Deep Reinforcement Learning: Official Implementation
MIT License

[question] collect real world dataset for training #21

Closed jianffu closed 1 year ago

jianffu commented 1 year ago

1. Collecting the real-world dataset

There are two approaches to this task:

1) Following the instructions under "Collecting the original dataset for training" in readme.md:

rosrun map_collector controller.py --by 1 2 --by 2 2 --by 3 2 --goal 4 2

The AGV only collects images along the specified path, and only at the forward-facing angle.

Here is the running result:

(screenshot: 2022-09-20 16:00:58)

2) Explore mode in the map_collector ROS folder. When we run the explore function in ros/map_collector/src/main.py, the AGV moves from (0,0) to (1,0), then (2,0), and finally returns to (0,0) the way it came. It can collect pictures at several angles at each position (see the sketch below).

Here is the running result: (screenshot: untitled)
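For illustration, here is a minimal sketch of what a full-grid, multi-angle sweep could look like. It is not the repository's actual explore() code: move_to, rotate_to, and capture are hypothetical stand-ins for the robot interface, and the 20 x 10 grid size is taken from the compiled dataset shape discussed below.

```python
import os

# Hypothetical sketch of a full-grid collection sweep; move_to, rotate_to,
# and capture are NOT the repo's actual API, just illustrative stand-ins.
GRID_W, GRID_H = 20, 10        # grid size taken from the compiled dataset shape
ANGLES = [0, 1, 2, 3]          # the 4 headings (presumably 0/90/180/270 degrees)
OUT_DIR = "/mnt/data/dataset"

def sweep(robot):
    os.makedirs(OUT_DIR, exist_ok=True)
    for x in range(GRID_W):
        # serpentine row order shortens the travel between rows
        ys = range(GRID_H) if x % 2 == 0 else reversed(range(GRID_H))
        for y in ys:
            robot.move_to(x, y)
            for a in ANGLES:
                robot.rotate_to(a)  # turn to one of the 4 headings
                robot.capture(os.path.join(OUT_DIR, f"{x}_{y}_{a}.png"))
```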

2. Training (python train.py turtlebot)

We plan to use the {name}_compiled.hdf5 file from step 1 to train our own model. For the reinforcement-learning agent, each position in the hdf5 dataset must have 4 angles (0, 1, 2, 3 respectively).

In the turtle_room_compiled.hdf5 file, the dimension size 20 x 10 x 4 x 10 means each of the 4 angles can store 10 photos (see the inspection sketch below).

(screenshot: 2022-09-20 14:54:14)
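For reference, a minimal sketch of inspecting the compiled file with h5py to confirm this layout; it only lists whatever datasets the file actually contains, so no key names are assumed:

```python
import h5py

# List every dataset in the compiled file along with its shape; a
# 20 x 10 x 4 x 10 entry would read as (grid_x, grid_y, angle, samples).
with h5py.File("turtle_room_compiled.hdf5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
```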

So, our question is: which approach is correct for training? We think the fixed-path approach 1) does not work for reinforcement learning during training, while the explore method 2), when we tested it, could not cover the whole indoor playground. We need to collect images and information for all the ground positions the AGV can visit. Is this idea correct?

jkulhanek commented 1 year ago
  1. The first method should be preferable for collecting the images, as there is lower error in the odometry and the positions should be more precise.
  2. Yes, you need to collect the entire map for the training (a coverage-path sketch follows below).
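For illustration, a minimal sketch of generating a single controller.py command that covers a whole grid with the --by/--goal argument format from the readme; the grid bounds here are assumptions, not values from the repository:

```python
# Build a serpentine coverage path over an assumed W x H grid and print it
# in the --by/--goal format used by map_collector's controller.py.
GRID_W, GRID_H = 5, 3   # assumed bounds; substitute the room's real grid size

waypoints = []
for x in range(GRID_W):
    ys = range(GRID_H) if x % 2 == 0 else range(GRID_H - 1, -1, -1)
    waypoints.extend((x, y) for y in ys)

*via, goal = waypoints
args = " ".join(f"--by {x} {y}" for x, y in via)
print(f"rosrun map_collector controller.py {args} --goal {goal[0]} {goal[1]}")
```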
jiacheng-yao123 commented 7 months ago

> (quoting the original question above)

When I run the commands to collect the dataset based on the readme:

roslaunch map_collector/launch/start.launch

rosrun map_collector controller.py --by 1 2 --by 2 2 --by 3 2 --goal 4 2

I run into a problem: the robot moves after running them, but no dataset folder is created under the /mnt/data/ folder and no images are collected.

(screenshot: code)
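One quick check (an assumption on my part, since the collector may fail silently when it cannot write its output): verify that /mnt/data/ exists and is writable by the user running the ROS node.

```python
import os

# Diagnostic sketch: the collector presumably writes under /mnt/data/,
# so confirm the directory exists and is writable before digging further.
path = "/mnt/data"
print("exists:  ", os.path.isdir(path))
print("writable:", os.access(path, os.W_OK))
```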