1. Collect a real-world dataset
There are two approaches to this task.
1) Following the instructions under "Collecting the original dataset for training" in readme.md:
rosrun map_collector controller.py --by 1 2 --by 2 2 --by 3 2 --goal 4 2
The AGV only collects images along the preset path, and only at the forward-facing angle.
Here is the running result:
2) Explore mode in the map_collector ROS folder. When we run the explore function in ros/map_collector/src/main.py, the AGV moves from (0,0) to (1,0), then (2,0), and finally comes back to (0,0) the same way. It can collect pictures at several angles at each position (see the sketch below).
Here is the running result:
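To make explicit what we mean by "several angles at each position", here is a rough sketch of the behaviour we observe; move_to and capture are hypothetical placeholder names, not the real ros/map_collector/src/main.py API:

```python
# Hypothetical sketch of the observed explore behaviour (placeholder helper
# names, not the real map_collector API): walk out and back along a short
# path and capture an image at each of the 4 angles (0-3) at every stop.
path = [(0, 0), (1, 0), (2, 0), (1, 0), (0, 0)]   # out and back, as observed

def run_explore(move_to, capture):
    for (x, y) in path:
        move_to(x, y)                 # drive the AGV to the grid cell
        for angle in range(4):        # angles 0, 1, 2, 3
            capture(x, y, angle)      # save one image for this pose
```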
2. Training (python train.py turtlebot)
We plan to use {name}_compiled.hdf5 from step 1 to train our own model as the agent for reinforcement learning. Each position in the hdf5 dataset must have 4 angles (0, 1, 2, 3 respectively).
In the turtle_room_compiled.hdf5 file, the dimensions 20 x 10 x 4 x 10 mean that each of the 4 angles at each grid position can store 10 photos.
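For reference, this is how we peek at the compiled file with h5py; the dataset key name here ("images") is only a guess, so check list(f.keys()) for the real one:

```python
import h5py

# Inspect the compiled dataset layout. "images" is an assumed key name;
# print the actual keys first and adjust accordingly.
with h5py.File("turtle_room_compiled.hdf5", "r") as f:
    print(list(f.keys()))       # actual dataset names in the file
    images = f["images"]        # assumed: shape (20, 10, 4, 10, H, W, C)
    print(images.shape)         # 20 x 10 grid cells, 4 angles, 10 photos each
```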
So, our question is: which approach is correct for training? We think the preset-path way 1) does not work for reinforcement learning during training, while the explore method 2), when we tested it, could not cover the whole indoor playground. We need to collect the images and information for every ground position the AGV can visit. Is this idea correct?
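If it helps, this is the kind of coverage check we have in mind (a hedged sketch, assuming the layout above and that empty slots in the compiled file are stored as zeros):

```python
import h5py

# Count grid cells that have at least one stored photo for every one of the
# 4 angles, assuming empty slots in the compiled file are all-zero arrays.
with h5py.File("turtle_room_compiled.hdf5", "r") as f:
    images = f["images"][:]     # assumed key, shape (20, 10, 4, 10, H, W, C)

filled = images.reshape(*images.shape[:4], -1).any(axis=-1)   # (20, 10, 4, 10)
has_angle = filled.any(axis=-1)                               # (20, 10, 4)
covered = has_angle.all(axis=-1)                              # (20, 10)
print(f"positions with all 4 angles: {covered.sum()} / {covered.size}")
```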
When I run the commands to collect the dataset based on the readme:
roslaunch map_collector/launch/start.launch
rosrun map_collector controller.py --by 1 2 --by 2 2 --by 3 2 --goal 4 2
I'm having some problems: the robot moves after running it, but it doesn't create a dataset folder under /mnt/data/, and no images are collected.
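I am not sure whether this is a permissions issue; one quick check is whether /mnt/data/ is actually writable by the user running ROS (a generic sketch, not specific to this repo):

```python
import os

# Sanity check for the output location mentioned in the readme (/mnt/data/):
# make sure the directory exists and is writable by the user running ROS.
out_dir = "/mnt/data"
print("exists:  ", os.path.isdir(out_dir))
print("writable:", os.access(out_dir, os.W_OK))
# If it does not exist, create it before launching the collector:
# os.makedirs(out_dir, exist_ok=True)
```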