reiniscimurs / DRL-robot-navigation

Deep Reinforcement Learning for mobile robot navigation in ROS Gazebo simulator. Using Twin Delayed Deep Deterministic Policy Gradient (TD3) neural network, a robot learns to navigate to a random goal point in a simulated environment while avoiding obstacles.

/scan #70

Closed · hzxBuaa closed 11 months ago

hzxBuaa commented 11 months ago
```python
data = None
while data is None:
    try:
        data = rospy.wait_for_message('/r1/front_laser/scan', LaserScan, timeout=0.5)
    except:
        pass

laser_state = np.array(data.ranges[:])  # <-- this line

v_state = []
v_state[:] = self.velodyne_data[:]
laser_state = [v_state]
```

Hi, your code takes 20 bins of 3D point cloud data (`velodyne_data`) as the state input. I want to change it to a 2D lidar input, but my 2D lidar outputs 720 values. How can I convert those 720 values into 20 groups? How should I choose them?

hzxBuaa commented 11 months ago

Also, how should I modify the code to switch from the 3D `PointCloud2` input to a 2D `LaserScan` while changing as little code as possible? Thank you in advance!

hzxBuaa commented 11 months ago
```python
# Build 20 angular gaps spanning the field of view (each ~3.14/20 rad wide).
self.gaps = [[-1.6, -1.57 + 3.14 / 20]]
for m in range(19):
    self.gaps.append([self.gaps[m][1], self.gaps[m][1] + 3.14 / 20])
self.gaps[-1][-1] += 0.03

# Find the gap that the reading's angle beta falls into and keep the
# closest distance seen in that gap.
for j in range(len(self.gaps)):
    if self.gaps[j][0] <= beta < self.gaps[j][1]:
        self.velodyne_data[j] = min(self.velodyne_data[j], dist)
        break
```

And in this code, what does `self.gaps` represent? Thank you very much!

reiniscimurs commented 11 months ago

I see you are using the melodic branch. Since it is not updated, there are some issues there. For instance, this snippet:

```python
data = None
while data is None:
    try:
        data = rospy.wait_for_message('/r1/front_laser/scan', LaserScan, timeout=0.5)
    except:
        pass

laser_state = np.array(data.ranges[:])
```

is actually never used as the state representation. I do think we detect collisions from it, though.

Please read the tutorial to familiarize yourself with the code. The gaps and the laser selection are explained there: https://medium.com/@reinis_86651/deep-reinforcement-learning-in-mobile-robot-navigation-tutorial-part4-environment-7e4bc672f590

You could use a 2D laser as the source if you want and pass it through the same function, provided it is set up properly.
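
As a rough sketch (not the repo's exact code; the callback name is a placeholder), binning a 2D `LaserScan` through the same gap logic could look something like this:

```python
import numpy as np
from sensor_msgs.msg import LaserScan

def laser_callback(self, msg):
    # Reset the 20 bins to the maximum considered range.
    self.velodyne_data = np.ones(20) * 10
    for i, dist in enumerate(msg.ranges):
        if not np.isfinite(dist):
            continue
        # Angle of this ray in the scanner frame.
        beta = msg.angle_min + i * msg.angle_increment
        # Same gap-binning loop as in the repo; only the source of
        # dist and beta changes.
        for j in range(len(self.gaps)):
            if self.gaps[j][0] <= beta < self.gaps[j][1]:
                self.velodyne_data[j] = min(self.velodyne_data[j], dist)
                break
```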

Also, you can take a look at another repo where two 2D laser scans are used and 720 laser scan values are brought down to 20 representative values: https://github.com/reiniscimurs/GDAE/blob/main/Code/GDAM_env.py

hzxBuaa commented 11 months ago
```python
laser_state = np.array(
    [min(laser_in[0:20]), min(laser_in[21:60]),
     min(laser_in[61:100]), min(laser_in[101:140]),
     min(laser_in[141:180]), min(laser_in[181:220]),
     min(laser_in[221:260]), min(laser_in[260:300]),
     min(laser_in[301:340]), min(laser_in[341:380]),
     min(laser_in[381:420]), min(laser_in[421:460]),
     min(laser_in[461:500]), min(laser_in[501:540]),
     min(laser_in[541:580]), min(laser_in[581:620]),
     min(laser_in[621:660]), min(laser_in[661:700]),
     min(laser_in[701:719])])
laser_state = laser_state / 10
```

Hi, I see the code, but why are the 720 laser scan values brought down to 19 representative values? How should one choose?

reiniscimurs commented 11 months ago

Not sure what you mean by "how should one choose"? You just choose how many gaps you want and bin the data accordingly. The reasoning is explained in the tutorial and in the paper: https://ieeexplore.ieee.org/abstract/document/9645287
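
For example, an even split of 720 readings into 20 bins could be done like this (a minimal sketch; the helper name, the 10 m maximum range, and the normalization are assumptions, not code from either repo):

```python
import numpy as np

def bin_scan(ranges, num_bins=20, max_range=10.0):
    # Reduce a 720-value 2D scan to num_bins values by taking the
    # minimum (closest obstacle) of each equally sized slice.
    scan = np.array(ranges, dtype=np.float32)
    # Replace nan/inf readings with the maximum considered range.
    scan = np.nan_to_num(scan, nan=max_range, posinf=max_range)
    scan = np.clip(scan, 0.0, max_range)
    # 720 / 20 = 36 consecutive readings per bin.
    return scan.reshape(num_bins, -1).min(axis=1) / max_range
```

You would then call something like `state = bin_scan(msg.ranges)` in the scan callback. Taking the minimum per bin keeps the closest obstacle in each angular sector, which is what matters for collision avoidance.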