hzxBuaa closed this issue 11 months ago
The code I am referring to is from the ubuntu18.04-melodic branch.
if random_near_obstacle:
    if np.random.uniform(0, 1) > 0.85 and min(state[4:-8]) < 0.6 and count_rand_actions < 1:
        count_rand_actions = np.random.randint(8, 15)
        random_action = np.random.uniform(-1, 1, 2)

    if count_rand_actions > 0:
        count_rand_actions -= 1
        action = random_action
        action[0] = -1
And what does 'min(state[4:-8]) < 0.6' in the above code mean?
min_laser = 2
for i, item in enumerate(laser_state.ranges):
    if min_laser > laser_state.ranges[i]:
        min_laser = laser_state.ranges[i]
I'm sorry, this is the last question: Why is 'min_laser' initialized to 2 here? Which parameter of the robot does that correspond to? Sorry for the trouble.
The double publishing of cmd_vel is probably a leftover mistake from moving the publish call to a new location in the code. The second cmd_vel publish serves no real purpose and can safely be deleted.
And what does 'min(state[4:-8]) < 0.6' in the above code mean?
We only apply the random action if the robot is facing an obstacle directly in front of it. With this code we ignore the laser values at the sides of the robot.
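To make the slicing concrete, here is a minimal sketch. The state layout is an assumption for illustration: 20 binned laser distances followed by 4 non-laser entries (e.g. distance and angle to the goal and the previous action). With that layout, state[4:-8] drops the 4 outermost bins on each side plus the non-laser tail, keeping only the readings roughly in front of the robot.

```python
import numpy as np

# Assumed layout: 20 laser bins + 4 auxiliary entries (goal info, last action)
state = np.array([2.0] * 20 + [1.5, 0.3, 0.2, 0.0])
state[10] = 0.4  # an obstacle 0.4 m directly ahead

front = state[4:-8]                 # the 12 central laser bins
obstacle_ahead = front.min() < 0.6  # True: something closer than 0.6 m in front
```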
Why is 'min_laser' initialized to 2 here? Which parameter of the robot does that correspond to?
If you look at the code, you will see that min_laser is only used to detect whether there is a collision (in the melodic branch). So the initial value just needs to be above the collision distance; there is no robot parameter behind it.
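In other words, the 2 is only a sentinel that any real reading will undercut. A minimal sketch of such a collision check (the 0.35 m threshold is an assumption, not necessarily the repo's value):

```python
COLLISION_DIST = 0.35  # assumed threshold; any initial min above this works

def check_collision(ranges, collision_dist=COLLISION_DIST):
    # min_laser only needs to start above collision_dist (2 in the repo
    # qualifies); the actual minimum over the scan is what matters.
    min_laser = min(ranges)
    return min_laser < collision_dist, min_laser
```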
In general, the melodic branch is deprecated and has not been updated for a while (and will not be updated in the future) so there might be some outdated code remaining. It is probably better to look for answers and better implementation solutions directly in the noetic branch.
OK, thank you very much! I saw that you did not use a 2D LiDAR in your simulation but the 3D 'Velodyne' LiDAR, so I would like to ask: do you also use the 3D 'Velodyne' LiDAR in reality? What is the difference between the two? I think they both measure distance. Is it possible to converge using a 2D LiDAR in the simulation?
Velodyne does not return distance scans like the 2D lasers, but rather a 3D pointcloud. So you need to do post-processing to obtain distance information from the pointcloud. This repo converts 3D pointcloud to distance information and "flattens" it into a 1D vector as an input into the neural network.
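A minimal sketch of that kind of flattening (the sector count, height filter, and max range below are assumptions for illustration, not the repo's exact values): near-ground points are grouped into angular sectors around the robot, and each sector reports the closest obstacle distance.

```python
import numpy as np

def flatten_pointcloud(points, n_sectors=20, max_range=10.0):
    """points: (N, 3) array of x, y, z coordinates in the robot frame."""
    dists = np.full(n_sectors, max_range)
    for x, y, z in points:
        if abs(z) > 0.2:              # assumed height band for obstacles
            continue
        angle = np.arctan2(y, x)      # [-pi, pi], 0 = straight ahead
        sector = int((angle + np.pi) / (2 * np.pi) * n_sectors) % n_sectors
        dists[sector] = min(dists[sector], np.hypot(x, y))
    return dists  # 1D distance vector usable as network input
```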
You can use 2D laser scan or post-processed 3D pointcloud. Both will converge if implemented properly.
OK, thank you very much!
def binning(lower_bound, data, quantity):
    width = round(len(data) / quantity)
    quantity -= 1
    bins = []
    for low in range(lower_bound, lower_bound + quantity * width + 1, width):
        bins.append(min(data[low:low + width]))
    return np.array([bins])
laser_state = binning(0, laser_state, 20)
I would also like to ask: is it possible to use the code shown above, i.e. the code provided in your 18.04 branch, to process 2D lidar data as input? Sorry, my English is poor.
Yes, it should. Calling the function as laser_state = binning(0, laser_state, 20), where laser_state is the list with your distance values, should do the trick and give you 20 binned values.
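As a quick sanity check, the binning function can be exercised with a fake scan (the 360-beam scan below is an assumption about your sensor, not a requirement):

```python
import numpy as np

def binning(lower_bound, data, quantity):
    width = round(len(data) / quantity)
    quantity -= 1
    bins = []
    for low in range(lower_bound, lower_bound + quantity * width + 1, width):
        bins.append(min(data[low:low + width]))
    return np.array([bins])

# A fake 360-beam 2D scan: 5 m everywhere except one close obstacle
scan = [5.0] * 360
scan[100] = 0.7                      # falls in bin 5 (indices 90-107)
laser_state = binning(0, scan, 20)
print(laser_state.shape)             # (1, 20)
print(laser_state[0][5])             # 0.7
```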
OK, thank you! One more thing: because I changed the robot model, I want to change the range of action[0] to [-0.2, 0.8]. Will this affect the code (convergence)?
I do not know what you mean by range of action. You mean linear velocity would change from range [0, 0.5] to [-0.2, 0.8]? While I do not see major issues with changing the linear velocity limits, it is up to you to find if it will converge with the given reward function. Since you are implementing a different robot, you might need to find your own set of parameters that help convergence. Since I do not know how and what you are implementing, I cannot give any estimate if it should work or not.
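If the change is just a remapping of the policy output (assumed here to lie in [-1, 1]) to a new velocity range, it can be done with an affine scaling. The function name and limits below are illustrative, not the repo's code:

```python
V_MIN, V_MAX = -0.2, 0.8  # the linear velocity range asked about

def scale_linear_velocity(a0, v_min=V_MIN, v_max=V_MAX):
    """Affine map of a policy output a0 in [-1, 1] to [v_min, v_max]."""
    return v_min + (a0 + 1.0) * (v_max - v_min) / 2.0

print(scale_linear_velocity(-1.0))  # -0.2
print(scale_linear_velocity(1.0))   # 0.8
```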
OK, thank you very much! Training does not converge after changing the robot model, so I would like to ask which parameters I can adjust to make it converge. Please advise. Thank you in advance!
There generally are no additional parameters besides the usual DRL hyperparameters that you could change. If changing the seed does not help, I would suggest looking through the code for mistakes in the implementation. You will have to share the code to figure that out, as I do not have enough information about your implementation to make a guess.
def binning(lower_bound, data, quantity):
    width = round(len(data) / quantity)
    quantity -= 1
    bins = []
    for low in range(lower_bound, lower_bound + quantity * width + 1, width):
        bins.append(min(data[low:low + width]))
    return np.array([bins])

data = rospy.wait_for_message('/r1/front_laser/scan', LaserScan, timeout=0.5)
laser_state = np.array(data.ranges[:])
laser_state[laser_state == inf] = 10
laser_state[laser_state > 1] = 1
laser_state = binning(0, laser_state, 20)
I changed laser_state from the velodyne-16 to '/r1/front_laser/scan', but it does not converge. Why? Theoretically, it should converge. Thank you in advance!
Please provide code of all the changes you have made. You mention changing the robot model. That might be quite important as well.
laser_state[laser_state > 1] = 1
Why are you essentially clamping every laser reading above 1 meter distance? This will probably make the laser readings not usable for learning.
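One alternative (a sketch under assumptions, not the repo's code): replace missing returns with the sensor's maximum range, cap at that range, and normalize, so relative distances beyond 1 m are preserved instead of being flattened to a constant. The 10 m range is an assumed value.

```python
import numpy as np

MAX_RANGE = 10.0  # assumed usable laser range in meters

def preprocess(ranges):
    r = np.array(ranges, dtype=float)
    r[np.isinf(r)] = MAX_RANGE       # 'inf' means no return: treat as max range
    r = np.clip(r, 0.0, MAX_RANGE)   # cap at the sensor's useful range
    return r / MAX_RANGE             # scale to [0, 1], preserving relative distance
```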
OK, thank you. I will delete it and try training again.
May I ask why cmd_vel is sent twice in the step function? Thank you in advance!