Closed Severn-Vergil closed 8 years ago
Hi,
I think that since the warning refers to explore_costmap/explore_boundary, this message actually originated from the frontier_exploration node that is run to support the node provided by this package. Take a look at this issue, too: https://github.com/paulbovbel/frontier_exploration/issues/21
You could check to make sure; if this is the case, that node may need to be modified to handle such costmap resizing. Do you also get the same warnings if you use another SLAM method?
The default parameters should be mostly fine for a stock Turtlebot. Of course, it helps if you tune the parameters to match those of your physical system (e.g., laser scanner, robot velocity, etc.). If you have a powerful computer, you could try to increase the horizon or the number of samples a bit for better exploration performance. It's hard to give suggestions that work in general, as exploration performance depends on many things, including the features of the environment.
Thank you, @laurimi! I've tried your simulation setup using gmapping and teb_local_planner, and I find the costmap error still occurs sometimes. But a more serious problem appears: the exploration goal keeps getting stuck at the boundary, as the video shows: https://youtu.be/VQIrX6x1TfM The problem occurs in real-world exploration as well.
Here are my exploration planner parameters:

```yaml
map_frame_id: map
base_frame_id: base_link

goal_execution_time: 10.0
min_traj_length: 0.2
min_reward: 60.0
max_sampling_tries: 30
frontier_distance_threshold: 1.0
wait_between_planner_iterations: false

lin_v_min: 0.2
lin_v_max: 0.5
ang_v_min: -0.2
ang_v_max: 0.2
f_rot_min: -0.02
f_rot_max: 0.02

horizon: 5
discount: 10.0
schedule_a: 4
schedule_b: 3
num_kernels: 5
std_vel: 0.2
std_ang: 0.1
std_fr: 0.02
num_particles: 10
resample_thresh: 0.33

allow_unknown_targets: true
default_ctrl_duration: 1.0

laser_min_angle_deg: -90.0
laser_max_angle_deg: 90.0
laser_angle_step_deg: 5.0
laser_max_dist_m: 4.0
p_false_pos: 0.05
p_false_neg: 0.05
```
I set 'discount' to a high value in order to make the robot explore new areas instead of staying stuck at an obstacle to complete the 'boundary'. Is that right? And I've increased the 'horizon'. Thank you in advance!
Setting discount high will basically do what you say. But as a side effect, it will also make all the estimated rewards very high, so the threshold value min_reward probably needs to be increased as well. I had not actually considered that discount might be set higher than 1.0, but it is certainly possible. Getting stuck at obstacles could also be caused by a faulty configuration of the navigation costmaps.
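To illustrate the effect numerically, here is a minimal sketch (a hypothetical helper, not code from this package) of a trajectory reward computed as a discounted sum. With a discount greater than 1.0 the weights on later steps grow instead of shrinking, which is why the total rewards, and hence the appropriate min_reward threshold, become much larger:

```cpp
#include <vector>

// Hypothetical illustration: trajectory reward as the discounted sum
// total = sum_t gamma^t * r_t. When gamma > 1.0, the per-step weight
// grows with t, amplifying later rewards and inflating the total.
double discountedSum(const std::vector<double>& rewards, double gamma) {
    double total = 0.0;
    double weight = 1.0;  // gamma^0
    for (double r : rewards) {
        total += weight * r;
        weight *= gamma;  // grows when gamma > 1, shrinks when gamma < 1
    }
    return total;
}
```

For example, three unit rewards with discount 10.0 sum to 111.0, while the same rewards with discount 0.5 sum to only 1.75.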
Thank you! I've printed avgRewards and adjusted min_reward carefully, and it works! A new question: since the optimized path is determined by both the map and the observation data (laser scans), is it possible to generate a goal with roughly the inverse orientation of the current robot pose? Sometimes the robot explores into a corner and cannot generate a path to escape from it, and frontier_exploration is triggered. However, a goal generated by frontier_exploration may also get stuck in an inflated obstacle like this; I will ask the author of frontier_exploration about that.
A related issue in frontier_exploration: https://github.com/paulbovbel/frontier_exploration/issues/11
Yes, this may happen with frontier_exploration; you can also attempt to work around it by tuning the parameters of the costmap used by frontier_exploration. I'm not exactly sure what you mean by a goal with inverse orientation, but if you want to modify the goals output by ase_exploration, you either need to change the code directly or write another node that does whatever processing you need before sending the goal on to move_base.
I mean, if a robot is stuck in a corner with unexplored area behind it, can the SMCPlanner generate a goal into the unexplored area like this:
(A rough picture, but I hope it shows my idea clearly :) )
Another question: if I want to 'send goal' via axclient as soon as the exploration task fails, in order to generate a new goal from SMCPlanner or frontier_exploration and keep exploring endlessly, which tool can I use to implement this? Thank you in advance!
Yes, it is possible to generate a goal like that. The reason you are not getting this is probably that you have configured the robot's motion model such that it rarely, if ever, happens. Try, for example, widening the range between ang_v_min and ang_v_max to sample from a wider range of angular velocities.
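As a rough sketch of why widening that range helps (the helper below is hypothetical; only the parameter names ang_v_min, ang_v_max, and std_ang come from the configuration above): if candidate controls draw an angular velocity from a Gaussian and clamp it to the configured limits, then tight limits rule out the sharp turns needed to reach goals behind the robot.

```cpp
#include <algorithm>
#include <random>

// Hypothetical sketch: sample an angular velocity from a Gaussian
// (spread std_ang) and clamp it into [ang_v_min, ang_v_max]. A wider
// interval admits sharper turns, so trajectories that double back on
// the robot's current heading become reachable during sampling.
double sampleAngularVelocity(double mean, double stdAng,
                             double angVMin, double angVMax,
                             std::mt19937& rng) {
    std::normal_distribution<double> n(mean, stdAng);
    return std::clamp(n(rng), angVMin, angVMax);  // respect the limits
}
```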
Thank you, @laurimi, it works! Sorry for the late reply. I'm trying to figure out the trajectory evaluation function from the code, as I'm not able to understand the formulas in the paper. Here are my questions:
The entropy is a measure of uncertainty related to a random variable. The random variables here are the map cell occupancies. They are random variables with two possible values, 0 for a free cell and 1 for an occupied cell. A single real number between 0 and 1 models the probability of a cell being occupied. The array for entropy that you see is just a computationally efficient way to compute the trajectory rewards. ROS uses occupancy grid maps where cell occupancy probability is represented by an 8-bit integer, between 0 and 100. The array is a lookup table for the corresponding entropy value, so the values don't have to be calculated every time.
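The lookup-table idea described above can be sketched as follows (a minimal illustration with hypothetical names, not the package's actual code): precompute the binary entropy for each of the 101 occupancy values a ROS occupancy grid cell can take, so trajectory scoring never has to evaluate logarithms per cell.

```cpp
#include <array>
#include <cmath>

// Hypothetical sketch: precompute H(p) = -p*log(p) - (1-p)*log(1-p)
// for each 8-bit occupancy value v = 0..100, where p = v / 100.
// ROS occupancy grids store cell occupancy probability in this range.
std::array<double, 101> makeEntropyTable() {
    std::array<double, 101> table{};
    for (int v = 0; v <= 100; ++v) {
        const double p = v / 100.0;
        double h = 0.0;  // entropy is 0 at p = 0 and p = 1 (no uncertainty)
        if (p > 0.0 && p < 1.0)
            h = -p * std::log(p) - (1.0 - p) * std::log(1.0 - p);
        table[v] = h;
    }
    return table;
}
```

The table peaks at v = 50 (p = 0.5, maximal uncertainty, entropy log 2) and is zero at the endpoints, matching the intuition that known-free and known-occupied cells carry no exploration reward.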
To modify how trajectories are scored, you would have to write your own version of the trajectory evaluation. Take a look at TrajectoryEvaluator.cpp: there, the reward at each step is the mutual information, i.e., the difference between the prior and posterior entropies. It should be possible to change this to whatever is appropriate in your application.
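A hedged sketch of that per-step reward (hypothetical function names; the real TrajectoryEvaluator works on grid maps rather than flat vectors): sum, over the cells touched by the simulated observation, the drop from prior to posterior entropy.

```cpp
#include <cmath>
#include <vector>

// Binary entropy of a single cell occupancy probability.
static double binaryEntropy(double p) {
    if (p <= 0.0 || p >= 1.0) return 0.0;
    return -p * std::log(p) - (1.0 - p) * std::log(1.0 - p);
}

// Hypothetical sketch of the mutual-information step reward: the sum
// over observed cells of (prior entropy - posterior entropy). Positive
// when the simulated observation reduces uncertainty about the map.
double mutualInformationReward(const std::vector<double>& prior,
                               const std::vector<double>& posterior) {
    double reward = 0.0;
    for (std::size_t i = 0; i < prior.size(); ++i)
        reward += binaryEntropy(prior[i]) - binaryEntropy(posterior[i]);
    return reward;
}
```

Replacing this function body with your own scoring rule is the kind of change meant above.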
Thank you! I've figured out the meaning of entropy: E(p) = -p*log(p) - (1-p)*log(1-p), is that right? TrajectoryEvaluator.cpp uses the map and the current robot pose, but I couldn't find where the scan data is used; I was looking for the code corresponding to Algorithm 3, "Observation sampling for a laser range finder (LRF)", in the paper.
Yes, that is the entropy of a binary random variable. Scan data is simulated based on the current map information and the information in the current map sample. Sampling the scans is the responsibility of the sensor model. For laser scanners, this is done via 2D raytracing; see LaserScanner2D::sampleMeasurement for the details. This is quite close to Algorithm 3 of the paper.
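A simplified sketch of what 2D raytracing over a sampled map looks like (hypothetical and heavily reduced from LaserScanner2D::sampleMeasurement, which also models false positives and negatives via p_false_pos and p_false_neg): march along one beam in sub-cell steps and return the distance to the first occupied cell, or the maximum range if nothing is hit.

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch of tracing a single laser beam through a sampled
// occupancy grid (occupied[row][col], row = y index, col = x index).
// Returns the range to the first occupied cell, or maxDist if the beam
// leaves the map or reaches maxDist without hitting anything.
double traceBeam(const std::vector<std::vector<bool>>& occupied,
                 double x0, double y0, double angle,
                 double maxDist, double cellSize) {
    const double step = cellSize / 2.0;  // sub-cell step for accuracy
    for (double d = 0.0; d <= maxDist; d += step) {
        const int cx = static_cast<int>(
            std::floor((x0 + d * std::cos(angle)) / cellSize));
        const int cy = static_cast<int>(
            std::floor((y0 + d * std::sin(angle)) / cellSize));
        if (cy < 0 || cy >= static_cast<int>(occupied.size()) ||
            cx < 0 || cx >= static_cast<int>(occupied[cy].size()))
            return maxDist;              // beam left the map: no return
        if (occupied[cy][cx]) return d;  // beam hits an occupied cell
    }
    return maxDist;
}
```

One such trace per beam angle, from laser_min_angle_deg to laser_max_angle_deg in steps of laser_angle_step_deg, yields a full simulated scan.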
So the scan is simulated randomly, not using actual data from the Kinect, is it?
By the way, what does 'map_sample' mean as the input of LaserScanner2D::sampleMeasurement? What's the difference between map_sample and map_prior? It seems that in evaluateTrajectory from TrajectoryEvaluator.cpp, map_sample is directly copied from map, as the initialization shows:

```cpp
PlanarGridOccupancyMap map_sample(map.getOrigin(), map.getMetersPerCell());
```
Correct, this node does not use any sensor data at all, only information about the current map.
map_sample is exactly that: a sample drawn from the probability distribution corresponding to the current map information. It is iteratively built up through simulation along a trajectory. Note that only the origin and resolution of map are used to initialize it; otherwise map_sample is empty. map_prior can be used to encode any other prior information not already contained in the current map.
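Drawing such a sample can be sketched as follows (a hypothetical, flattened illustration; the package builds its sample incrementally over grid maps rather than in one pass over a vector): each cell is made occupied with its own Bernoulli probability, so one draw represents one possible world consistent with the current map belief.

```cpp
#include <random>
#include <vector>

// Hypothetical sketch: draw a map sample from per-cell occupancy
// probabilities. Cell i becomes occupied with probability
// occupancyProb[i], i.e. an independent Bernoulli draw per cell.
std::vector<bool> sampleMap(const std::vector<double>& occupancyProb,
                            std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);  // u in [0, 1)
    std::vector<bool> sample(occupancyProb.size());
    for (std::size_t i = 0; i < occupancyProb.size(); ++i)
        sample[i] = u(rng) < occupancyProb[i];  // Bernoulli(p_i) draw
    return sample;
}
```

Cells with probability 0 or 1 always come out free or occupied, respectively; only uncertain cells vary between draws.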
Closing this as the original issue was resolved.
Hi Lauri, I'm trying to use your exploration algorithm with the map generated by the RTABMAP SLAM method on a Turtlebot platform in a real environment. After the Turtlebot explores some area (e.g., 5 square metres), the exploration planner keeps warning like this: Here's the video record: https://www.youtube.com/watch?v=J7WNhbbr0RQ The costmap keeps resizing as the map update problem occurs. I use teb_local_planner instead of the DWA local planner. Could you give me some suggestions on parameter tuning? Thank you in advance! This exploration method is definitely the "smartest", most human-like method I've ever seen; thank you for writing and publishing this algorithm.