Closed ajay1606 closed 3 months ago
Yes, I have tested it with Spot. The path goes from the current pose to the goal pose; the only past poses in it come from the delay of the computation.
I extracted the next pose in the path with a path-following script that keeps track of already passed poses. I believe the `path_to_pose.py` script does this, but I have not used that one for a long time, so it may not work. I have a position controller; I have used both a PID and an NMPC (https://ieeexplore.ieee.org/abstract/document/9885187). The controller sends velocity commands to Spot (`cmd_vel`) using this Spot ROS driver: https://github.com/heuristicus/spot_ros.
If you get the path in Spot's coordinate frame, I believe it is possible to send the positions directly to Spot and let Boston Dynamics' controller solve the motion, but this has not been tested.
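For illustration only (this is not the PID or NMPC mentioned above, just a bare proportional controller with made-up gains), the position-to-velocity step that such a controller performs could be sketched like this:

```python
import math

def p_controller(robot_xy, robot_yaw, target_xy, k_lin=0.5, k_ang=1.0):
    """Turn a position error into a body-frame (linear, angular)
    velocity command, suitable for a cmd_vel-style interface."""
    dx = target_xy[0] - robot_xy[0]
    dy = target_xy[1] - robot_xy[1]
    heading = math.atan2(dy, dx)
    # heading error wrapped to [-pi, pi]
    err = (heading - robot_yaw + math.pi) % (2 * math.pi) - math.pi
    # slow down when poorly aligned with the target direction
    lin = k_lin * math.hypot(dx, dy) * math.cos(err)
    ang = k_ang * err
    return lin, ang
```

A real controller would add integral/derivative terms (PID) or solve an optimization problem (NMPC), but the input (pose error) and output (velocity command) are the same.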
@grammers Thank you so much for the quick response. I have now tested sending positions directly and am able to control Spot with `geometry_msgs/Pose` data. From the `path_to_pose.py` script I couldn't find the exact pose that needs to be fed to the robot for the next control step, because `/dsp/path` and `/dsp/splinepath` keep accumulating all the poses in the robot trajectory. Is there any topic that publishes only the next control-step pose?
In the following lines, the pose at the first index is always chosen; would you please clarify this once? Is this the most recent pose? https://github.com/LTU-RAI/Dsp/blob/4ee9da61b599a7715edeece21f4acab688927488/src/dsp/path_to_pose.py#L35-L39
The single pose is the output of the `path_to_pose.py` script. `/dsp/path` and the splined path are not an accumulation. Each is a path, a list of points leading from the current position (robot odometry) to the goal point, so following the points in the list one after the other will lead to the goal. Due to the update time, the robot can traverse parts of the path before it is updated, which makes it look as if old poses are accumulating.
I will take a closer look at the path-to-point script on Monday.
@grammers Thank you so much. So each time, `/dsp/path` and the splined path are updated with the current position of the robot and the desired goal.
Also, the poses retrieved from the trajectory in `path_to_pose.py` don't include orientation data. Even the velocity published from `path_to_pose.py` shows only position data; the orientation data is zero.
Happy weekend! I appreciate your response next week.
`path_to_pose.py` is old and poorly written, but it takes the path from `/dsp/path` (can be changed to `/dsp/splinepath`). Based on the robot position (`odometry/imu`) it finds the next not-yet-visited point (within a radius of `WP_SIZE`) in the path. The next point to visit is published on the topic `/shafter3d/reference` (in the code the publisher is called `vel_publisher`, only to confuse; no velocity is published). `/shafter3d/reference` should be subscribed to by a position controller.
The lines you are referring to reset the index counter when a new path is received.
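Based on the description above, the point-selection loop can be sketched roughly like this (a hypothetical re-implementation, not the actual `path_to_pose.py` code; the name `wp_size` follows `WP_SIZE` from the text):

```python
import math

def next_waypoint(path, robot_pos, start_idx, wp_size):
    """Return (point, index) of the first path point farther than
    wp_size from the robot, starting the search at start_idx.
    Assumes a non-empty path; the last point (the goal) is returned
    once everything before it has been visited."""
    idx = start_idx
    while idx < len(path) - 1:
        dx = path[idx][0] - robot_pos[0]
        dy = path[idx][1] - robot_pos[1]
        dz = path[idx][2] - robot_pos[2]
        if math.sqrt(dx * dx + dy * dy + dz * dz) > wp_size:
            break  # first point outside the "already visited" radius
        idx += 1  # point reached, advance to the next one
    return path[idx], idx

# On every new path message the caller resets start_idx to 0,
# which is what the referenced lines do.
```

Keeping `start_idx` between calls is what prevents the follower from jumping back to points the robot has already passed.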
Yes, it is only position data without any orientation. The following is a C++ example of how an orientation can be set from the current position (`odom`) toward the next waypoint (`point`): yaw and pitch point along the direction vector, roll is set to zero, and the quaternion is built from the ZYX (yaw-pitch-roll) Euler angles.

```cpp
// Direction vector from the robot (odom) to the next waypoint (point).
double x = point.x - odom.pose.position.x;
double y = point.y - odom.pose.position.y;
double z = point.z - odom.pose.position.z;
double d = sqrt(x * x + y * y + z * z);

double yaw = atan2(y, x);
double pitch = -asin(z / d); // REP-103: positive pitch tilts the nose down
double roll = 0.0;           // no roll needed when tracking a waypoint

double c1 = cos(yaw / 2);
double c2 = cos(pitch / 2);
double c3 = cos(roll / 2);
double s1 = sin(yaw / 2);
double s2 = sin(pitch / 2);
double s3 = sin(roll / 2);

// ZYX Euler angles to quaternion
geometry_msgs::Quaternion quat;
quat.w = c1 * c2 * c3 + s1 * s2 * s3;
quat.x = c1 * c2 * s3 - s1 * s2 * c3;
quat.y = c1 * s2 * c3 + s1 * c2 * s3;
quat.z = s1 * c2 * c3 - c1 * s2 * s3;
```
Hope this clarifies!
@grammers Thank you so much for your response with code. I really, greatly appreciate it. I will test and come back to you.
@grammers After a few workarounds I am able to navigate the robot on the path generated by this planning algorithm. But strangely, the trajectory always takes sudden turns, even though there is straight free space.
Herewith I am sharing a video depicting the path following; the robot keeps struggling to move forward.
https://github.com/LTU-RAI/Dsp/assets/29722843/f0765a78-d6dd-4727-8400-70aab0d6d764
Would you please give some tips to overcome this?
Glad to hear that you got it to work. It is hard to tell with certainty why the path is snaking. My best guess is that it has to do with the risk layer and how it is applied when the map is real (imperfect). The other option is that it is some residue from updating the map, but to say that I would have to test with a bag of the experiment, using the `dsp/occupancy_map` topic and modifying the bottom of `dsp.cpp` to visualize the risk layers.
@grammers Thank you so much for your response.
Sometimes the trajectory shows variation in pitch where I am expecting variation only in yaw. So should I consider reducing the value of the variable `risk_`? It seems it is already at the minimum (currently it is `2`), I guess.
Case: where the trajectory moves below the voxels.
Case: where the trajectory is snaking.
I would greatly appreciate it if you could check my bag file here.
Topics: Odometry: `/robot/odom`, LiDAR: `/points_raw`
Very much thankful for your assistance.
Because your hallway is 2 voxels wide in many places, and the risk layer is set to 2 voxels wide, it is working poorly. In trying to find cheaper paths, DSP can exploit imperfections in the map and go through the floor.
Solution: lower the voxel size so there is at least one free voxel in the model not affected by the risk; with risk `2`, a width of 5 voxels is needed. This will be more taxing on memory and also on computation power.
Lowering the risk is possible, but you lose the benefit of DSP when doing so. `1` is OK, but due to how paths in corners are calculated you will risk collisions there. It is possible to run with risk = `0`, but then it is (almost) the same as running a normal D* algorithm.
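The width requirement follows from risk being inflated from each wall plus one voxel that must stay risk-free in between; a tiny helper (the function name is mine, not from DSP) makes the arithmetic explicit:

```python
def min_corridor_voxels(risk):
    """Minimum corridor width, in voxels, that leaves at least one
    risk-free voxel: risk voxels inflated from each of the two walls,
    plus one free voxel in the middle."""
    return 2 * risk + 1
```

So risk 2 needs a 5-voxel-wide corridor, risk 1 needs 3, and risk 0 degenerates to plain D*-style planning with no margin at all.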
The snaking comes from the splining of the path. I have not optimized it, so I can't explain why it is doing that. My guess is that it tries to make a "smooth" path (no sharp corners) while avoiding cutting corners; with sharp corners at this resolution, it looks like it snakes out of the corner.
@grammers Thank you so much. As you suggested, I tried with risk = `0` and observed that the trajectory is much better than before, and there were not many snaking issues.
BTW, is there any fix for this? Every first step starts like this, with a small turn. I don't understand it; because of this, the initial robot movement starts very strangely.
There can be a random occupied or unknown voxel somewhere on the path, making the right side preferable, and then it randomly decides to move over immediately. But I don't have a great explanation for it.
Risk = 0 can also hit an off-by-one error that only appears with 0. I have not tested much with risk 0, so it is possible.
@grammers Thank you so much. It seems those empty cells are created by the blind-spot region of the LiDAR. Though I am using a pre-constructed .bt map file, the map continuously gets updated with empty cells near the robot. That causes the path to take a turn to find some non-empty cells.
Any suggestions?
Tuning octomap, or running on a pre-built map with no updates. Beyond that, the option is to use a different mapper, but no map types other than octomap are supported by DSP; support can be added, but that requires some work.
@grammers Sometimes the planner takes a path outside the octomap. I have tried setting different `risk` params, but no change; the path goes out of the map. Are there any parameters to tune?
Thanks in advance!
Risk is the parameter that is supposed to solve this, but because the corridor is so narrow it will be filled with risk, and I guess that causes the path through the unknown to be cheaper (for all DSP knows). The proper solution (once again) is a smaller voxel size, so that free space (with no risk) exists inside the corridor. But that is more taxing on RAM and CPU.
You can check the octomap's free space in rviz to verify that the map is correct.
@grammers
Though there is enough free space, I am wondering why the path is generated close to the wall. I have tried a lot of tuning of the octomap params (`resolution`, `maxrange`, and `minrange`), but nothing helped.
Also, the generated trajectory looks like one for an aerial robot; is there any parameter to modify for a ground robot?
Would you please share some tips?
It looks like risk = `0`; putting it to 1 or 2 (or something) should create a safety margin.
For a ground robot, you would want a 2D map (`nav_msgs/OccupancyGrid`) instead of octomap, and set the launch parameter `use_3d = false`. Lately, I have used https://github.com/LTU-RAI/traversability_mapping to get a 2D map to plan paths for ground robots.
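A launch fragment for the ground-robot case might look like the sketch below; only the `use_3d` parameter comes from the discussion, while the package, node, and topic names are placeholders to adapt to your setup:

```xml
<!-- Illustrative fragment: node/package/topic names are placeholders. -->
<launch>
  <node pkg="dsp" type="dsp" name="dsp" output="screen">
    <!-- plan on a 2D nav_msgs/OccupancyGrid instead of an octomap -->
    <param name="use_3d" value="false" />
    <!-- point the map input at your 2D map source -->
    <remap from="map" to="/traversability_map" />
  </node>
</launch>
```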
@grammers Thank you so much, you made my day. For the first time I tested with the SPOT robot, and the path planning is really working well. But it seems real-time processing is quite heavy for the map I am currently using. Another problem was the path update rate: by default the path was updating very frequently (every second), and because of that the Spot controller was taking the same input multiple times. Now the frequency is reduced to 3 s, based on the speed of SPOT, and each update matches nicely.
Also, I tried the 2D map suggested in your previous response, but the map is not building; the 2D grid map accumulates in the same location. It seems my odom topic is not properly configured. I am hoping the 2D map will be much faster at generating paths.
Currently, my odom and lidar inputs are in `base_link`, with tf publishing `world` to `base_link`.
In the launch file there is no way to input odom, and the map is not updating. Would you please share your view on this?
For `setStart` to work automatically based on the odom position, do I need to comment out this line?
https://github.com/LTU-RAI/Dsp/blob/4ee9da61b599a7715edeece21f4acab688927488/src/dsp/dsp.cpp#L500
Thanks in advance.
The update rate is there to reduce the issues you described, where the robot may backtrack or redo the same thing multiple times. Ultimately it is a balance between update rate, robot speed, and path-tracking feed-forward (`WP_SIZE`).
The computationally heavy part is building the planning map, and whenever the map is expanding; this scales with the number of voxels. A 2D map naturally has fewer voxels, so building the planning map is significantly faster, and the same goes for RAM usage. The actual path-generation time is relatively small and probably not significant, but it is also affected by resolution and distance.
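To illustrate why the 2D planning map is so much cheaper, here is a rough voxel-count comparison (the area size, height, and resolution are made-up numbers, not from DSP):

```python
def voxel_count_2d(side_m, res_m):
    """Cells in a square 2D grid of side length side_m."""
    cells = round(side_m / res_m)
    return cells * cells

def voxel_count_3d(side_m, height_m, res_m):
    """Voxels in a 3D volume over the same square footprint."""
    cells = round(side_m / res_m)
    return cells * cells * round(height_m / res_m)
```

With a 50 m x 50 m area at 0.2 m resolution, the 2D grid has 62,500 cells while a 3 m tall 3D volume has 937,500 voxels, 15x more; halving the resolution multiplies the 2D count by 4 but the 3D count by 8.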
The mapper I linked uses an external odom source. The map not building has to do with the config; I use the `offline.launch` script (for online use on the robot, in contrast to the name). Any 2D map will work, even octomap's projected map (but that is trash indoors).
DSP is set to use tf as the start position instead of the odom topic because of message delivery timing that can cause the path to start from an old position, resulting in the robot backtracking. But in older versions the odom topic was used (`/dsp/set_start`).
To use the odom topic and keep the update frequency, this line should be commented out and line 441 uncommented:
https://github.com/LTU-RAI/Dsp/blob/4ee9da61b599a7715edeece21f4acab688927488/src/dsp/dsp.cpp#L525
You may need to add a check for whether `start_pos` has been set.
If you want the path to update at the frequency of odom, it is line 441 that should be uncommented and also line 594 commented out:
https://github.com/LTU-RAI/Dsp/blob/4ee9da61b599a7715edeece21f4acab688927488/src/dsp/dsp.cpp#L594
@grammers With your continuous support, we were able to finish our demonstration successfully. Thank you so much for your kind help.
@grammers Thanks for the nice contributions.
I have been testing some other work, so I couldn't get back to this. Previously you helped me get this planner working with my data. Now I am ready to deploy it on my robot (SPOT). I am able to get the trajectory, as you mentioned, on `/dsp/path` and `/dsp/splinepath`, but this path includes past and current trajectory information. While driving the robot, how do you send the pose information to the robot at each control step?
By the way, have you tested this with SPOT?