Thanks for checking out the package, @nickvaras. You've shared some interesting ideas here.
If you don't mind me asking, could you share a few more concrete examples of the situations you've described? It may turn out that you can address them sufficiently by tuning some parameters.
So the situation that I'm trying to approach is akin to the following:
"Bobby is carrying a big box between his arms, so that he cannot see his feet. He's walking forward and there's a tripping hazard coming ahead in front of him. He comes to a stop just a step from it. He cannot see the hazard any longer, nor his feet, just the box, however he remembers that the thing is there. He must not forget the hazard is there, or he would fall when he takes another step. In the interest of his dental health, he should never forget the hazard, as long as it's in that critical area where he has no visibility."
My understanding of the STVG is that there is a global decay rate (which can be set to zero), and there's also the possibility of defining a frustum that will accelerate the decay of voxels that happen to be inside its volume.
I'm currently using a VLP16 mounted horizontally on a robot, with a 360° horizontal FOV but only a ±15° vertical FOV. So the hourglass-shaped volume centered directly on the lidar is the area where decay should be frozen, forever. If the robot moves and the voxels are no longer in these cones, decay can resume.
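For concreteness, here's a rough sketch of the geometric test for that blind spot (purely illustrative, none of these names exist in the package): a point relative to the lidar falls in the hourglass when its elevation angle exceeds the ±15° vertical FOV while it is still within the range we care about.

#include <cmath>

// Hypothetical helper, not part of STVL: is a point (expressed relative to
// the lidar origin) inside the vertical-FOV blind spot of a horizontal VLP16?
bool InVerticalFovDeadzone(double dx, double dy, double dz,
                           double max_range,
                           double v_fov_half = 15.0 * M_PI / 180.0)
{
  const double radial = std::sqrt(dx * dx + dy * dy);
  const double range = std::sqrt(radial * radial + dz * dz);
  if (range > max_range)
  {
    return false;  // too far away to be a deadzone concern
  }
  // steeper than the +/-15 degree scan fan -> the lidar cannot see it
  return std::fabs(std::atan2(dz, radial)) > v_fov_half;
}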
I understand only the frustum volume has been enabled so far, and that's fine. However, even within that frustum, could the decay be set to negative or zero?
I would appreciate it enormously, @tasilb, if you could further explain the parameters you mention that would accomplish this.
Thanks!
You bring up a reasonable point here. The first step would be to implement a frustum model for the VLP16 lidar. Right now we only support camera-like frustums, which was one of my calls to action in my ROSCon talk.
I could help you do that and integrate it, but this capability isn't strategic enough to our company for me to have the cycles to do it myself. Essentially, step one is subclassing the existing frustum model, implementing this VLP model, and choosing which to use as a parameter. The abstract class should just need IsInside, SetPosition, SetOrientation, and SetParams methods. This model would look at the point's azimuth, range, and elevation to decide whether it is in the frustum or not (which can be added to the measurement buffer, or we can template the buffer).
Step two is then making a new function to call when iterating over the grid that looks for the inclusion of the point in a virtual sensor model and does work on it. For your example, you'd want to look for points outside the azimuth or elevation limits but within the desired range.
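Something along these lines could be a starting point; the class and member names below are illustrative guesses, not the package's actual API:

#include <cmath>
#include <Eigen/Geometry>
#include <openvdb/openvdb.h>

// Abstract sensor model, roughly as described above (illustrative only).
class Frustum
{
public:
  virtual ~Frustum() = default;
  virtual bool IsInside(const openvdb::Vec3d& pt) = 0;
  virtual void SetPosition(const Eigen::Vector3d& origin) = 0;
  virtual void SetOrientation(const Eigen::Quaterniond& quat) = 0;
};

// A VLP16-style model accepts or rejects points by range and elevation
// (and azimuth, if the lidar is masked to less than 360 degrees).
class VLP16Frustum : public Frustum
{
public:
  bool IsInside(const openvdb::Vec3d& pt) override
  {
    // express the point in the sensor frame
    const Eigen::Vector3d local =
      _orientation.conjugate() * (Eigen::Vector3d(pt[0], pt[1], pt[2]) - _position);
    const double radial = std::hypot(local.x(), local.y());
    const double elevation = std::atan2(local.z(), radial);
    return radial >= _min_range && radial <= _max_range &&
           std::fabs(elevation) <= _half_vertical_fov;
  }
  void SetPosition(const Eigen::Vector3d& origin) override { _position = origin; }
  void SetOrientation(const Eigen::Quaterniond& quat) override
  {
    _orientation = quat.normalized();
  }

private:
  Eigen::Vector3d _position = Eigen::Vector3d::Zero();
  Eigen::Quaterniond _orientation = Eigen::Quaterniond::Identity();
  double _min_range{0.5};                          // placeholder values
  double _max_range{20.0};
  double _half_vertical_fov{15.0 * M_PI / 180.0};  // VLP16 +/-15 degrees
};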
Thanks Steve! First of all, I want to again praise this development. The addition of voxel decay and the performance improvement in this package really make it a massive contribution.
I've been thinking about this a lot lately. Have you ever crawled under a table or similar structure to do something, and when you finished you stood up and bumped your head because you forgot it was there?
But in all seriousness, when something that could potentially lead to a collision goes out of view, it's essential that it is remembered indefinitely. I can achieve that with the original voxel grid, but besides being a huge computational burden, its ray tracing implementation is flawed and fails to clear obstacles properly, leaving ghost voxels all over the place.
I look forward to working on this and accept your offer of help. I do, however, remain unclear as to whether the current implementation can accept negative acceleration. From what I saw in the examples, the suggested way to deal with increasing clutter is to increase the frustum acceleration, but it never exceeds the global decay.
What folks in my situation might need is the opposite, where the decay inside the deadzone is slowed or paused. Anyway, I will pepper you with more questions soon...
Cheers!
when something that could potentially lead to a collision goes out of view, it's essential that it is remembered indefinitely
That breaks the entire paradigm of what this package accomplishes. We should not be remembering things out of view (20+ meters away, etc.); it's a bad use of memory, and those objects are possibly or likely to have moved as part of the dynamic environments that robots are subject to. If they stay still for a very long time, they're usually part of the map itself.
If your robot is crawling under tables, then you should probably have sensors that indicate whether something is above it so it doesn't bump its head on the way back up. This can also be achieved by parameter tuning, so that your global decay isn't so aggressive that you forget things fast enough for this to occur. The default values assume good coverage; if that's not the case, you'll want to increase your global decay time. Sufficient sensor coverage is critical for safe robot navigation, but for research that cannot always be achieved, and in that case increasing the global decay should let you remember enough of your environment that this shouldn't happen.
As to negative acceleration, I'm not sure that phrase makes sense; it implies that the longer you haven't seen something, the longer you leave it in the map. What you've proposed in your descriptions is really having additional frustums you can define with different acceleration or decay parameters, which is a pretty good idea for the cases you have described, or for describing obstacles in sensor deadzones. I'm not sure what you mean by the acceleration cannot exceed the global decay; these are unrelated models and can be tuned as you like.
You'll have to give me a particular example (real robot, real situation) of what you're trying to accomplish, but from what you've said here, your issues are primarily with parameter tuning and potentially a new feature to create virtual frustums with different model parameters.
"when something that potentially lead to collision goes out of view, it's essential that it is remembered indefinitely"
I didn't explain myself clearly. Please interpret the above as "the obstacle has made it into the sensor dead zone". We both agree that far-away voxels are irrelevant; the ability to forget voxels seen a mile away is one of the coolest things about this package.
"Negative acceleration" indeed makes no sense. "Paused Decay" is a better denomination. If you say that a frustum can have local decay of zero (i.e, never remove) even if the global is set to, say, 5 seconds, that's good news, so the only challenge is to define the new geometries...
Yes, the traditional approach in robotics has been to add sensors to cover every angle, but many creatures get by just fine with a limited field of view because they have the ability to remember what's relevant. My robot does not go under tables; that was a very general example (and I can think of heaps of others). But relying on a huge observation persistence to deal with things entering the sensor deadzone is not an approach I'd use, since any moving object would draw impassable walls in the costmap and obliterate navigation.
A better example of what I'm after is this: say you just drove somewhere and begin to park (head-in) close to the curb (assume you don't drive a Model S that warns you with its radar when you are about to hit the curb). As you drive forward, the front of your car obscures the curb, and since you can't see it any longer, you have to rely on your visual memory to avoid hitting it with the bottom of your front bumper while still parking properly within the spot.
My robot in particular has to navigate unstructured industrial environments that may be littered with low obstacles such as empty pallets. Even worse, the robot might need to park for indeterminate periods of time while it completes a task, so increasing observation persistence is not a valid way to avoid forgetting potentially dangerous non-static obstacles.
From what you say, it seems the virtual frustum is the way to accomplish this. I hope I've corrected myself enough to make it clear that this was about what you call the "sensor deadzone" all along, never about out-of-view as in beyond sensor range, etc.
Thanks for taking the time to look at this!
In your scenario, would a person ever be able to push a pallet into your robot's blind spot while it's parked? If so, that sounds like a potential collision that we unfortunately can't help you with.
This situation does make sense. What it sounds like you want is to create a virtual frustum with different parameters (in which you can make the parameter for global decay, for instance, = 0).
I think my comment above on how to implement this is a good starting point. First, you need a frustum model for the VLP16 that you're using so that we can decide what the dead zone is. Second, we need to define a new method for finding regions in this model's deadzone (and the camera frustum deadzone model, which I can take care of) and applying a different set of parameters to them.
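Concretely, the kind of check I mean when iterating could look something like this (a very rough sketch reusing the illustrative Frustum interface from earlier in the thread; none of this is real package code):

#include <vector>
#include <openvdb/openvdb.h>

// Hypothetical per-region decay policy.
struct VirtualFrustumParams
{
  Frustum* model;      // e.g. the hourglass deadzone model
  double decay_time;   // seconds; 0.0 meaning "never decay" inside that region
};

// Decide whether a voxel should expire, given its age and the regions above.
bool ShouldExpire(const openvdb::Vec3d& pt, double age, double global_decay,
                  const std::vector<VirtualFrustumParams>& deadzones)
{
  for (const auto& zone : deadzones)
  {
    if (zone.model->IsInside(pt))
    {
      // inside a virtual frustum: apply its own policy instead of the global one
      return zone.decay_time > 0.0 && age > zone.decay_time;
    }
  }
  return age > global_decay;  // otherwise fall back to the global decay
}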
The only big downside I see to this is that if you get up close to things in the dead zone and they then move, that could cause a collision; and if you get up close to things in the deadzone and get surrounded, then you're totally in trouble, because you're in a stalled position you can't get out of, except by clearing costmaps, which you could have as a recovery. Then you end up back where you started, not seeing the obstacle in your deadzone. But it's an improvement for what you're looking for.
You both have valid points about obstacles moving while already in the blind spot; it's akin to that chair-removal prank... I'm not too concerned about that scenario, because I have 2D safety lidars covering a horizontal plane a few inches above the floor, so unless it's a gerbil trying to get squashed, this is somewhat less likely. My main concern is not forgetting small obstacles that the robot moves close to. I'll look at how this could be implemented based on the suggestions; hopefully it is not too difficult.
Cheers!
If this interests you, I can work on getting the Frustum model to derive from a base class so that you can implement a class for VLP16 as a start
That would be awesome!
That would allow, if I understand correctly, any arbitrary volume to be used (as long as a class definition has been created for it), right?
Because, just to be clear, more than the VLP16 "donut" (for lack of a better word), the volume I'm interested in modelling is the hourglass shape centered around the VLP that defines its blind spot.
yes, if you watch the end of my talk, you'll hear my recommended approach to implementing it
Will watch it again.
Just watched it again. What a great talk, outstanding really. Also, it turns out the first question asked in the Q&A is the same issue I bring up here, so I'm clearly not the only one being kept from adopting this by the risk of obstacles decaying while in the dead zone. For a moment I thought velocity scaling could be what I needed (i.e., if the robot slows down, slow down decay, all the way down to a pause...), but I quickly realized how that would lead to constant accumulation of obstacles until the robot is completely trapped...
Now that you mention it, yes, they did. Generally we try to design robots so that there's not a deadzone in meaningful places (or have other sensors overlap inside each sensor's deadzones) to prevent this from being an issue.
Cool.
I'll go back to looking at the code and try sketching the new model, assuming this as a starting point:
"...subclassing the existing frustum model + implementation of this VLP model, and choosing which to use as a parameter. The abstract class should just need a IsInside SetPosition, SetOrientation SetParams methods. This model would look at the point's azimuth, range, and elevation to decide if in the frustum or not...then making a new function to call when iterating over that looks for the inclusion of the point in a virtual sensor model to do work on."
Will have more questions soon!
@nickvaras See this PR https://github.com/SteveMacenski/spatio_temporal_voxel_layer/pull/93. I just decided to do it when I got home.
It shows how I made the depth camera model a frustum derived from a base class and how I integrated it into the package. Please follow the same steps for the VLP16 model; I lay them out in the PR as well. Once we have that, we'll look at how to do the deadzone modelling.
This ticket actually covers that development https://github.com/SteveMacenski/spatio_temporal_voxel_layer/issues/25 conveniently enough
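For what it's worth, the "choosing which to use as a parameter" part could end up looking something like this hypothetical factory (the class names are borrowed from this thread, but the function and the model_type mapping are just a sketch):

#include <memory>

// Hypothetical factory keyed on a model_type-style parameter.
std::unique_ptr<Frustum> MakeFrustum(const int model_type)
{
  if (model_type == 0)
  {
    return std::make_unique<DepthCameraFrustum>();         // camera-like frustum
  }
  if (model_type == 1)
  {
    return std::make_unique<ThreeDimensionalLidarFrustum>();  // VLP16-style model
  }
  return nullptr;  // unknown model type
}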
Thanks! I'm looking into it.
@nickvaras the branch builds. You can implement the VLP model now
Thank you. Did you find out why it was making move_base crash?
On Mon, Nov 26, 2018, 8:44 PM Steven Macenski wrote:
@nickvaras https://github.com/nickvaras I'm going to fix the frustum models PR now. If you want to branch off of it and work on the VLP model, you can do that and just pull back in my changes. It'll probably take you a few days to finish the model anyhow
Just saw the fixes, never mind. I'll pull the update and resume work on the VLP frustum.
Yeah, apparently iterators don't like it when you mess with their internal structure :)
It still crashes my move_base process, even after the recent fixes. As a check, I switched to kinetic-devel, recompiled, sourced, and it runs fine. But frustum-models is not being friendly with move_base, even with the same parameters, which are:
stvl_obstacle_layer:
  enabled:                  true
  voxel_decay:              0.5   # seconds if linear, e^n if exponential
  decay_model:              0     # 0=linear, 1=exponential, -1=persistent
  voxel_size:               0.05  # meters
  track_unknown_space:      true  # default space is unknown
  observation_persistence:  0.0   # seconds
  max_obstacle_height:      2.0   # meters
  unknown_threshold:        15    # voxel height
  mark_threshold:           0     # voxel height
  update_footprint_enabled: true
  combination_method:       1     # 1=max, 0=override
  obstacle_range:           3.0   # meters
  origin_z:                 0.0   # meters
  publish_voxel_map:        true  # default off
  transform_tolerance:      0.2   # seconds
  mapping_mode:             false # default off, saves map not for navigation
  map_save_duration:        60    # default 60s, how often to autosave
  observation_sources:      point_cloud_mark
  point_cloud_mark:
    data_type:                PointCloud2
    topic:                    /points
    marking:                  true
    clearing:                 false
    obstacle_range:           6.0
    min_obstacle_height:      0.0   # default 0, meters
    max_obstacle_height:      2.0   # default 3, meters
    expected_update_rate:     0.0   # default 0, if not updating at this rate at least, remove from buffer
    observation_persistence:  2.3   # default 0, use all measurements taken during now-value, 0=latest
    inf_is_valid:             false # default false, for laser scans
    clear_after_reading:      true  # default false, clear the buffer after the layer gets readings from it
    voxel_filter:             false
Could it have anything to do with not having a clearing observation source? It's not a problem with kinetic-devel.
BTW, it crashes with [move_base-8] process has died [pid 21359, exit code -11, ...
I found this ROS Answers entry, which is also voxel_grid related; I wonder if there's any relation...
https://answers.ros.org/question/299935/process-has-died-exit-code-11/
Some edited bits from the crash report:
DistroRelease: Ubuntu 16.04
ExecutablePath: /opt/ros/kinetic/lib/move_base/move_base
Signal: 11
Uname: Linux 4.15.0-39-generic x86_64
Stacktrace:
volume_grid::SpatioTemporalVoxelGrid::TemporalClearAndGenerateCostmap(std::vector<volume_grid::frustum_model, std::allocator<volume_grid::frustum_model> >&) () from /home/nickvaras/stvl/devel/lib//libspatio_temporal_voxel_layer.so
volume_grid::SpatioTemporalVoxelGrid::ClearFrustums(std::vector<observation::MeasurementReading, std::allocator<observation::MeasurementReading> > const&) () from /home/nickvaras/stvl/devel/lib//libspatio_temporal_voxel_layer.so
spatio_temporal_voxel_layer::SpatioTemporalVoxelLayer::updateBounds(double, double, double, double*, double*, double*, double*) () from /home/nickvaras/stvl/devel/lib//libspatio_temporal_voxel_layer.so
#4 0x00007f1db08d37f5 in costmap_2d::LayeredCostmap::updateMap(double, double, double) () from /opt/ros/kinetic/lib/libcostmap_2d.so
#5 0x00007f1db08d602c in costmap_2d::Costmap2DROS::updateMap() () from /opt/ros/kinetic/lib/libcostmap_2d.so
#6 0x00007f1db08d7056 in costmap_2d::Costmap2DROS::mapUpdateLoop(double) () from
StacktraceTop:
volume_grid::SpatioTemporalVoxelGrid::TemporalClearAndGenerateCostmap(std::vector<volume_grid::frustum_model, std::allocator<volume_grid::frustum_model> >&) () from /home/nickvaras/stvl/devel/lib//libspatio_temporal_voxel_layer.so
volume_grid::SpatioTemporalVoxelGrid::ClearFrustums(std::vector<observation::MeasurementReading, std::allocator<observation::MeasurementReading> > const&) () from /home/nickvaras/stvl/devel/lib//libspatio_temporal_voxel_layer.so
spatio_temporal_voxel_layer::SpatioTemporalVoxelLayer::updateBounds(double, double, double, double*, double*, double*, double*) () from /home/nickvaras/stvl/devel/lib//libspatio_temporal_voxel_layer.so
costmap_2d::LayeredCostmap::updateMap(double, double, double) () from /opt/ros/kinetic/lib/libcostmap_2d.so
Title: move_base crashed with SIGSEGV in volume_grid::SpatioTemporalVoxelGrid::TemporalClearAndGenerateCostmap()
Pull in my branch changes and it should be fine. I tested it for 12 hours on hardware and another 24 in an accelerated simulation.
Also, I sourced the stvl workspace using source <workspace>/devel/setup.bash --extend, although I cannot see how the environment could have anything to do with it, especially considering that the original kinetic-devel (before the merge with frustum-models) ran fine. Now, after merging #93, kinetic-devel also kills move_base...
I'll look into it later this afternoon. If you have any ideas, I'd be interested in hearing them
Well, I first installed the STVL package with apt-get install, so that should take care of all dependencies like OpenVDB, etc... I'll dig further.
Yeah, but if you were saying that it worked before the frustum refactor, something there might be off. Glancing over it I don't see an issue, and I haven't hit one. The function your stacktrace points to is the same one where it was failing beforehand. Did you try cleaning your workspace, pulling, and building again?
Yes, I even removed the ws directory, created a new ws dir with a different name, cloned the repo again, catkin_make'd it, sourced it as an extension of my robot's workspace, and still the same...
Aha! I uncommented the clearing observation source in the parameters, and now it runs... So I assume it is mandatory? I have it as model_type 0 currently, even though I use a VLP16, so it's not doing anything as far as I remember.
Mhm, OK, I can start looking into that now. Did your tools give you any more information about the crash?
As model 0 it will try to clear as a depth camera. You just need to not set any frustum acceleration, since your depth camera model doesn't exist and you don't want to actually do any accelerated clearing until the new VLP16 model is in place; you just want good old global decay.
No more useful info in the crash report. The reason it was crashing was that there was no clearing observation; now that one is added, but with accel set to 0, it runs fine. I'm trying a few things now with the VLP model and will share progress as it happens (or ask questions if I get stuck). Thanks!
I don't think we require clearing observations, but the global decay wouldn't work without them. Did it crash immediately or over time?
I do see now that it crashes immediately without clearing observations; that's interesting. I'll fix that now.
@nickvaras this resolves https://github.com/SteveMacenski/spatio_temporal_voxel_layer/pull/97
I have never had a use case without clearing readings, but you should be good to go without them now. One cool side effect is that this will now work perfectly fine with global decay without any clearing measurements required.
Alright, so I have a semi-working rough version of the VLP16 model (see here).
While it runs, it works correctly for a horizontally mounted 3D VLP16 with 360 degrees hFOV (translation is done, but there's no rotation step yet). However, I must have some sort of memory leak or something, because after a number of minutes (I've timed over 15) the process crashes. Stack overflow or whatever. The Ubuntu crash report only goes so far as to tell me that move_base died during TemporalClearAndGenerateCostmap(), so I'm not sure at which step inside that. However, because I barely touched that function, it's reasonable to assume that the problem is in my IsInside() function, shown here:
/*****************************************************************************/
bool ThreeDimensionalLidarFrustum::IsInside(const openvdb::Vec3d &pt)
/*****************************************************************************/
{
  // VLP frustum
  float local_x = pt[0] - _position[0];
  float local_y = pt[1] - _position[1];
  // float local_z = pt[2] - _position[2];
  float radial_distance = sqrt((local_x * local_x) + (local_y * local_y));

  // Keep if inside the minimum distance or beyond the maximum range
  if (radial_distance > _max_d || radial_distance < _min_d)
  {
    return false;
  }

  // Keep if outside the vFOV
  if (fabs(atan((pt[2] - _position[2]) / radial_distance)) > _vFOVhalf)
  {
    return false;
  }

  return true;
}
If anyone has any idea why it's crashing, please let me know.
@nickovaras @nickvaras (which witch is which :) ) you're also 1 commit behind mine if you want to pull.
Do you set _position?
(also kill those floats and use doubles)
Try adding prints to see what line it crashes on. I'm going to make a good guess and say it's the atan line.
Many of the variables in your header are defined in the abstract header; you should not duplicate them.
Not related to your crashing issue, but make sure to include enough variables to do the job, particularly: min distance (done), max distance (done), min elevation angle, max elevation angle, min azimuth, and max azimuth.
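In other words, a parameter set roughly like this (field names are just placeholders, not the package's):

// Illustrative parameter set along the lines listed above.
struct ThreeDimensionalLidarParams
{
  double min_distance;    // meters (done)
  double max_distance;    // meters (done)
  double min_elevation;   // radians
  double max_elevation;   // radians
  double min_azimuth;     // radians, for lidars masked to less than 360 degrees
  double max_azimuth;     // radians
};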
(also kill those floats and use doubles)
This makes sense, will try this right away. Will also keep the other suggestions in mind.
@nickovaras @nickvaras (which witch is which :) )
Both me, need to stop procrastinating and consolidate accounts...
you're also 1 commit behind mine if you want to pull.
Pulled to freshest state. Not crashing so far. Been a few hours. :+1:
Do you set _position?
I believe it is set in the SetPosition() step:
/*****************************************************************************/
void SpatioTemporalVoxelGrid::ClearFrustums(const \
               std::vector<observation::MeasurementReading>& clearing_readings)
/*****************************************************************************/
{
  {{{{edited}}}}

    frustum->SetPosition(it->_origin);
    frustum->SetOrientation(it->_orientation);
    frustum->TransformModel();
    obs_frustums.emplace_back(frustum, it->_decay_acceleration);
  }
  TemporalClearAndGenerateCostmap(obs_frustums);
  return;
}
many of these variables in your header are defined in the abstract header, you should not duplicate
Please elaborate. Not sure which variables you refer to, but I used the depth camera files as a template.
Not related to your crashing issue, but make sure to include enough variables to do the job
I should add more variables later to make it more general purpose, but for my particular horizontally-mounted-vlp16 scenario, the variables defined so far are sufficient. As mentioned before, while it's running, it performs beautifully.
Please elaborate. Not sure which variables you refer to, but I used the depth camera files as a template.
Oops, you're right. I'll have to fix that. The _position, _orientation, and _valid_frustum variables are set in the abstract header, so you don't need to include them in your implementation of that interface, since you already have them there. You might need to change the permissions from private to protected. Sorry about that, oversight on my part.
I'd also recommend deleting all the VISUALIZE_FRUSTUM and internal stuff unless you're planning on somehow visualizing the frustum, which sounds really hard and unintuitive for the VLP16, so I would just skip it. The spec sheet's pretty clear on these matters.
(Also, you need to update your header guards from LIDAR_FRUSTUM_H_ to 3D_LIDAR_FRUSTUM_ or similar.)
Also, set _valid_frustum to true in TransformModel as a convention. I know your model doesn't require it, but it makes sure that people can follow the same template in the future.
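i.e., something like this (sketch only; there's nothing geometric to precompute for a 360° lidar, so the method just flips the flag):

/*****************************************************************************/
void ThreeDimensionalLidarFrustum::TransformModel(void)
/*****************************************************************************/
{
  // Nothing to transform for this model, but flip the flag so the class
  // follows the same template as the depth camera frustum.
  _valid_frustum = true;
}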
The rest we can cover when you submit the PR
It looks like you implemented the inverted model so that points stay if they're outside the bounds; how's that working for you? Are you having one marking model and then this inverted pseudo-clearing model (only global decay, but actually just keeping the marked things above), or something else?
I'd like to see the model be the proper VLP16 model, and then we can do what you have as the invert_3_dimensional_lidar model as well to cover your use case. If you use the normal frustum acceleration model for the VLP16, you should see some computational improvements and better environmental representations. You can actually use the frustum acceleration!
I have the skeleton for the deadzone frustum but haven't implemented it yet; I wanted the standard implementation to run first. It's been running for several hours now and still going, so your last commit might have solved something (that, or not using a second monitor; I cannot think of any other change, and that seems unlikely... probably your commit ;) ).
The original plan is to have the global decay very short (0.3 seconds or so) and then have the deadzone frustum freeze decay. However, I'm still having an internal philosophical debate about the desirability of a third, intermediate frustum, as having some ability to remember that, for example, an aisle is blocked with junk is somewhat appealing. I still firmly believe in stopping decay completely inside the deadzone, as I can picture very vividly someone leaving a pallet or some other low obstacle near the robot's charging station; then the robot goes and charges for 3 or 4 hours, and I don't want it to forget, otherwise when it finishes charging it would drive off and potentially collide. Yes, there's also the possibility of the obstacle being moved during that period, but at least we've reduced the risk significantly, as it would now take two antagonizing actions instead of one. I want to see, however, what happens if I set the global decay to, say, 6 hours, and then the VLP frustum clears immediately-ish. That would account for over a full charging period... Maybe, we'll see. So far, looking awesome. The CPU is also not being murdered now, which is nice.
The problem with a 6-hour global decay is that if someone walks by the robot while it's navigating, they'll get frozen in carbonite for a long time... Seems like [30 minutes for global decay, immediate clearing inside the frustum, and no decay in the deadzone] or something like that would work. Now that this seems to work, I have a lot of testing to do. Sorry for the brain dump, and thanks for your help!
Hey Steve, do you have any pointers for the rotation side of the VLP frustum? I'm having no luck getting the OpenVDB point correctly expressed in the VLP coordinate frame; maybe I'm making some wrong assumptions. The translation side works perfectly, but the rotation is completely out of whack in my current implementation, to the point that the clearing frustum rotates in the opposite direction when the robot rotates. My latest attempt looks like:
/*****************************************************************************/
bool ThreeDimensionalLidarFrustum::IsInside(const openvdb::Vec3d &pt)
/*****************************************************************************/
{
  Eigen::Vector3d point_in_global_frame(pt[0], pt[1], pt[2]);
  Eigen::Affine3d T = Eigen::Affine3d::Identity();
  _orientation.normalize();
  T.rotate(_orientation);
  Eigen::Vector3d point_in_vlp_frame = T.rotation() * (point_in_global_frame - _position);
  // ... then the actual checks for whether the point is inside the frustum come here.
  // I also tried this form, which seems more compact but has the exact same outcome:
  //
  // Eigen::Vector3d point_in_vlp_frame = _orientation * ( point_in_global_frame - _position);
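A possible fix, assuming _orientation stores the sensor's orientation in the global frame (as set from the measurement's _orientation in ClearFrustums): expressing a global point in the sensor frame needs the inverse rotation, and applying the rotation directly would make the frustum turn the opposite way, which matches the symptom described above. A minimal sketch:

  Eigen::Vector3d point_in_global_frame(pt[0], pt[1], pt[2]);
  _orientation.normalize();
  // q^-1 * (p_world - t) maps the global point into the VLP frame; using q
  // directly applies the rotation the wrong way around.
  Eigen::Vector3d point_in_vlp_frame =
      _orientation.conjugate() * (point_in_global_frame - _position);
  // ... range / elevation checks on point_in_vlp_frame as before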
Hi there. First of all, I must say how exciting this development is, great stuff!!
I was wondering if the current implementation allows for negative-acceleration areas, i.e., just as the frustum accelerates the clearing of what's inside it, it'd be useful to be able to define a shape where what's inside gets cleared at a different rate, potentially slower than the global setting. This would be greatly helpful for robots with blind spots that could otherwise lead to collisions.
Cheers!