RussTedrake opened 6 years ago
Point cloud geometry in Anzu is handled by raycasting point clouds into a dense voxel grid, which is converted to a signed distance field for collision and gradient queries. A base class and an OpenMP implementation exist; a CUDA implementation is in progress, and an OpenCL version should be similar and would not be limited to NVIDIA platforms.
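For readers unfamiliar with the pipeline, here is a deliberately brute-force Python sketch of the voxels -> signed distance field -> distance/gradient query step. The function names are hypothetical and the real (OpenMP/CUDA) implementations use efficient distance transforms; this only illustrates the idea:

```python
import numpy as np

def sdf_from_occupancy(occupied, resolution):
    """Brute-force signed distance field from a boolean voxel grid:
    positive outside obstacles, negative inside."""
    occ_cells = np.argwhere(occupied)
    free_cells = np.argwhere(~occupied)
    dist = np.empty(occupied.shape)
    for cell in np.ndindex(occupied.shape):
        if occupied[cell]:
            # Inside: negative distance to the nearest free voxel.
            dist[cell] = -np.min(np.linalg.norm(free_cells - cell, axis=1)) * resolution
        else:
            # Outside: positive distance to the nearest occupied voxel.
            dist[cell] = np.min(np.linalg.norm(occ_cells - cell, axis=1)) * resolution
    return dist

def sdf_gradient(dist, cell, resolution):
    """Central-difference gradient of the field at an interior cell;
    it points away from the nearest obstacle."""
    g = np.zeros(3)
    for axis in range(3):
        lo, hi = list(cell), list(cell)
        lo[axis] -= 1
        hi[axis] += 1
        g[axis] = (dist[tuple(hi)] - dist[tuple(lo)]) / (2 * resolution)
    return g
```

Both the binary check (`dist < 0`) and the gradient query fall out of the same field, which is what makes this representation convenient for planning.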
Marking this one as "I just looked it up again to make sure we're still tracking it". It feels like an important missing feature for completeness. I am not asking that we increase the priority on it now. I'm just guarding against "closed due to inactivity".
@SeanCurtis-TRI -- how would you feel about adding a QueryObject<T>::HasCollisionsWith(PointCloud) or similar? Of course it would need to be properly piped through the geometry state, proximity engine, lions, tigers, and bears. I think it could be immensely useful, even if it just made lots of queries using fcl::Sphered. We could test the fcl codepaths for "point cloud as a degraded mesh" as a later optimization?
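For concreteness, a tiny Python sketch of what such a binary check could amount to under the "each point becomes a small fcl::Sphered" approach. The function name and padding radius are hypothetical, and the real thing would go through SceneGraph/FCL rather than raw numpy:

```python
import numpy as np

def has_collisions_with(points, center, radius, point_radius=0.005):
    """Binary check: does any point, padded out to a small sphere,
    intersect a query sphere? (Stand-in for per-point fcl::Sphered checks.)"""
    dists = np.linalg.norm(points - center, axis=1)
    return bool(np.any(dists <= radius + point_radius))
```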
I have some concerns with such an implementation. I think the behavior of signed distances and gradients would either be unintuitive/borderline useless, or we'd have to make strong assumptions about the nature and structure of the point cloud(s).
A number of concerns:
- When using a point cloud for collisions, about 99.9% of the time you don't want collisions with the point cloud itself; you want collisions with an object whose surface is partially captured by the point cloud.
- Equally often, you want collisions against objects covered by multiple point clouds captured by cameras with different poses, and the collision queries should be consistent from areas covered by one cloud to another.
- Getting useful and correct collision-checking behavior requires more than just "give FCL a point cloud", and doesn't necessarily have a single correct and performant answer for many/all uses.
In comparison, we have been quite successful using point cloud geometry in Anzu via the point cloud(s) -> voxels -> voxel collision checker route, and, following the upstreaming of model directives, that code can now be made public. This approach separates the point cloud(s) -> geometry and geometry -> collision check concerns.
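As a rough illustration of why the point cloud(s) -> voxels separation helps with the multi-camera consistency concern above, here is a hypothetical sketch that fuses several clouds (already expressed in a common world frame) into one occupancy grid, so the collision checker never sees the individual clouds:

```python
import numpy as np

def clouds_to_voxels(clouds, origin, resolution, shape):
    """Fuse point clouds (already in a common world frame) into one boolean
    occupancy grid; collision checks then query the grid, not the clouds."""
    occupied = np.zeros(shape, dtype=bool)
    for cloud in clouds:
        cells = np.floor((cloud - origin) / resolution).astype(int)
        # Drop points outside the grid bounds.
        in_bounds = np.all((cells >= 0) & (cells < np.array(shape)), axis=1)
        occupied[tuple(cells[in_bounds].T)] = True
    return occupied
```

Overlapping observations from different cameras land in the same cell, so queries are consistent regardless of which cloud covered a region.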
@calderpg-tri -- I should have said that I completely agree with you that using voxels/octrees for this is far better in most cases. I would hope we could have HasCollisionWith(Octree) etc. too. I was simply offering this as a first low barrier to entry.
Re: your concerns about signed distances and gradients. HasCollisionWith returns a boolean. That's why I suggested it.
I'm afraid I don't completely grok your concerns as stated. But I think you're just building up the voxel/octree argument? I'm in complete agreement on that.
Would love to have the additional tools open. Let's chat about it soon.
Our experience in Anzu has been that features supported by only one type of collision query are basically useless -- they can't be used in any behavior that requires multiple collision query types (for example, the simplest pick-and-place that uses grasp search, collision-aware IK, and a sampling-based planner requires both binary and gradient checks).
While the basic binary collision check does offer some opportunity for performance optimization over other queries, it's also the least flexible check. It's definitely possible to implement complex manipulation behaviors with only binary checks, but that throws away all of the unique strengths of Drake's optimization tools.
Re: the separation of point cloud(s), geometry, and collision checking -- collision checking against just the points of a point cloud tends to risk finding non-physical "collision free" configurations, and it doesn't provide any useful gradients. Doing better than that requires making assumptions about the point cloud (e.g. points are always in the sensor frame, and the sensor doesn't produce multiple returns per ray). Separating point cloud(s) -> geometry and geometry -> collision check means those assumptions/optimizations don't become part of the collision-checking process.
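To illustrate the "multiple query types from one representation" point: once the geometry lives in its own representation, both a binary check and a gradient query can be layered on top of it. A minimal hypothetical sketch (distance to the nearest obstacle point, with its analytic gradient):

```python
import numpy as np

def distance_and_gradient(query, obstacle_points):
    """Distance from query to the nearest obstacle point, plus the gradient of
    that distance w.r.t. the query (a unit vector pointing away from the obstacle)."""
    diffs = query - obstacle_points
    dists = np.linalg.norm(diffs, axis=1)
    i = int(np.argmin(dists))
    return dists[i], diffs[i] / dists[i]

def in_collision(query, obstacle_points, clearance):
    """Binary check layered on the same representation as the gradient query."""
    dist, _ = distance_and_gradient(query, obstacle_points)
    return bool(dist < clearance)
```

A sampling-based planner can call `in_collision` while a gradient-based IK solver consumes `distance_and_gradient`, both backed by the same geometry.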
From a recent conversation in Slack about visualizing point clouds in drake_visualizer -- it seemed relevant to this topic:
https://drakedevelopers.slack.com/archives/C43KX47A9/p1605201589030200
Example use case: a robot arm moving in an environment containing some known/identified geometry, but also some unmodelled geometry. A standard approach would be to add all of the point cloud points (typically after subtracting out those corresponding to known geometry) to e.g. an octomap, and then allow collision queries against this map when performing collision-free motion planning.
Straw-man API: we provide a method to register a point cloud input port on the SceneGraph. At run time, point clouds are passed in through that port. SceneGraph's implementation then collates the data internally into e.g. an octomap that is registered with FCL, and permits the standard queries.
Note: Octomap also has logic to take in depth camera returns and update the occupancy probabilities via both positive and negative returns (e.g. if a raycast from the camera to the points passes through a voxel, its probability of being occupied is decreased). So we may want a camera pose + depth returns input port instead of (or in addition to) one that accepts a point cloud representation.
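The positive/negative-return update Octomap performs is essentially a log-odds update along each camera ray. A simplified, hypothetical sketch (Octomap uses proper 3D ray traversal and log-odds clamping; this just walks a coarse integer line):

```python
import numpy as np

def update_along_ray(log_odds, origin_cell, hit_cell, l_hit=0.85, l_miss=-0.4):
    """Log-odds occupancy update for one depth return: cells the ray passes
    through become less likely occupied, the endpoint more likely."""
    origin = np.asarray(origin_cell, dtype=float)
    hit = np.asarray(hit_cell, dtype=float)
    n = int(np.max(np.abs(hit - origin)))
    for t in range(n):  # every traversed cell except the endpoint
        cell = tuple(np.round(origin + (hit - origin) * t / n).astype(int))
        log_odds[cell] += l_miss  # negative return: ray passed through
    log_odds[tuple(hit_cell)] += l_hit  # positive return: ray ended here
    return log_odds
```

This is why a camera pose + depth input port carries strictly more information than a bare point cloud: the free-space carving along each ray needs the sensor origin.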