danielv012 opened 1 year ago
I found this online, maybe it's helpful: https://www.open3d.org/docs/release/tutorial/geometry/rgbd_image.html
The 360-degree coverage provided by the new ZED cameras could help with this. The ZED provides image and depth maps that are already aligned, so projecting the image onto that depth map should be very straightforward. Aligning the depth map with the lidar scans should then make it easier to project the image onto the lidar depth.
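For reference, here's a minimal sketch along the lines of that Open3D tutorial, assuming an aligned ZED color/depth pair. The file paths, depth scale/truncation, and intrinsic values are placeholders, not our actual calibration:

```python
import open3d as o3d

# Placeholder inputs: an aligned color image and depth map from the ZED.
color = o3d.io.read_image("zed_color.png")
depth = o3d.io.read_image("zed_depth.png")

# Pack the aligned pair into an RGBDImage. depth_scale (depth units per
# meter) and depth_trunc (max range in meters) are guesses here.
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=20.0,
    convert_rgb_to_intensity=False)

# Back-project the RGBD pair into a colored point cloud using placeholder
# pinhole intrinsics (width, height, fx, fy, cx, cy).
intrinsic = o3d.camera.PinholeCameraIntrinsic(1280, 720, 700.0, 700.0, 640.0, 360.0)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
o3d.visualization.draw_geometries([pcd])
```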
There is an unused node (image_projection_node) that works toward this... maybe useful?
This node is located in src/perception/segmentation/image_projection_node.py
@dhomiller @saishravanm @MtGuerenS
It is possible some of our tasks are tangled up and too dependent, but here are my thoughts on why I see projection (this task) as a separate task. There are going to be a lot of tasks where we need to go between camera/image data and the 3D world. Examples:
One thought about implementation: because this will end up getting used by a bunch of different work processes, I think it could make sense to implement this projection as a ROS service. It would work similarly to how looking up a transformation works (possibly it could even get integrated completely into that transform lookup process instead of being its own thing? Not sure about that). But it definitely doesn't make sense for many parts of the stack to repeat the same logic; that's the long-term vision. What you guys can talk about today is maybe using segmentation, and specifically the driveable surface, as the first use case, and building it out just for that. Then it could be further generalized for arbitrary data and developed as a standalone service/node.
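To make the idea concrete, here's a rough sketch of what such a service could look like in rclpy. Everything here is hypothetical: the ProjectToImage srv, the navigator_msgs package, and the service name are invented for illustration, not anything that exists in the repo:

```python
# Hypothetical ProjectToImage.srv, invented for this sketch:
#   sensor_msgs/PointCloud2 cloud   # points to project
#   string camera_frame             # target camera, resolved like a TF lookup
#   ---
#   geometry_msgs/Point[] pixels    # (u, v, depth) for each visible point

import rclpy
from rclpy.node import Node
from navigator_msgs.srv import ProjectToImage  # hypothetical package + srv

class ImageProjectionService(Node):
    def __init__(self):
        super().__init__('image_projection_service')
        self.srv = self.create_service(
            ProjectToImage, 'project_to_image', self.handle_request)

    def handle_request(self, request, response):
        # TODO: look up the cloud->camera transform, apply the camera
        # intrinsics (see the numpy sketch in the status updates below),
        # and fill response.pixels.
        return response

def main():
    rclpy.init()
    rclpy.spin(ImageProjectionService())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```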
Status: Working on creating a node in navigator which can receive camera and LIDAR data. Still in progress, but almost there.
Next Steps: Learn enough numpy to project pointclouds into camera images (see the sketch below); then create a service to do this given any camera/LIDAR source. After that, I'll work with @saishravanm to get their segmentation into the pointcloud.
Projected Completion: Hoping to get the service done by October 5.
Update: Most of the barriers here are gaps in my own knowledge, so progress has been a little slow as I learn.
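For the numpy part, the usual pinhole projection looks roughly like this. K would come from the camera's camera_info and T from a TF lookup; both are assumed known here:

```python
import numpy as np

def project_points(points_lidar: np.ndarray, K: np.ndarray, T: np.ndarray):
    """Project lidar points into the image plane.
    points_lidar: (N, 3) xyz in the lidar frame.
    K: (3, 3) camera intrinsic matrix.
    T: (4, 4) homogeneous lidar->camera transform.
    Returns (M, 2) pixel coords and the (N,) mask of points in front of the camera."""
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])  # (N, 4) homogeneous
    cam = (T @ homo.T).T[:, :3]                        # (N, 3) in camera frame
    in_front = cam[:, 2] > 0.0                         # drop points behind the camera
    uv = (K @ cam[in_front].T).T                       # (M, 3)
    uv = uv[:, :2] / uv[:, 2:3]                        # perspective divide
    return uv, in_front
```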
Status: Created a node which projects LIDAR points onto the camera image, and a testing image which confirms that the points line up.
Next Steps: Classify LIDAR points given a segmentation image (sketch below), then generalize the code to work for any image, and perhaps (?) convert it into a service.
Projected Completion: Not sure. The main task is done; now it's time to get the code working well with the other components.
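The classification step could then just be a pixel lookup in the segmentation image. This sketch reuses project_points from the snippet above and assumes seg is an (H, W) integer label image, like CARLA's semantic segmentation output:

```python
import numpy as np

def label_points(points_lidar, seg, K, T):
    """Assign each lidar point the class id of the pixel it projects to;
    -1 marks points that don't land in the camera view."""
    uv, in_front = project_points(points_lidar, K, T)  # from the sketch above
    px = np.round(uv).astype(int)                      # nearest-pixel lookup
    h, w = seg.shape
    in_view = (px[:, 0] >= 0) & (px[:, 0] < w) & \
              (px[:, 1] >= 0) & (px[:, 1] < h)
    labels = np.full(points_lidar.shape[0], -1)
    visible_idx = np.flatnonzero(in_front)[in_view]    # indices into the original cloud
    labels[visible_idx] = seg[px[in_view, 1], px[in_view, 0]]
    return labels
```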
10/28/2024 Update:
Status: Almost done with a fully-fledged node. I found a semantic segmentation object in CARLA, so I'm making much more progress, since I can now test my node with a sample segmentation image.
Next Steps: Parameterize some hard-coded values (example below), publish to a topic, and make a pull request!
Projected Completion: Hopefully done with this by November 2.
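On parameterizing the hard-coded values, the standard rclpy pattern is declare_parameter with a default so launch files or a params YAML can override it. The names and defaults below are just illustrative, not the node's real config:

```python
from rclpy.node import Node

class ImageProjectionNode(Node):
    def __init__(self):
        super().__init__('image_projection_node')
        # Declare with defaults so launch files / params YAML can override.
        self.declare_parameter('camera_frame', 'zed_left_camera_frame')
        self.declare_parameter('lidar_topic', '/lidar/points')
        self.declare_parameter('max_range', 20.0)
        self.camera_frame = self.get_parameter('camera_frame').value
        self.max_range = self.get_parameter('max_range').value
```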