MissouriMRDT / Autonomy_Simulator_Python

Code used to simulate Rover behavior, for testing the MRDT autonomy stack.
https://marsrover.mst.edu/

Add support for basic LIDAR point clouds #5

Open EliVerbrugge opened 3 years ago

EliVerbrugge commented 3 years ago

The last core thing necessary for vision-system parity is sending LIDAR point clouds from the simulator to Autonomy.

ClayJay3 commented 2 years ago

@EliVerbrugge @declan34 This might have been a less than perfect way to do this, but I think it works pretty well. The sim takes a performance hit with the new LIDAR element (nothing I can do about that), but it's still plenty usable on a high-end computer; I was even able to run it on my laptop. I also added some elements to the world while I was testing obstacle detection; those can be removed if needed.

Also, it's important to note that if you're running this version of the simulator, you must also have the updated versions of the sim_cam_handler.py and zed_handler.py files in the autonomy code. As of now, the only way to get the updated versions is from the Autonomy_Software/feature/obstacle_ignorance branch. In the future, when that branch is merged, everything should just work.

ClayJay3 commented 2 years ago

I would also like to admit I did some tricky stuff to send the point cloud data over the network :). I couldn't send the full point cloud over the network or else everything slowed to a crawl. I had to scale the point cloud values to 0-255, convert the cloud to a 4-channel image, send it, then unpack and rescale it on the other side. The point cloud did lose a little resolution, but it didn't seem to affect anything. Maybe there's a better way to do this, but would it be as fast?
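For reference, here's a minimal sketch of the quantization round trip described above. The helper names are hypothetical and the actual simulator/zed_handler.py code may pack the channels differently; this just illustrates the scale-to-0-255, image-pack, rescale idea:

```python
import numpy as np


def pack_cloud(cloud):
    """Quantize a float32 point cloud of shape (H, W, 4) into a uint8
    4-channel image, returning the per-channel offsets and scales needed
    to undo the mapping on the receiving side. Assumes finite values
    (invalid/NaN points would need masking first)."""
    mins = cloud.min(axis=(0, 1))
    spans = cloud.max(axis=(0, 1)) - mins
    spans[spans == 0] = 1.0  # avoid divide-by-zero on flat channels
    # Map each channel into [0, 255] and truncate to 8 bits.
    img = np.clip((cloud - mins) / spans * 255.0, 0, 255).astype(np.uint8)
    return img, mins, spans


def unpack_cloud(img, mins, spans):
    """Reverse of pack_cloud; the round trip loses up to one
    quantization step (span / 255) of resolution per channel."""
    return img.astype(np.float32) / 255.0 * spans + mins


# Round-trip example: error stays within one 8-bit quantization step.
cloud = (np.random.rand(720, 1280, 4).astype(np.float32) - 0.5) * 20.0
img, mins, spans = pack_cloud(cloud)
restored = unpack_cloud(img, mins, spans)
assert np.all(np.abs(restored - cloud) <= spans / 255.0 + 1e-4)
```

Once packed as a uint8 image, the cloud can be shipped over the network the same way a camera frame is, which is where the speedup comes from.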