ngmor opened this issue 1 year ago
The SDK can be run on-board each Nano, and it works for grabbing the camera feed. However, it must be run locally, and you have to close other processes (MQTT, ROS nodes, etc.) in order to use it.
Rectification helps remove the distortion.
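As a sketch of what rectification does, here is a minimal numpy version of an undistortion map for an idealized equidistant fisheye model. The model and the focal length `f` are assumptions for illustration; real rectification would use the calibrated intrinsics and distortion coefficients, e.g. via `cv2.fisheye.initUndistortRectifyMap`:

```python
import numpy as np

def fisheye_to_rectilinear_map(width, height, f):
    """Build a remap table that undistorts an equidistant-model fisheye
    image into a rectilinear (pinhole) image of the same size.

    For each output (rectilinear) pixel at radius r_out from the center,
    the incoming ray angle is theta = atan(r_out / f); under the
    equidistant fisheye model that ray lands at radius r_in = f * theta
    in the source image.
    """
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    xs, ys = np.meshgrid(np.arange(width) - cx, np.arange(height) - cy)
    r_out = np.hypot(xs, ys)
    theta = np.arctan2(r_out, f)           # ray angle from optical axis
    r_in = f * theta                       # equidistant fisheye radius
    scale = np.divide(r_in, r_out, out=np.ones_like(r_out), where=r_out > 0)
    map_x = (xs * scale + cx).astype(np.float32)  # source x per output pixel
    map_y = (ys * scale + cy).astype(np.float32)  # source y per output pixel
    return map_x, map_y
```

The maps would then be applied per-frame with `cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)`; computing them once and reusing them keeps the per-frame cost low on the Nano.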
The SDK supposedly has getter/putter functionality that allows sending camera data over UDP. I tried to get this working and was unsuccessful.
The biggest question here is how to handle the camera data. Processing it locally on each Nano probably makes sense, but there has to be some way to communicate the results of that processing out to the ROS nodes that will be controlling the Go1.
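One lightweight option (a sketch, not a tested design) is to process frames on the Nano and send only the small results, e.g. a detected obstacle position, to the control computer over UDP, where a ROS node republishes them as a topic. The message format and port here are made up for illustration; only the Python standard library is used:

```python
import json
import socket

def send_result(sock, addr, result):
    """Serialize a small result dict as JSON and send it in one datagram."""
    sock.sendto(json.dumps(result).encode("utf-8"), addr)

def recv_result(sock):
    """Receive one datagram and decode the JSON payload back to a dict."""
    data, _ = sock.recvfrom(4096)
    return json.loads(data.decode("utf-8"))

if __name__ == "__main__":
    # Loopback demo with an ephemeral port; on the robot the receiver
    # would bind on the control computer and the Nano would send to it.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_result(tx, rx.getsockname(), {"obstacle_x": 1.2, "obstacle_y": -0.4})
    print(recv_result(rx))
    tx.close()
    rx.close()
```

Sending small JSON results instead of raw frames avoids the bandwidth and process-contention problems noted above, at the cost of fixing the processing pipeline on the Nano side.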
Options:
- Can we get direct camera feeds programmatically, for processing with OpenCV or other libraries?
- Fisheye view: is there some way to process this so it's not distorted?
- Aligned depth?
- Can this be used for SLAM?
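On the aligned-depth and SLAM questions: if a depth image aligned to the color frame is available, it can be back-projected into a 3-D point cloud with the pinhole intrinsics, which is the form most SLAM front-ends consume. A minimal numpy sketch, with intrinsics (`fx`, `fy`, `cx`, `cy`) assumed to come from calibration:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an aligned depth image (meters) into a 3-D point
    cloud in the camera frame using pinhole intrinsics. Because the
    depth is aligned to the color image, the same (u, v) indexes the
    matching color pixel."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (us - cx) * z / fx   # horizontal offset scaled by depth
    y = (vs - cy) * z / fy   # vertical offset scaled by depth
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)
```

Whether this is usable for SLAM then mostly depends on whether the SDK actually exposes the aligned depth stream at a usable rate, which is still an open question above.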