jianghaijun007 opened 1 year ago
Because we were interested in creating colored point clouds, we chose to project the image data onto the point cloud. I think a visualization with the opposite projection can easily be realized using, for example, rviz.
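As a rough illustration (not the actual implementation, and assuming a simple pinhole camera without distortion), coloring the point cloud from the image boils down to something like this; all variable names are placeholders:

```python
# Minimal sketch: color LiDAR points by projecting them into a camera image.
# K (3x3 intrinsics) and T_camera_lidar (4x4 extrinsics) are assumed to come
# from the calibration result; names here are hypothetical, not the tool's API.
import numpy as np

def colorize_points(points_lidar, image_rgb, K, T_camera_lidar):
    """Return (points, colors) for the points that project inside the image."""
    # Transform points into the camera frame.
    ones = np.ones((points_lidar.shape[0], 1))
    pts_cam = (T_camera_lidar @ np.hstack([points_lidar, ones]).T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]

    # Pinhole projection: u = fx * x / z + cx, v = fy * y / z + cy.
    uv = (K @ (pts_cam / pts_cam[:, 2:3]).T).T[:, :2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)

    h, w = image_rgb.shape[:2]
    in_image = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    colors = image_rgb[v[in_image], u[in_image]] / 255.0
    return pts_cam[in_image], colors
```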
Because the image-point-cloud alignment algorithm works on intensity data, we convert the images to mono8. That said, I think a minor modification would make it possible to show colored images.
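For example, the mono8 conversion could be done with OpenCV along these lines (file names are placeholders):

```python
# Sketch of the conversion meant above: load an 8-bit color image and convert
# it to 8-bit grayscale (mono8) so it can be compared against LiDAR intensity.
import cv2

image_bgr = cv2.imread("camera_image.png")                 # 8-bit BGR input
image_mono8 = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)  # 8-bit single channel
cv2.imwrite("camera_image_mono8.png", image_mono8)
```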
The Ouster result looks corrupted. Did you enable dynamic point cloud integration?
The environment itself contains rich geometrical features and looks good, but make sure that there are no dynamic objects.
The accumulated point cloud is too sparse to extract features. I recommend recording a longer sequence with more movement to generate a denser point cloud.
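A quick way to sanity-check the density is to look at nearest-neighbor distances, e.g. with Open3D (a rough sketch; the path is only a placeholder):

```python
# Rough density check: if the median nearest-neighbor distance of the
# accumulated cloud is large, it is probably too sparse for feature extraction.
import numpy as np
import open3d as o3d

cloud = o3d.io.read_point_cloud("accumulated_cloud.ply")  # placeholder path
nn_dist = np.asarray(cloud.compute_nearest_neighbor_distance())
print(f"points: {len(cloud.points)}, median NN distance: {np.median(nn_dist):.3f} m")
```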
Hello, thank you very much for your incredible work! After using your tool to calibrate the data you provided and the data we collected ourselves, I have several questions:
1. By adjusting `blend_weight` I can check the overlap between the projected image and the point cloud, but this is not obvious. In many cases, I do not know the quality of the calibration results. Why not project the point cloud onto the image?
2. By adjusting `blend_weight`, we can see that the overlap between the calibrated image projection and the point cloud is very good. This is the result of using the officially provided ouster_ros1 data with the automatic matching method. Even so, I can't tell the quality of the calibration results just by adjusting `blend_weight`. Is my result correct?
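For reference, the kind of overlay I have in mind would look roughly like this: project the point cloud onto the image and draw depth-colored dots, so that misalignment at object boundaries becomes visible (a hedged sketch; all names are placeholders, not the tool's API):

```python
# Sketch: overlay the LiDAR point cloud on the camera image for a visual
# calibration check. K, dist_coeffs, and T_camera_lidar are assumed to come
# from the calibration output.
import cv2
import numpy as np

def overlay_points(image_bgr, points_lidar, K, dist_coeffs, T_camera_lidar):
    R, t = T_camera_lidar[:3, :3], T_camera_lidar[:3, 3]
    pts_cam = points_lidar @ R.T + t
    pts_cam = pts_cam[pts_cam[:, 2] > 0.0]          # keep points in front of camera

    rvec, _ = cv2.Rodrigues(np.eye(3))              # points are already in the camera frame
    uv, _ = cv2.projectPoints(pts_cam, rvec, np.zeros(3), K, dist_coeffs)
    uv = uv.reshape(-1, 2)

    out = image_bgr.copy()
    depth = pts_cam[:, 2]
    for (u, v), d in zip(uv.astype(int), depth):
        if 0 <= u < out.shape[1] and 0 <= v < out.shape[0]:
            c = int(np.clip(255 - d * 10, 0, 255))  # nearer points drawn brighter
            cv2.circle(out, (u, v), 1, (0, c, 255 - c), -1)
    return out
```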