Hi, I'm also a fan of Apollo, and I'm glad you could provide a ROS version of the perception module, but I have a few questions:
When we run the demo-2.0.bag provided by your link, no images show up in RViz and it reports "unsupported encoding yuyv". Is something wrong with the video image format, or do we need to convert the image format first? (And if so, how?)
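In case it helps to clarify what I mean by converting: here is a minimal sketch of the kind of conversion I imagine is needed, assuming the bag's images are packed YUYV (YUV 4:2:2) and using BT.601 coefficients. The function name and buffer layout are my own guesses, not anything from your code:

```python
import numpy as np

def yuyv_to_rgb(raw, height, width):
    """Convert a flat YUYV (YUV 4:2:2 packed) byte buffer to an HxWx3 RGB array."""
    # Each pixel pair is stored as 4 bytes: Y0, U, Y1, V
    data = np.frombuffer(raw, dtype=np.uint8).reshape(height, width * 2).astype(np.float32)
    y = data[:, 0::2]                        # one luma sample per pixel
    u = np.repeat(data[:, 1::4], 2, axis=1)  # chroma shared by each pixel pair
    v = np.repeat(data[:, 3::4], 2, axis=1)
    # BT.601 YUV -> RGB
    r = y + 1.402 * (v - 128.0)
    g = y - 0.344136 * (u - 128.0) - 0.714136 * (v - 128.0)
    b = y + 1.772 * (u - 128.0)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)
```

If this is roughly right, I guess the fix would be to republish the converted frames with encoding `rgb8`/`bgr8` so RViz can display them, but please correct me if there is a built-in way.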
To show the clustering results of the LiDAR point clouds, does it need to combine the image data from the camera to work properly (so-called fusion), or can camera perception and LiDAR perception each work separately?
When I delete the build and devel folders and rebuild, some errors still occur during the build, in the files util.cu and util.h, and one package fails to build. Is that normal?
Looking forward to your reply, thanks!
I'm Chinese, so we can also communicate in Chinese if that's easier. Thanks!