sbanga16k opened 5 years ago
It took us a long while, but we eventually got this to work.
However, we don't have much of a visualizer. We did manage to pipe the data output from the lidar recognition component into a message queue, then have a separate ROS node receive those messages and publish them in RViz, but it was quite a hassle and we are exploring alternatives.
There appears to be a lidar offline visualizer here: apollo/modules/perception/lidar/tools/
It wasn't working initially (I forget why), but we plan to come back and try it again.
@snuffysasa - I appreciate your quick response. Could you elaborate on how you ran the pipeline for the Lidar obstacle perception offline to get the data output?
Wait, I wasn't clear. I have not gotten this file/module to work correctly: apollo/modules/perception/lidar/tools/offline_lidar_obstacle_perception.cc
But I have managed to run the regular perception module with all components disabled except the lidar segmentation component and the lidar recognition component.
Then I can use cyber_recorder play to play back some recorded lidar data; the running perception module will process it, and data will be output on "/perception/inner/PrefusedObjects".
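For example, something like this (the record file name here is just a placeholder):

```bash
# Play back a recorded lidar drive
cyber_recorder play -f /apollo/data/bag/demo_sensor.record

# In another terminal, check that perception is publishing
cyber_channel echo /perception/inner/PrefusedObjects
```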
Is this what you are trying to do?
@snuffysasa - Apologies for my naivety, but which file needs to be modified, and how do I disable the other components? Or does it have to be done in Dreamview?
@sbanga16k
So you will want to launch perception from /modules/perception/production/perception.launch. Examine that file; it will point to one or more DAG files.
Then open those DAG files and you will see a list of components; you can remove what you don't need (see the sketch below).
Each DAG also points to several other config files; you will want to examine each of those, along with the child config files they point to.
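To make that concrete, a DAG trimmed down to just the two lidar components might look roughly like this. This is a sketch from memory, not a verbatim file: class names, config paths, and channel names vary between Apollo versions, so check them against your own tree:

```
# dag file trimmed to lidar segmentation + recognition (illustrative)
module_config {
  module_library : "/apollo/bazel-bin/modules/perception/onboard/component/libperception_component_lidar.so"
  components {
    class_name : "SegmentationComponent"
    config {
      name: "Velodyne128Segmentation"
      config_file_path: "/apollo/modules/perception/production/conf/perception/lidar/velodyne128_segmentation_conf.pb.txt"
      readers {
        channel: "/apollo/sensor/lidar128/compensator/PointCloud2"
      }
    }
  }
  components {
    class_name : "RecognitionComponent"
    config {
      name: "RecognitionComponent"
      config_file_path: "/apollo/modules/perception/production/conf/perception/lidar/recognition_conf.pb.txt"
      readers {
        channel: "/perception/inner/SegmentationObjects"
      }
    }
  }
}
```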
After that, there may have been a few other things that still caused us issues before we were able to run the Lidar module.
Oh, and make sure to disable the HD map ROI if you don't have an HD map.
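In our tree that switch was a field in the lidar segmentation component's pb.txt config. Another sketch from memory (field names may differ in your version, so check the component's proto):

```
# velodyne128_segmentation_conf.pb.txt (illustrative)
sensor_name: "velodyne128"
enable_hdmap: false   # skip the HD-map ROI filter when no map is available
output_channel_name: "/perception/inner/SegmentationObjects"
```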
It's all very confusing at this point; hopefully there will be more documentation soon. If you get this far and are still getting errors, post back, and maybe I'll remember how we fixed them.
@snuffysasa - I tried running the perception.launch file first, without changing any of the components in the referenced DAG files, but ran into these errors in sensor_manager.cc (see attached):

```
sensor_manager.cc:78 [mainboard] Failed to load camera intrinsic.
sensor_manager.cc:92 [mainboard] Failed to add sensor_info: front_6mm
sensor_manager.cc:36 Check failed: this->Init() == true (0 vs. 1)
```
I was trying to debug this but realized I don't have access to the folder apollo/modules/perception/data, which is where the camera intrinsics are stored.
Did you encounter this error? If so, how did you go about resolving it?
@sbanga16k The testdata directory under modules/perception has the files that SensorManager is looking for. You should be able to copy the contents of modules/perception/testdata/camera/lib/traffic_light/preprocessor/data/multi_projection/ to modules/perception/data/ and resolve those errors.
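In shell form, that would be something like:

```bash
mkdir -p /apollo/modules/perception/data
cp -r /apollo/modules/perception/testdata/camera/lib/traffic_light/preprocessor/data/multi_projection/* \
      /apollo/modules/perception/data/
```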
I have turned off the LiDAR component, since it's easier for me to work on the Obstacle Detection component.
Just today, I was able to hook up a cheap Logitech USB camera and get the usb_cam.cc driver to pull images from it and send them to FusionCameraDetectionComponent's message reader in OnReceiveImage.
But I have come across new errors now, so it's a work in progress. I am happy to share those changes in a git branch with you if needed.
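For reference, on my side the driver is just started with the stock camera launch file; the path here is from memory and may differ in your tree:

```bash
# Start the USB camera driver so it publishes image messages for perception
cyber_launch start /apollo/modules/drivers/camera/launch/camera.launch
```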
@aboarya I copied the contents of modules/perception/testdata/camera/lib/traffic_light/preprocessor/data/multi_projection/ to modules/perception/data/ like you said (see attached), but that did nothing to resolve the errors.
I even tried removing the "perception_camera" module from perception.launch to see if it would work without it, but it gave me the same error. I also tried printing out the attributes of the sensors in sensor_manager.cc for debugging, but that did not print anything. I'm at a loss as to which function is invoking the calls that retrieve the camera intrinsics parameters.
@sbanga16k My apologies, I wrote my answer last night from memory. Here is a detailed explanation.
Looking at your original screenshot, the error "Failed to load camera intrinsic." is coming from sensor_manager.cc line 78.
That error is raised because this call:

```cpp
if (!LoadBrownCameraIntrinsic(IntrinsicPath(sensor_info.frame_id),
                              distort_model.get())) {
```

is returning false.
The IntrinsicPath function (sensor_manager.h, line 69) assembles the path of the YAML file as FLAGS_obs_sensor_intrinsic_path + "/" + frame_id + "_intrinsics.yaml".
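For context, the whole helper is a one-liner; paraphrased from memory, so the exact signature may differ by version:

```cpp
// sensor_manager.h, around line 69 (paraphrased)
std::string IntrinsicPath(const std::string& frame_id) {
  return FLAGS_obs_sensor_intrinsic_path + "/" + frame_id + "_intrinsics.yaml";
}
```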
The obs_sensor_intrinsic_path is a flag defined in the perception_common.flag file as follows:

```
--obs_sensor_intrinsic_path=/apollo/modules/perception/data/params
```
In summary, there are three parts to the path of the file:
1. FLAGS_obs_sensor_intrinsic_path, which resolves to /apollo/modules/perception/data/params
2. frame_id, which is equal to the camera_name defined in fusion_camera_detection_component.pb.txt:
   2.1. front_6mm
   2.2. front_12mm
3. the _intrinsics.yaml suffix

Put together, you get:
1. /apollo/modules/perception/data/params/front_6mm_intrinsics.yaml
2. /apollo/modules/perception/data/params/front_12mm_intrinsics.yaml
But wait, you're not done yet! The LoadBrownCameraIntrinsic function in io_util.cc line 58 parses those YAML files AND WILL throw a YAML parse exception, because it tries to cast the width and height defined below from integer to double:

```yaml
height: 1080
width: 1920
```

For some very odd reason the cast does not work, so I had to change the values to:

```yaml
height: 1080.0
width: 1920.0
```
So my suggestion is: copy the files from /apollo/modules/perception/production/data/perception/camera/params/ (the directory linked below) into /apollo/modules/perception/data/params/, change the values to doubles to avoid the parse error, and let me know how that works. I don't recall the next error I had to solve after that, so if you come across another, please update here.
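In shell form, assuming the default flag path above:

```bash
# Copy the shipped intrinsics to where obs_sensor_intrinsic_path points
mkdir -p /apollo/modules/perception/data/params
cp /apollo/modules/perception/production/data/perception/camera/params/*_intrinsics.yaml \
   /apollo/modules/perception/data/params/
# Then edit height/width in each copied yaml to 1080.0 / 1920.0
```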
@natashadsouza @xiaoxq: If you guys can offer some advice on how to turn this info into documentation, I'll be happy to write it up and create a PR for it.
Looking at your second screenshot, the one that lists the files in the params directory: the front_6mm_intrinsics.yaml is missing, just as it was for me. I was able to find it under the production/data directory, as I linked above: https://github.com/ApolloAuto/apollo/tree/master/modules/perception/production/data/perception/camera/params
@aboarya Apologies for the delay in responding. You could use any document from the "how to" section (example) as a reference to create the document; we will review it once you have made a PR. Let me know if you have any questions!
I wanted to run the LIDAR perception module offline (similar to how we test end-to-end deep learning models) to benchmark its performance using object detection metrics on the demo data from the Apollo Open Data Platform. Can somebody tell me how to go about doing that? Navigating through the scripts is pretty confusing, and there is no mention of how to test this in the documentation. A prompt response would be highly appreciated.
Thanks in advance!