lgsvl / simulator

A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles

[Autoware] Enabling object detection/avoidance and traffic light recognition #135

Closed yanbec closed 4 years ago

yanbec commented 5 years ago

Hi there! First of all, thank you very much for this project.

I have the simulator up and running with Apollo but want to do a comparison with Autoware in the exact same scenarios, which seems to be possible with this simulator. Sadly, I can't really get Autoware working as I would like it to, especially object detection and traffic light recognition.

I tried everything so far with your Autoware fork based on 1.07, with Autoware 1.11 and their current master branch (1.11+master seem to offer at least a button for the simulator, so I figured there might be some kind of support for it). Currently I use the latest release version of the simulator on Linux.

What did I do? I cloned Autoware and autoware-data and built everything as described in the repo. I can start Autoware successfully, launch the *.launch files and rviz, and plan a route. With the current vector map, rviz sometimes gets the destination wrong, but most of the time it works. For easier testing I switched back to commit e3cfe709e4af32ad2ea8ea4de85579b9916fe516, which loads a lot faster.

How do I properly enable object detection/avoidance? I modified the detection launch file to load YOLOv3. It works after putting the weights in an accessible folder and defining their location in the launch file (nvidia-smi shows the process), and I can see it working in rviz by choosing the correct image and object rect topics in the ImageViewerPlugin. However, the car doesn't really seem interested in the information and keeps crashing into other cars if traffic is enabled.
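For reference, the relevant part of my modified detection launch file looked roughly like this. The package, launch-file, and argument names are from my memory of Autoware 1.11 and may differ in other versions, so treat this as a sketch rather than an exact diff:

```xml
<!-- Hypothetical detection.launch fragment swapping SSD for YOLOv3.
     Package/launch/arg names may differ across Autoware versions. -->
<include file="$(find vision_darknet_detect)/launch/vision_yolo3_detect.launch">
  <!-- Paths are placeholders; point them at wherever you put the files. -->
  <arg name="network_definition_file" value="/path/to/yolov3.cfg"/>
  <arg name="pretrained_model_file"   value="/path/to/yolov3.weights"/>
</include>
```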

Same question for traffic light recognition: how can I activate this? At the moment, the car is just ignoring everything around it. LiDAR object detection would be nice too, but as of now my focus is on vision based object avoidance.

I'm quite new to ROS (and Autoware/Apollo) and am trying to work through any kind of documentation I can get my hands on. I tried to understand the flow (and possibly missing piece) of information by looking through the launch files and RQT, but couldn't really find a solution to my problems.

If I missed any documentation that answers this, please point me to it - otherwise I would really appreciate a little step-by-step guide to driving through traffic safely; the poor car already doesn't look that new anymore.

If you need any further information I will happily provide anything I can.

Best regards, Yannick

Current config:
- lgsvl/autoware-data master@e3cfe709e4af32ad2ea8ea4de85579b9916fe516 (smaller map)
- lgsvl/autoware branch lgsvl_develop@4fdb2f8d1aa87ca28f1f4f6a47bf2bf27951a7d5 (current)
- simulator 2019.03
- Ubuntu 18.04

martins-mozeiko commented 5 years ago

We were also not able to get object detection/avoidance working in Autoware.

I think you should ask Autoware developers on how you are supposed to get it working. Maybe @hakuturu583 can help here.

yanbec commented 5 years ago

Thanks for your answer! I wrote a post on the Autoware Discourse - let's hope we get this running for all of us :smile:

shan-as commented 5 years ago

@yanbec @martins-mozeiko

Autoware's ring_ground_filter node requires proper header/data for intensity and ring: https://github.com/ros-drivers/velodyne/blob/664aef9802301e93d39fed7af59eec60169c86f4/velodyne_pointcloud/include/velodyne_pointcloud/point_types.h

It seems like they're missing: `Failed to find match for field 'intensity'` and `Failed to find match for field 'ring'`

This results in empty lidar point data on the points_no_ground topic, which makes the velocity_set node think there's no object detected anywhere.

Is this an easy fix?
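For reference, here's a minimal sketch of the field layout the filter seems to expect, with datatypes and offsets taken from the velodyne point_types.h linked above (the offsets follow PCL's padded PointXYZIR layout and are my reading of it, so double-check them):

```python
# sensor_msgs/PointField datatype codes (from the message definition)
FLOAT32, UINT16 = 7, 4

# Field layout of velodyne's PointXYZIR: x/y/z/intensity as float32,
# ring as uint16. (name, byte offset, datatype)
expected_fields = [
    ("x",         0,  FLOAT32),
    ("y",         4,  FLOAT32),
    ("z",         8,  FLOAT32),
    ("intensity", 16, FLOAT32),  # PointXYZIR pads xyz to 16 bytes
    ("ring",      20, UINT16),
]

def missing_fields(cloud_field_names):
    """Return the field names ring_ground_filter would fail to match."""
    return [name for name, _, _ in expected_fields
            if name not in cloud_field_names]

# A cloud carrying only x/y/z/intensity reproduces the 'ring' warning:
print(missing_fields({"x", "y", "z", "intensity"}))  # ['ring']
```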

martins-mozeiko commented 5 years ago

This should be an easy fix. We are preparing the point cloud data here: https://github.com/lgsvl/simulator/blob/master/Assets/Scripts/LidarSensor/LidarSensor.cs#L569 It already has an intensity field. Maybe we need to change its data type from "unsigned byte" to "float"? The only new thing that needs to be added is ring.

Preparing the data happens in a loop above it (lines 539 to 555). The "ring" value is `i % CurrentMeasurementsPerRotation`.
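As a sketch, the suggested ring assignment recovers each point's ring from its flat buffer index. Whether `%` or `//` by `CurrentMeasurementsPerRotation` is the right operator depends on the loop order in LidarSensor.cs, so verify against the actual buffer layout; this just illustrates the formula as stated:

```python
def ring_for_index(i, measurements_per_rotation):
    """Ring value for the i-th point in the flat buffer,
    per the suggestion above: i % measurements_per_rotation."""
    return i % measurements_per_rotation

# With e.g. 16 measurements per rotation, indices cycle 0..15:
rings = [ring_for_index(i, 16) for i in range(34)]
print(rings[:4], rings[16:20])  # [0, 1, 2, 3] [0, 1, 2, 3]
```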

shan-as commented 5 years ago

@martins-mozeiko Thank you for your prompt reply! When will this fix be released? Or how soon can I test this in my environment?

shan-as commented 5 years ago

@yanbec @martins-mozeiko Found a workaround: use ray_ground_filter instead of ring_ground_filter. ray_ground_filter does not require either of the fields I mentioned above, so there is no immediate need to change the lidar simulation data.
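Roughly, the swap looks like this in the launch file. The package and file names here follow my recollection of Autoware's points_preprocessor package and may not match every version, so check your own tree first:

```xml
<!-- Hypothetical edit: comment out the ring-based filter and include
     the ray-based one instead. Verify paths in your Autoware checkout. -->
<!-- <include file="$(find points_preprocessor)/launch/ring_ground_filter.launch"/> -->
<include file="$(find points_preprocessor)/launch/ray_ground_filter.launch"/>
```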

shan-as commented 5 years ago

Hi @martins-mozeiko,

I'm having an issue while trying to get traffic light recognition working. The feat_proj node is projecting roi_signal boxes in the wrong areas; see the attached image (black boxes are drawn from roi_signal).

I suspect the lidar-to-camera extrinsics are off. Is there a way to verify the tf? Is there a way to redo the calibration?
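One way to sanity-check the extrinsics is to project a known 3D point by hand with a minimal pinhole model and compare it to where feat_proj draws the ROI. All the calibration values below are placeholders, not the real ones from the setup:

```python
import numpy as np

# Placeholder calibration - substitute the real intrinsics/extrinsics
# from your camera_info and calibration YAML before comparing.
K = np.array([[1000.0,    0.0, 640.0],   # fx,  0, cx
              [   0.0, 1000.0, 360.0],   #  0, fy, cy
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                # lidar -> camera rotation
t = np.array([0.0, 0.0, 0.0])  # lidar -> camera translation

def project(p_lidar):
    """Project a 3D point (lidar frame) to pixel coordinates (u, v)."""
    p_cam = R @ p_lidar + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# A point 10 m straight ahead (camera +z) should land at the image
# center with this identity extrinsic:
print(project(np.array([0.0, 0.0, 10.0])))  # [640. 360.]
```

If a traffic light's surveyed position projects far from where its ROI box lands, the extrinsics (or the vector map's signal coordinates) are the likely culprit.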

shan-as commented 5 years ago

Also, has this simulation's vector map for Autoware been tested for stop signs and traffic lights? https://www.youtube.com/watch?v=NgW1P75wiuA At 6:33-6:38 it looks like the vehicle just ignores the stop sign.

SrinivasRavi commented 5 years ago

Hi @shan-as, in the video you mentioned, 6:33-6:38 looks like part of the Apollo demo, not Autoware. The Apollo demo starts at 4:54 and the Autoware demo starts at 4:10.

shan-as commented 5 years ago

@SrinivasRavi I think you're right. After rewatching it, it shows Apollo's Dreamview on the same route (a left turn without stopping at the intersection). Either way, I don't see any interaction with traffic signs in Autoware's vector map.

martins-mozeiko commented 5 years ago

Yeah, since we could not run Autoware perception before, we don't really know how well it works with stop signs and traffic lights. It may require enabling extra parameters... We don't really know the internals of Autoware.

yanbec commented 5 years ago

Hi! I sincerely apologize for the delay in answering - I fixated on the ROS Discourse and didn't look at my GitHub notifications. Thanks so much for taking an interest in the issue. I will go over your posts now and see if there's anything I can do about this - but given my level of experience with Autoware, ROS and the simulator, I don't think I can contribute much. I will check back more often, that's for sure :+1:

cyberphysicalrobotics commented 5 years ago

As for traffic lights and stop signs in Autoware, David pushed a commit to fix the calibration: https://github.com/lgsvl/autoware-data/commit/3a51cd48a1726f681605248f993bf27282a24b8e

After applying this commit, it works as shown in the following screenshot.

yanbec commented 5 years ago

The calibration fix clearly made things a lot better with TLR, although some lights are still missing or their ROI is off. Still, the car doesn't seem interested in the available information. This goes for both TLR and object detection via SSD or YOLO.

I uploaded a video (right at the beginning) that shows the car running over a red light that was clearly detected as red before passing the stop line. Is there just something wrong in my setup? How do I pass the information on and put it to use?

Here are videos showing the behavior with SSD (crash at about 2:25) and YOLOv3.

shan-as commented 5 years ago

@yanbec Where did you get the models/weights for traffic light detection and object classifier? (This is not related to the issue. I'm just curious)

yanbec commented 5 years ago

@shan-as I took the YOLOv3 config & weights from the homepage of the original author of the YOLO papers. They work quite well for everything I have tested! Then I modified the detection.launch file to use YOLO/Darknet instead of SSD. I recommend reading the YOLO papers too, especially the version 3 one, as it's probably the most fun technical paper I've read so far.

As for SSD and the TLR: if I remember correctly, I did nothing to make that work besides following the instructions in this and the lgsvl/autoware repository. Correct me if I'm wrong and I will check where I got it.

Edit: the weights are certainly great for testing in a lot of use cases. For production use it should be possible to exclude many of the existing object classes and thereby either make the network smaller and faster, or trade speed for more accuracy. But I guess that's a question for another time/place :-)

yanbec commented 5 years ago

Hi! Is there any news regarding this issue? I still haven't figured out how to make use of the information obtained from TLR and camera-based object detection.

As for the LiDAR object detection: @martins-mozeiko Changing the data type of the intensity field gets rid of the warning/error about it. So that's a start, I think - but I don't know how to implement the ring value.

@shan-as How did you change to ray_ground_filter? Can you detect objects and avoid collisions?