Thanks in advance. In the Autoware video, it could detect pedestrians and traffic signals. Could you tell me how many cameras Autoware uses and what type of cameras they are? Thanks again.

We use at least two cameras: one dedicated to traffic light recognition and the other used for generic detection.
What is the interface of the cameras? USB? Ethernet?
We don't mandate specific cameras; it's up to your preference. You just need a driver. By default we support drivers for Point Grey Grasshopper/Ladybug, Baumer, and generic USB cameras such as webcams.
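For the generic USB case, something as simple as an OpenCV capture loop is enough to confirm the camera delivers frames before hooking it into ROS. The sketch below is not Autoware's camera driver node, just a minimal illustration (assuming OpenCV is installed) of grabbing frames from the first USB camera; in Autoware those frames would then be published as a ROS image topic for the detection nodes to consume.

```cpp
// Minimal capture loop for a generic USB webcam using OpenCV.
// Not Autoware's driver node -- only a sketch showing that any camera whose
// driver can hand you frames is usable.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap(0);            // /dev/video0, the first USB camera
    if (!cap.isOpened()) {
        std::cerr << "Could not open the USB camera" << std::endl;
        return 1;
    }

    cv::Mat frame;
    while (cap.read(frame)) {           // grab one frame per iteration
        // In Autoware this frame would be converted (e.g. via cv_bridge)
        // and published on an image topic for the detection nodes.
        cv::imshow("usb_camera", frame);
        if (cv::waitKey(1) == 27) break;    // ESC to quit
    }
    return 0;
}
```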
Thanks very much.
How are the imaging algorithms integrated into mission and motion planning? Does motion planning use pedestrian detection or traffic light detection at all?
How does motion planning know that it needs to avoid collisions with objects or stop at a traffic light?
I have the following graph for enabling motion planning:
[Attached graph: image001.jpg]
I could not see your graph; could you add a link to your jpg file?
Please find the attached graph. I would like to know how to add object recognition and traffic sign/light detection into the mission/motion planning flow.
Regards
Nandini
Please find the attached graph.
NOTE: Email replies drop attached files. You need to use the GitHub web interface instead of replying by email to attach a file to this issue.
@nsakar, regarding how to add object recognition and traffic sign/light detection into the mission/motion planning flow:
In Autoware, a node named obj_reproj converts a detected object's position on the image into a 3D position. In other words, the object's 2D image coordinates are transformed into the 3D coordinate system using the positional relationship between the camera and the lidar. Motion planning then uses that 3D object coordinate to avoid collisions. (This integration is in progress and not yet included in the master branch.)
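To make the geometry concrete, here is a minimal sketch of the reprojection idea, not the actual obj_reproj code. It assumes you already have the pixel position of a detection, a depth estimate for it (for example, from lidar points falling inside the detection box), the camera intrinsics, and the camera-to-lidar extrinsics from calibration; it back-projects the pixel into the camera frame and then transforms it into the lidar frame.

```cpp
// Sketch of the 2D -> 3D reprojection idea. All names and numbers are
// illustrative, not Autoware's actual message types.
#include <cstdio>

struct Vec3 { double x, y, z; };

struct CameraIntrinsics { double fx, fy, cx, cy; };

// Row-major 3x3 rotation plus translation: p_lidar = R * p_camera + t
struct Extrinsics { double R[3][3]; Vec3 t; };

// Back-project pixel (u, v) with known depth into the camera frame,
// then transform it into the lidar (vehicle) frame.
Vec3 reprojectToLidarFrame(double u, double v, double depth,
                           const CameraIntrinsics& K, const Extrinsics& T) {
    // Pinhole model: camera-frame coordinates of the object.
    Vec3 p_cam;
    p_cam.x = depth * (u - K.cx) / K.fx;
    p_cam.y = depth * (v - K.cy) / K.fy;
    p_cam.z = depth;

    // Apply the camera-to-lidar rigid transform from calibration.
    Vec3 p_lidar;
    p_lidar.x = T.R[0][0] * p_cam.x + T.R[0][1] * p_cam.y + T.R[0][2] * p_cam.z + T.t.x;
    p_lidar.y = T.R[1][0] * p_cam.x + T.R[1][1] * p_cam.y + T.R[1][2] * p_cam.z + T.t.y;
    p_lidar.z = T.R[2][0] * p_cam.x + T.R[2][1] * p_cam.y + T.R[2][2] * p_cam.z + T.t.z;
    return p_lidar;
}

int main() {
    CameraIntrinsics K{1000.0, 1000.0, 640.0, 360.0};               // example values only
    Extrinsics T{{{1, 0, 0}, {0, 1, 0}, {0, 0, 1}}, {0.5, 0.0, -0.3}};
    Vec3 p = reprojectToLidarFrame(700.0, 400.0, 12.0, K, T);
    std::printf("object at (%.2f, %.2f, %.2f) in the lidar frame\n", p.x, p.y, p.z);
    return 0;
}
```

The important input is the camera-lidar calibration: with a wrong extrinsic transform, the reprojected obstacle lands in the wrong place and motion planning cannot avoid it reliably.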
With regard to traffic light recognition, a node in the mission planning package subscribes to the recognition result. It switches the base lane that the planner tries to follow according to the detected traffic light state. The only difference between the lanes for the red and green signals is the target velocity contained in the waypoints around the intersection.
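As a rough illustration of that last point (illustrative names and numbers, not Autoware's actual data structures): two waypoint sets with identical geometry, differing only in the target velocity near the stop line, and the planner follows whichever one the detected light state selects.

```cpp
// Sketch of the base-lane switching idea: same road geometry, only the target
// velocity around the intersection differs between the red and green lanes.
#include <vector>
#include <cstdio>

enum class LightState { GREEN, RED };

struct Waypoint { double x, y, target_velocity_mps; };

using Lane = std::vector<Waypoint>;

// Pick the base lane according to the recognised traffic light state.
const Lane& selectBaseLane(LightState state, const Lane& green_lane, const Lane& red_lane) {
    return (state == LightState::RED) ? red_lane : green_lane;
}

int main() {
    // Identical positions; only the velocity toward the stop line differs.
    Lane green_lane = {{0.0, 0.0, 8.0}, {10.0, 0.0, 8.0}, {20.0, 0.0, 8.0}};  // keep moving
    Lane red_lane   = {{0.0, 0.0, 8.0}, {10.0, 0.0, 4.0}, {20.0, 0.0, 0.0}};  // decelerate, stop

    const Lane& lane = selectBaseLane(LightState::RED, green_lane, red_lane);
    for (const Waypoint& wp : lane) {
        std::printf("waypoint (%.1f, %.1f) target velocity %.1f m/s\n",
                    wp.x, wp.y, wp.target_velocity_mps);
    }
    return 0;
}
```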
Which branch has the in-progress integration of obj_reproj with motion planning?