lgsvl / simulator

A ROS/ROS2 Multi-robot Simulator for Autonomous Vehicles

Need help to write sensor script for a custom smart camera #831

Closed · ashwinsarvesh closed this issue 4 years ago

ashwinsarvesh commented 4 years ago

I am using an ADAS camera that has lane departure warning, traffic sign recognition, and forward collision warning. I need to integrate this smart camera with LGSVL so that the input data from the simulator is fed to the camera and the camera applies the above features to the incoming data. I am doing this to evaluate camera performance, so I do not need to communicate with Apollo.

I need help writing the OnBridgeSetup and OnVisualize methods for the above requirements. I looked at the other sensor scripts but did not find a way to write these methods. Kindly help! Thank you.

rongguodong commented 4 years ago

@ashwinsarvesh I am glad to provide more information: OnBridgeSetup sets up the bridge and the publisher method used to send out the messages that the sensor generates. The bridge can be the ROS bridge, the Cyber bridge, or a custom bridge. You can refer to the existing sensors to see how they implement the OnBridgeSetup function. OnVisualize is a helper function to visualize the sensor information inside the simulator. It has nothing to do with the AV stack; it is just a debug tool. Depending on the nature of the sensor, its messages can be visualized in the main window (e.g. 3D bounding boxes, Lidar, Radar) or in a pop-out window (e.g. all camera sensors and abstract sensors like GPS/IMU/Control, etc.).

If you have more questions, please feel free to ask here.

EricBoiseLGSVL commented 4 years ago

@ashwinsarvesh Just make the camera's OnBridgeSetup and OnVisualize methods empty. If no bridge is needed, no connection is needed. OnVisualize is only for UX within the simulator.
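
For reference, a minimal stub along those lines could look roughly like the C# sketch below. The class and sensor names are made up, and the exact base-class signatures, attribute arguments, and namespaces differ between simulator versions, so copy the boilerplate from one of the built-in sensors under Assets/Scripts/Sensors rather than from here:

```csharp
using Simulator.Bridge;       // check these namespaces against the built-in
using Simulator.Sensors.UI;   // sensors in your simulator version
using Simulator.Utilities;
using UnityEngine;

namespace Simulator.Sensors
{
    // Hypothetical sensor; it declares no bridge data types because it
    // never publishes anything over a bridge.
    [SensorType("Smart ADAS Camera", new System.Type[] { })]
    public class SmartAdasCameraSensor : SensorBase
    {
        public override void OnBridgeSetup(IBridge bridge)
        {
            // Intentionally empty: no ROS / Cyber / custom bridge connection.
        }

        public override void OnVisualize(Visualizer visualizer)
        {
            // Intentionally empty: nothing to draw inside the simulator UI.
        }

        public override void OnVisualizeToggle(bool state)
        {
            // Intentionally empty.
        }
    }
}
```

If you later do need to publish messages, the built-in sensors show how OnBridgeSetup registers a publisher on the bridge for the sensor's data type.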

ashwinsarvesh commented 4 years ago

Thank you for your inputs, @rongguodong @EricBoiseLGSVL. Now I know how to add the sensor plugin. How do I physically connect the camera to the simulator? Is there any standard way, like USB?

rongguodong commented 4 years ago

Why do you want to physically connect the camera to the simulator? All "camera sensors" in our simulator are virtual. The simulator does not take any input from any physical sensors.

ashwinsarvesh commented 4 years ago

Oh, I did not know that, @rongguodong. So, is it enough if I just add the sensor plugin for my camera to work on the simulated vehicle on the screen? How does this concept work? Can you explain?

rongguodong commented 4 years ago

You can check the current ColorCameraSensor, DepthCameraSensor, SegmentationCameraSensor to understand how they work.

The basic concept is to render the scene and then read back the GPU content as the message that is sent out.
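
As a rough illustration of that idea, here is a sketch using plain Unity APIs (this is not the simulator's ColorCameraSensor code; the resolution, format, and update cadence are arbitrary placeholders):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Render the scene into a RenderTexture, then asynchronously copy the pixels
// back from the GPU so they can be packaged into an outgoing message.
public class RenderReadbackSketch : MonoBehaviour
{
    public Camera SensorCamera;   // the virtual camera attached to the ego vehicle
    RenderTexture rt;

    void Start()
    {
        rt = new RenderTexture(1920, 1080, 24);   // arbitrary resolution
        SensorCamera.targetTexture = rt;
    }

    void Update()
    {
        // A real sensor throttles this to its configured frame rate.
        SensorCamera.Render();
        AsyncGPUReadback.Request(rt, 0, TextureFormat.RGBA32, OnReadback);
    }

    void OnReadback(AsyncGPUReadbackRequest request)
    {
        if (request.hasError)
            return;

        var pixels = request.GetData<byte>();   // raw RGBA bytes from the GPU
        // The camera sensors wrap bytes like these into the image message
        // that gets sent over the bridge (or compressed first).
    }
}
```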

lemketron commented 4 years ago

It might help to understand what your actual goal is -- are you wanting to generate data from simulation similar to what the real camera would generate so you can then act on it (or test it) with some other software?

If I understand correctly, you could implement in your custom sensor code the ability to detect lane departure, recognize traffic signs, and warn of forward collisions, because in the simulator you have access to all of the objects that the camera can see -- the ground truth. So if you know the messages that your physical camera would output over USB, you can generate those same messages from your "smart ADAS camera" sensor. You don't have to analyze the image; you just look at the objects in the scene, the position of the vehicle in the lane, etc., and generate these high-level messages. Look at the ground truth sensor and the radar sensor to understand how to detect objects.
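
To make that concrete, here is a toy example of the idea in plain Unity code (it is not the simulator's ground truth or radar sensor implementation, and the distance threshold is an arbitrary placeholder): instead of analyzing pixels, it queries the scene directly and emits the high-level warning.

```csharp
using UnityEngine;

// Toy ground-truth style check: look at scene geometry instead of pixels
// and produce the same kind of high-level message the physical camera would.
public class ForwardCollisionCheck : MonoBehaviour
{
    public float WarningDistance = 20f;   // placeholder threshold, in meters

    void FixedUpdate()
    {
        // Cast a ray straight ahead from the (virtual) camera mount point.
        if (Physics.Raycast(transform.position, transform.forward, out RaycastHit hit, WarningDistance))
        {
            // In a real plugin, this is where you would build and publish the
            // same message your physical ADAS camera outputs (e.g. over USB).
            Debug.Log($"Forward collision warning: {hit.collider.name} at {hit.distance:F1} m");
        }
    }
}
```

Lane departure and traffic sign recognition can follow the same pattern, using the lane data and sign objects the simulator already knows about.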

ashwinsarvesh commented 4 years ago

I want to add my smart ADAS camera as a plugin and place it in front of the simulator screen. It has to work on the simulated vehicle just like it would when fitted in a real car. I mean, it should be able to apply lane departure warning, traffic sign recognition, and forward collision warning to the simulated ego vehicle.

I am not clear on how this happens if I just add the camera as a plugin to the simulator without physically connecting it.

I am a beginner, so sorry if the question sounds silly.

lemketron commented 4 years ago

If you're not trying to simulate that Smart ADAS camera but rather just aiming it at the simulator screen, then there's no reason to write a custom sensor plug-in for the simulator. You just need to use the PythonAPI to script the simulator to generate whatever imagery you wish to show on the screen, and then it's up to the camera to process what it sees.

ashwinsarvesh commented 4 years ago

@lemketron Ok, yes. But in this case, how do I connect the camera to the simulator? My camera has the ability to process the images for lane departure warning, traffic sign recognition, and forward collision warning. It just has to see the simulated vehicle and work. Kindly help!

rongguodong commented 4 years ago

So, what you need is not a camera sensor plugin, but maybe a bridge plugin or nothing, depending on how you use it.

I am not sure how this smart camera works. Does it just take normal RGB images? In other words, are the optical part and the other physical parts the same as a normal camera's, with only the algorithm in its driver processing the camera image? If so, you may simply point the camera at the screen, as if it were mounted on the ego car. Or you may build a bridge plugin (or something similar to a bridge, depending on how the raw data is sent to its driver) and send images from ColorCameraSensor to its driver.

ashwinsarvesh commented 4 years ago

@rongguodong I will get back to you regarding how it works.
Thank you

lemketron commented 4 years ago

> @lemketron Ok, yes. But in this case, how do I connect the camera to the simulator? My camera has the ability to process the images for lane departure warning, traffic sign recognition, and forward collision warning. It just has to see the simulated vehicle and work. Kindly help!

I’m not understanding why you need to connect the camera to the simulator at all. The simulator will generate the image in front of the simulated ego vehicle and your camera will “see” that rendered on your display.

rongguodong commented 4 years ago

> > @lemketron Ok, yes. But in this case, how do I connect the camera to the simulator? My camera has the ability to process the images for lane departure warning, traffic sign recognition, and forward collision warning. It just has to see the simulated vehicle and work. Kindly help!
>
> I’m not understanding why you need to connect the camera to the simulator at all. The simulator will generate the image in front of the simulated ego vehicle and your camera will “see” that rendered on your display.

Yes, you do not need to connect the physical camera to the simulator. Depending on how you use it, you may:

  1. Simply point the physical camera to the screen; or
  2. Connect the simulator to the machine running the driver of this smart camera, and send data generated by the simulator to its driver. (In this way, you do not need the physical device at all. You only need its driver.)
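
For option 2, one very rough sketch of a transport (assuming, purely for illustration, that the camera's driver could accept length-prefixed JPEG frames over a plain TCP socket; the real driver dictates the actual protocol and data format):

```csharp
using System;
using System.Net.Sockets;
using UnityEngine;

// Hypothetical frame forwarder: pushes JPEG-encoded frames rendered by the
// simulator to the machine running the smart camera's driver.
public class FrameSender : IDisposable
{
    readonly TcpClient client;
    readonly NetworkStream stream;

    public FrameSender(string host, int port)   // host/port are placeholders
    {
        client = new TcpClient(host, port);
        stream = client.GetStream();
    }

    public void Send(Texture2D frame)
    {
        byte[] jpeg = frame.EncodeToJPG();                  // compress the frame
        byte[] length = BitConverter.GetBytes(jpeg.Length); // 4-byte length prefix
        stream.Write(length, 0, length.Length);
        stream.Write(jpeg, 0, jpeg.Length);
    }

    public void Dispose()
    {
        stream.Dispose();
        client.Dispose();
    }
}
```

The frames themselves would come from a read-back path like the one sketched earlier for ColorCameraSensor.
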
ashwinsarvesh commented 4 years ago

Yes, I do not need to physically connect the camera to the simulator. Thank you for the information :) I place the camera in front of the simulator and it captures the images from the simulated screen. Next, I want this captured camera data to be sent to Apollo. Is it possible?

lemketron commented 4 years ago

> Yes, I do not need to physically connect the camera to the simulator. Thank you for the information :) I place the camera in front of the simulator and it captures the images from the simulated screen. Next, I want this captured camera data to be sent to Apollo. Is it possible?

That is a question for someone who works with Apollo. The Apollo perception module expects to read input from a camera topic ("/apollo/sensor/camera/..."). I'm not sure how Apollo would need to change to support your camera. I would suggest looking through the Apollo documentation to see if it helps you understand what you need to do -- but clearly those changes are with Apollo code and not with the simulator. You could also try posting an issue to the Apollo github issues page.

ashwinsarvesh commented 4 years ago

Thank you for the guidance