OpenSimulationInterface / open-simulation-interface

A generic interface for the environmental perception of automated driving functions in virtual scenarios.

Virtual Detection Area in SensorData #733

Closed. jruebsam closed this issue 4 months ago.

jruebsam commented 1 year ago

Virtual Detection Area

Currently it is not possible to determine from a SensorData message what the current area of observation of a sensor model is.

I would like to have some kind of message, let's say a Virtual Detection Area, which describes which FOV the current sensor model operates in. This could be a simple generic message which contains a repeated list of points, e.g.

Solution

message VirtualDetectionArea
{
    // List of points on the boundary of the detection area, sorted counter-clockwise
    // relative to the sensor mounting position and projected onto the ground surface.
    repeated Vector2d boundary_point = 1;
}

Since this message is related to the output of a sensor model, I would like to add it to the SensorData message. This would also be important for our current use case, which is focused on the visualization of SensorData messages.
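To illustrate how a visualizer might consume such boundary points, here is a minimal sketch in plain Python (no OSI bindings), assuming the boundary is a simple 2D polygon as proposed above. The ray-casting point-in-polygon test and the example wedge-shaped FOV are purely illustrative, not part of OSI:

```python
def point_in_polygon(px, py, polygon):
    """Ray-casting test: is (px, py) inside the polygon (list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the horizontal ray from (px, py) cross edge (x1, y1)-(x2, y2)?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

# Hypothetical wedge-shaped detection area in front of the sensor,
# given counter-clockwise in the sensor coordinate system (units: m).
boundary = [(0.0, 0.0), (50.0, -20.0), (50.0, 20.0)]

print(point_in_polygon(10.0, 0.0, boundary))   # target inside the FOV -> True
print(point_in_polygon(10.0, 30.0, boundary))  # target outside the FOV -> False
```

A visualizer would typically just draw the polygon, but the same boundary list also allows quick checks like the one above, e.g. to highlight objects inside the nominal FOV.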

jruebsam commented 11 months ago

@pmai @ThomasNaderBMW @PhRosenberger any suggestions how to proceed with this or if this should be discussed in a Subgroup?

PhRosenberger commented 11 months ago

@jruebsam if I understand you correctly, you would like to know what the simulated sensor could have seen (a.k.a. nominal FoV) in comparison to the actual output data? I wonder why it is restricted to a ground projection and not 3D or even 4D (incl. velocity or intensity). However, do you know our concept of a "Unified relevance region from sensor model knowledge" from this paper: https://tuprints.ulb.tu-darmstadt.de/18950/ ? - This seems related to your thoughts, right?

Are you attending the GSVF next week? - Would be great to discuss it there in person :-)

jruebsam commented 11 months ago

@PhRosenberger Thanks for the reply, yes, it would be something like a nominal FOV. In our use case we want to use this in a visualizer; that's why it's only 2D, since 3D FOVs can become quite confusing. Unfortunately I will not be able to join the GSVF, but thanks for the paper.

PhRosenberger commented 11 months ago

@jruebsam Oh, what a pity that we cannot meet this week!

However, for your use case, I see the existing GenericSensorView as an option: https://opensimulationinterface.github.io/osi-antora-generator/asamosi/latest/gen/structosi3_1_1GenericSensorView.html

What do you think?

jruebsam commented 11 months ago

@PhRosenberger I thought about that; however, GenericSensorView only contains an FOV and a mounting position, but not a range. It could be an option to add some kind of extension there.

PhRosenberger commented 11 months ago

Yes, this could be a good way to solve this issue. Would you mind starting a PR for this with the extension on GenericSensorViewConfiguration?
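As a sketch, such an extension could look like the following. The field number and the exact placement are hypothetical and would need to be checked against the existing GenericSensorViewConfiguration definition to avoid collisions:

```protobuf
// Sketch only: possible extension of GenericSensorViewConfiguration.
message GenericSensorViewConfiguration
{
    // ... existing fields (mounting position, horizontal/vertical FOV, ...) ...

    // Maximum detection range of the sensor.
    //
    // Unit: m
    //
    // Field number 10 is illustrative only.
    optional double range = 10;
}
```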

thomassedlmayer commented 7 months ago

@PhRosenberger I'm not against adding a range field to GenericSensorViewConfiguration in general, as I feel it would be a consistent addition to the FoV fields. I don't know why it wasn't added initially, though. But I think that, if it existed, it should only be used for the configuration of the input, which is quite different from an intended sensor model output range. E.g., requiring/providing only a specific excerpt of the ground truth to a sensor model is different from describing its intended output range.

Adding something like a VirtualDetectionArea to SensorData would IMO correctly indicate that it actually refers to the sensor model output.

But I also agree that we should at least offer 3D boundary points.
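A 3D variant of the proposal could look like the following sketch, using the existing osi3 Vector3d type. All field numbers and the placement inside SensorData are illustrative, not part of OSI:

```protobuf
// Sketch only: 3D variant of the proposed detection area.
message VirtualDetectionArea
{
    // Boundary points of the detection volume, given in the
    // sensor coordinate system.
    repeated Vector3d boundary_point = 1;
}

message SensorData
{
    // ... existing fields ...

    // Field number 100 is hypothetical and must not collide
    // with existing SensorData fields.
    optional VirtualDetectionArea virtual_detection_area = 100;
}
```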

PhRosenberger commented 7 months ago

OK, I think we then have two different cases:

  1. The maximum (and theoretical) FoV of the sensor overall, which I see in the SensorViewConfiguration. Here we should add the range, I guess, to cover it.
  2. The actual FoV at each time step, where the sensor is able to detect something, which could be addressed by the VirtualDetectionArea.

So we need two different PRs to solve this issue, right?

pmai commented 4 months ago

Addressed by #781