OpenSimulationInterface / open-simulation-interface

A generic interface for the environmental perception of automated driving functions in virtual scenarios.

Additional parameters for sensor models #99

Open hschoen opened 6 years ago

hschoen commented 6 years ago

OSI should transport all information that sensor models may require. From the radar point of view, I suggest adding the following parameters:

radar_gen: Additional parameters for a phenomenological sensor model

radar_phys: Additional parameters for a physical sensor model (in OSI low-level data)

Some of these parameters are already part of OSI but are listed here for completeness.

For camera and LiDAR (and other sensors such as ultrasonic), the parameters have to be completed as well.
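Purely for illustration of what such a parameter set might contain (all field names and values below are placeholders of my own, not OSI message definitions), a phenomenological radar parameter block along the lines of radar_gen could be sketched like this:

```python
from dataclasses import dataclass

@dataclass
class PhenomenologicalRadarParams:
    """Hypothetical parameter set for a phenomenological radar model (illustration only)."""
    max_range_m: float = 250.0            # maximum detection range
    azimuth_fov_rad: float = 1.57         # horizontal field of view
    elevation_fov_rad: float = 0.35       # vertical field of view
    range_resolution_m: float = 0.5       # separability of targets in range
    azimuth_resolution_rad: float = 0.02  # separability of targets in azimuth
    carrier_frequency_hz: float = 77e9    # e.g. a 77 GHz automotive radar
```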

martin-fzd commented 6 years ago

For LiDAR we need a kind of beam matrix describing in which directions the rays (i.e. laser beams) are to be shot. Here we define the total number of elevation layers as well as the spacing in azimuth (angular resolution). Maybe this is doable with an external sensor config which instantiates specific sensors, see #98? From there we could also read the beam divergence and the wavelength of the LiDAR.

We also need an interface allowing for incorporating intrinsic calibration (important for intensity calculations!) as well as parameters for ray steering, i.e. a rotation rate imitating the steering of laser beams (e.g. from left to right).
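To make the beam matrix idea concrete, here is a minimal sketch (the parameter names are my own assumptions, not OSI definitions) that generates one (azimuth, elevation) direction per laser beam from the number of elevation layers and the azimuth spacing:

```python
import math

def lidar_beam_matrix(num_layers, elevation_min_rad, elevation_max_rad,
                      azimuth_resolution_rad, azimuth_fov_rad):
    """Return a list of (azimuth, elevation) pairs, one per laser beam (illustrative sketch)."""
    beams = []
    num_azimuth_steps = int(azimuth_fov_rad / azimuth_resolution_rad)
    for layer in range(num_layers):
        # evenly spaced elevation layers between the minimum and maximum elevation
        if num_layers > 1:
            elevation = elevation_min_rad + layer * (elevation_max_rad - elevation_min_rad) / (num_layers - 1)
        else:
            elevation = elevation_min_rad
        for step in range(num_azimuth_steps):
            azimuth = -azimuth_fov_rad / 2 + step * azimuth_resolution_rad
            beams.append((azimuth, elevation))
    return beams
```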

jondo2010 commented 6 years ago

Possibly related question about the interface intentions: How is a model implementing a LiDAR sensor supposed to generate its low-level point cloud with only GroundTruth input? Doesn't it need access to some sort of geometry to perform intersection/ray-cast queries upon?

martin-fzd commented 6 years ago

I agree. Material assignments and geometric properties must be considered for simulation model validation. Not sure whether this will be part of OSI...? It will be even more challenging if we don't have a "newly delivered, just washed" approximation for the appearance of objects, which also affects camera models. Should we think about a degree of dirtiness, similar to what we have found for rain?

Here is my proposal for now: a large car model catalogue is available in the tools, and the "GroundTruth" objects could be made parameterizable. Basic materials could be assigned to parts of the objects (let's start with glass, metal, and plastic for simplicity), which allows for including specific material reflection properties for different sensors.
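As an illustration of how such a per-part material assignment could be consumed by a sensor model (the material names follow the proposal above; the reflection values are placeholders, not validated data):

```python
# Hypothetical lookup: per-material reflection properties for different sensor types.
MATERIAL_PROPERTIES = {
    "glass":   {"radar_reflectivity": 0.05, "lidar_reflectivity": 0.10, "camera_specularity": 0.90},
    "metal":   {"radar_reflectivity": 0.95, "lidar_reflectivity": 0.60, "camera_specularity": 0.70},
    "plastic": {"radar_reflectivity": 0.30, "lidar_reflectivity": 0.40, "camera_specularity": 0.30},
}

def reflectivity_for(material_code: str, sensor_property: str) -> float:
    """Look up a reflection property for a part's assigned material code (sketch)."""
    return MATERIAL_PROPERTIES[material_code][sensor_property]
```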

ghost commented 6 years ago

@martin-fzd maybe you can get in touch, because I think you are working on such a LiDAR module? There will be a big low-level data update with the next release, by the way.

PMeyerDSPACE commented 6 years ago

Regarding the Radar Low Level Data, it might be a good idea to include Doppler shift information for each ray in addition to the signal strength. Also, for each intersection point, additional velocity data could be included (e.g. the velocity of the interaction point in x, y, z directions). Additional data such as the material, normal vector, etc. could be specified as well (might be useful in some cases).
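For reference, the Doppler shift follows directly from the radial velocity of the interaction point; a minimal sketch under the assumption of a monostatic radar (variable names are my own):

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def doppler_shift_hz(radial_velocity_mps: float, carrier_frequency_hz: float) -> float:
    """Two-way Doppler shift of a reflection moving with the given radial velocity
    (positive = towards the sensor); monostatic radar approximation f_d = 2 * v_r * f_c / c."""
    return 2.0 * radial_velocity_mps * carrier_frequency_hz / SPEED_OF_LIGHT
```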

Also, I think it's very important to define the coordinate systems properly, e.g. is the "interaction point position x, y, z" in RX-antenna, sensor, car, or world coordinates? The same goes for the azimuth and elevation angles of the beams (e.g., does 0 azimuth mean centered, or something else?).

At least some of the data is also possibly redundant. For example, the TX and RX angles of the beams can be calculated from the intersection point positions (e.g., assuming the intersection points are in the sensor coordinate system, the TX angle would be the angle to intersectionPoint[0] and the RX angle would be the angle to intersectionPoint[count-1]). This would result in less data being transferred, but it means additional calculations on the receiving side.
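To make the redundancy argument concrete, here is a sketch (assuming the intersection points are given in sensor coordinates with x forward, y left, z up; the field names are illustrative) of how the TX and RX angles could be recovered from the intersection points:

```python
import math

def angles_to_point(p):
    """Azimuth/elevation of a point given in sensor coordinates (x forward, y left, z up)."""
    x, y, z = p
    azimuth = math.atan2(y, x)
    elevation = math.atan2(z, math.hypot(x, y))
    return azimuth, elevation

def tx_rx_angles(intersection_points):
    """TX angle = direction of the first interaction point, RX angle = direction of the last one."""
    tx = angles_to_point(intersection_points[0])
    rx = angles_to_point(intersection_points[-1])
    return tx, rx
```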

hschoen commented 6 years ago

@PMeyerDSPACE: You are right, I missed that point in the table: Doppler shift information for each (received) ray must be included. I also agree that it should be possible to add information about velocity, normal vector, and material properties for each intersection. However, these are only optional parameters for validation, as the sensor does not receive this information in reality.

Concerning the coordinate system, I propose to use sensor coordinates, because the rays emerge from the sensor and so the ray tracer most likely also uses sensor coordinates. Otherwise, the ray tracer has to convert to world coordinates and the sensor model back to sensor coordinates.

Speaking of azimuth and elevation angles, my understanding is the following: The x-axis of the sensor coordinate system is defined by the normal vector of the sensor aperture (azimuth = elevation = 0 degrees). The z-axis is orthogonal to the x-axis and at the same time minimises the angle to the zenith direction (if the x-direction lies in the horizontal plane, the z-direction is identical to the zenith direction). Finally, the y-axis is orthogonal to the x- and z-axes, and all axes form a right-hand system. Rotating the x-axis toward the y-axis defines the direction of positive azimuth angles; rotating the x-axis toward the z-axis defines the direction of positive elevation angles.

Now for the redundant data: First, the coordinates of the intersection points are only optional, as this information is not required (and not available for a real sensor) and is only needed for other purposes such as validation. Second, only the first and last angles to the intersection points are redundant. In general, there are more intersection points due to multi-path propagation, so the angles to the intersection points cannot be omitted entirely.
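A small sketch of the axis construction described above (given the aperture normal as x-axis and the world zenith direction; the function and argument names are illustrative, and the construction degenerates if the aperture normal points straight up):

```python
import numpy as np

def sensor_frame(aperture_normal, zenith=np.array([0.0, 0.0, 1.0])):
    """Construct the right-handed sensor frame described above:
    x = aperture normal, z = orthogonal to x with minimal angle to zenith, y = z x x."""
    x = aperture_normal / np.linalg.norm(aperture_normal)
    # Project the zenith direction onto the plane orthogonal to x and normalize;
    # this is the direction orthogonal to x that minimises the angle to the zenith.
    z = zenith - np.dot(zenith, x) * x
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)  # completes the right-hand system (x cross y = z)
    return x, y, z
```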

PMeyerDSPACE commented 6 years ago

@hschoen :

Concerning the coordinate system, I propose to use sensor coordinates, because the rays emerge from the sensor and so the ray tracer most likely also uses sensor coordinates. Otherwise, the ray tracer has to convert to world coordinates and the sensor model back to sensor coordinates.

I agree. One more thing to consider, though: real sensors may use multiple TX and RX antennas, which are physically not in exactly the same location. So, for example, if the ray tracer launches all rays from the TX antenna, the ray launch point might be slightly different from the receive point of the RX antenna. Also, the same ray may be received multiple times with slightly different properties (such as length and angles) for each RX antenna.

One solution would be to ignore the physical details of the sensor and always treat it as a "point", launching and receiving all rays exactly at the center of the aperture. However, this may lead to errors when trying to generate raw ADC data for individual RX antenna channels (this might be necessary to get angle estimation based on phase differences to work).

The solution we currently use at dSPACE is to generate data for each RX antenna separately and treat all interaction points relative to the RX antenna coordinate system. This way, each path gets calculated correctly for all RX channels, allowing ADC data generation for each RX antenna. However, this means additional computational work for the ray tracer and also assumes knowledge of the physical properties of the specific sensor under simulation. An alternative might be to just calculate the data for the virtual sensor center point and expand it in a post-processing step if necessary.
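To illustrate the per-antenna treatment (a sketch under the assumption that the TX position, interaction points, and RX antenna positions are given in one common sensor frame; this is not the actual dSPACE implementation):

```python
import numpy as np

def path_length_for_rx(tx_position, interaction_points, rx_position):
    """Total propagation path length TX -> interaction points -> RX for one RX antenna."""
    points = [np.asarray(tx_position)] + [np.asarray(p) for p in interaction_points] + [np.asarray(rx_position)]
    return sum(np.linalg.norm(b - a) for a, b in zip(points, points[1:]))

def phase_at_rx(path_length_m, wavelength_m):
    """Carrier phase of the received signal for the given path length (modulo 2*pi);
    phase differences between RX antennas are what angle estimation relies on."""
    return (2.0 * np.pi * path_length_m / wavelength_m) % (2.0 * np.pi)
```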

pmai commented 6 years ago

The commit e622b6e above addresses the undercarriage clearance, which can be contained in ground truth data, since it is not sensor-relative. The remainder of the attributes are either sensor-relative (e.g. occlusion) or config-time items (e.g. sensor type) and will thus be handled elsewhere.

mbencik999 commented 6 years ago

How is a model implementing a LiDAR sensor supposed to generate its low-level point cloud with only GroundTruth input?

The ray tracing algorithm needs geometries, or more precisely meshes, to perform ray tracing; point clouds are tricky because they need to be meshed before the ray tracing. The thing with material properties is that they need to be assigned per triangle, or a material code is assigned and the properties are then gathered from a database or a lookup table. But there are even more variables that affect the LiDAR, like rain, fog, dust, multiple reflections from vegetation, and so on...
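A sketch of the per-triangle material-code idea (the data layout and names here are my own assumptions; the property table could be something like the material lookup illustrated earlier in this thread):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Mesh:
    vertices: np.ndarray        # (N, 3) vertex positions
    triangles: np.ndarray       # (M, 3) vertex indices per triangle
    material_codes: np.ndarray  # (M,) one material code per triangle

def material_of_hit(mesh: Mesh, triangle_index: int, material_table: dict) -> dict:
    """Resolve the reflection properties of the triangle hit by a ray via a lookup table."""
    return material_table[int(mesh.material_codes[triangle_index])]
```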

mbencik999 commented 6 years ago

One more issue with the sensors is the receiver size and position. This is important for knowing whether a ray has hit the sensor or not. Otherwise the sensor is considered a point, and this point will never be hit; in turn, there will be no detection. So the question is: does OSI give the size of the sensor?

pmai commented 6 years ago

@mbencik999 For simple ray tracing (i.e. rays are traced from the receiver back to the TX point source, with a combination of specular and diffuse reflection), a sensor size is not really needed, since the TX side is taken as a point source (as is the receiver, for that matter). For more advanced approaches, 3D geometry might be needed. Please feel free to make suggestions for advanced data that a specific ray tracing approach needs as input from the sensor model...

pmai commented 6 years ago

How is a model implementing a LiDAR sensor supposed to generate its low-level point cloud with only GroundTruth input?

The ray tracing algorithm needs geometries, or more precisely meshes, to perform ray tracing; point clouds are tricky because they need to be meshed before the ray tracing. The thing with material properties is that they need to be assigned per triangle, or a material code is assigned and the properties are then gathered from a database or a lookup table. But there are even more variables that affect the LiDAR, like rain, fog, dust, multiple reflections from vegetation, and so on...

Which is why the current approach would leave the ray tracing itself in the environment simulation, which has access to this information (or can be enhanced to have that access), and only leave the post-processing of the ray tracing results to the sensor model. It is still possible to set up a sensor model that runs its own internal ray tracing engine, based on ground truth (instead of sensor view) input, but that model will then have to get all of this data in some other form, and will have to associate objects in the ground truth with proper geometries and materials, which, as described, is fairly difficult to do in practice.
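A sketch of that split, purely illustrative (the function and field names below are assumptions, not the actual OSI SensorView/SensorData messages): the environment simulation performs the ray tracing and hands per-ray reflections to the sensor model, which only post-processes them into detections.

```python
def sensor_model_step(reflections, detection_threshold_db=-90.0):
    """Post-process ray-tracing results (produced by the environment simulation)
    into detections; hypothetical sketch of the division of work described above."""
    detections = []
    for r in reflections:  # r: dict with 'azimuth', 'elevation', 'range', 'power_db', 'doppler_hz'
        if r["power_db"] >= detection_threshold_db:
            detections.append({
                "azimuth": r["azimuth"],
                "elevation": r["elevation"],
                "range": r["range"],
                "doppler_hz": r["doppler_hz"],
            })
    return detections
```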

mbencik999 commented 6 years ago

@pmai After reading your answer and some thinking, I realized that I made a mistake. My comments were in relation to the 3D ray tracer. As you have said, the 3D ray tracer needs the properties that I have mentioned, and my question was in the direction of realistic 3D ray tracing. The ray tracer will get the GroundTruth as input, all of the calculations will be done on this data, and that data will have to have some other form.

jdsika commented 4 years ago

@PhRosenberger also something that must be looked at with #367

PhRosenberger commented 4 years ago

@PhRosenberger also something that must be looked at with #367

Yes, should be solved there and then closed.