
[Feature] LGSVL Simulator Integration design. #421

Closed hakuturu583 closed 5 years ago

hakuturu583 commented 6 years ago

autowarefoundation/autoware_ai#343 Now, we have no good simulator for Autoware, but I think the LG SVL simulator (https://github.com/lgsvl/simulator) will be one of the best simulators for Autoware. So, I want to integrate this simulator with Autoware and enable autonomous driving simulation.

Design : Connection Interface with LGSVL Simulation.

(image: connection interface diagram)

($prefix) is the name of the simulated autonomous driving vehicle. ($sensor_name) is the name of a sensor on the simulated autonomous driving vehicle. We will add a new message package named autoware_simulation_msgs and make it possible to change the simulator configuration from Autoware. The low-level controller interface between Autoware and the simulated vehicle (autoware_msgs/VehicleStatus and autoware_msgs/VehicleCmd) is now under discussion in autowarefoundation/autoware_ai#396. We will use whichever vehicle interface is decided in autowarefoundation/autoware_ai#396.

Design : autoware_simulation_msgs

I will add new message types to configure the simulator through Autoware. The new message package autoware_simulation_msgs consists of these messages.

autoware_simulation_msgs/ConfigurationCmd

float32 time_of_day # 0.0~24.0
bool  freeze_time_of_day # default true
float32 fog_intensity # 0.0~1.0
float32 rain_intensity # 0.0~1.0
float32 road_wetness # 0.0~1.0
bool enable_traffic # default true
bool enable_pedestrian # default true
bool enable_high_quality_rendering # default true
uint16 traffic_density # default 500

autoware_simulation_msgs/SimulationStatus

Header header
float32 time_of_day # 0.0~24.0
bool  freeze_time_of_day 
float32 fog_intensity # 0.0~1.0
float32 rain_intensity # 0.0~1.0
float32 road_wetness # 0.0~1.0
bool enable_traffic 
bool enable_pedestrian
bool enable_high_quality_rendering
float32 frame_per_seconds
uint16 traffic_density
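
For example, a node on the Autoware side could drive these messages as in the sketch below. This is only a sketch: autoware_simulation_msgs is the package proposed above and does not exist yet, and the topic names /simulation/configuration_cmd and /simulation/status are placeholders chosen for illustration.

#!/usr/bin/env python
# Sketch only: autoware_simulation_msgs is the package proposed above,
# so these imports and topic names are hypothetical.
import rospy
from autoware_simulation_msgs.msg import ConfigurationCmd, SimulationStatus

def on_status(msg):
    # Log the state the simulator reports back.
    rospy.loginfo("sim time_of_day=%.1f fps=%.1f",
                  msg.time_of_day, msg.frame_per_seconds)

def main():
    rospy.init_node("simulation_configurator")
    pub = rospy.Publisher("/simulation/configuration_cmd",
                          ConfigurationCmd, queue_size=1, latch=True)
    rospy.Subscriber("/simulation/status", SimulationStatus, on_status)

    cmd = ConfigurationCmd()
    cmd.time_of_day = 12.0  # noon
    cmd.freeze_time_of_day = True
    cmd.fog_intensity = 0.0
    cmd.rain_intensity = 0.0
    cmd.road_wetness = 0.0
    cmd.enable_traffic = True
    cmd.enable_pedestrian = True
    cmd.enable_high_quality_rendering = True
    cmd.traffic_density = 500
    pub.publish(cmd)  # latched, so the simulator can pick it up late
    rospy.spin()

if __name__ == "__main__":
    main()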

Design : UIUX

(image: UIUX design mockup)

We will add an LGSVL Simulator button to the simulation tab. When the user clicks the button, a configuration window pops up. The user can check the local or remote checkbox. If local is checked, Autoware tries to find the simulator executable on the local machine and launch it; if Autoware fails to find the executable, it downloads one from the online repository. After that, Autoware launches rosbridge_server for the connection between Autoware and the LGSVL Simulator. If remote is checked, Autoware only launches rosbridge_server for the connection between Autoware and the LGSVL Simulator, and does not launch the LGSVL Simulator executable. The user should click the Ref button and enter the simulator setting file path. The setting file is written in yaml format and describes the simulation settings. For example, sample_setting.yaml:

initial_configuration:
  time_of_day : 12.0
  freeze_time_of_day  : true
  fog_intensity : 0.0
  rain_intensity : 0.0
  road_wetness : 0.0
  enable_traffic : true
  enable_pedestrian : true
  enable_high_quality_rendering : true
  traffic_density : 500
vehicles : 
 xe0 : 
   type : xe_riddged
   command_type : twist
   position : 
     x : 0
     y : 0
     z : 0
   orientation : 
     r : 0
     p : 0
     y : 0
 milee : 
   type : milee
   command_type : raw
   position : 
     x : 0
     y : 0
     z : 0
   orientation : 
     r : 0
     p : 0
     y : 0

command_type specifies which fields of the VehicleCmd message are used. Type raw means the accel/brake/gearshift/steer values in VehicleCmd are used; type twist means the twist value in VehicleCmd is used.
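
On the simulator bridge side, the dispatch could look roughly like the sketch below. The vehicle object and its setters are hypothetical, and the VehicleCmd field names assume the autoware_msgs layout that is still under discussion in autowarefoundation/autoware_ai#396.

# Sketch of the command_type dispatch described above. The vehicle object
# and its set_* methods are placeholders for the simulator's internals.
def apply_vehicle_cmd(vehicle, cmd, command_type):
    if command_type == "twist":
        # Use only the twist field of VehicleCmd.
        vehicle.set_velocity(cmd.twist_cmd.twist.linear.x,
                             cmd.twist_cmd.twist.angular.z)
    elif command_type == "raw":
        # Use the low-level actuator fields of VehicleCmd.
        vehicle.set_pedals(cmd.accel_cmd.accel, cmd.brake_cmd.brake)
        vehicle.set_steering(cmd.steer_cmd.steer)
        vehicle.set_gear(cmd.gear)
    else:
        raise ValueError("unknown command_type: %s" % command_type)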

dejanpan commented 6 years ago

@hakuturu583 LG has already done or is currently working on the integration of Autoware and LGSVL: https://github.com/lgsvl/Autoware. They also seem to be re-basing onto the latest Autoware releases: https://github.com/lgsvl/Autoware/commits/lgsvl_develop?after=39c8d199523743ebd32107316f9d119e62aa0d4d+34.

Why are you doing this here from scratch? I strongly recommend getting in touch with e.g. https://github.com/hadiTab and not reinventing things here. OK?

Couple of other remarks:

we have no good simulator for Autoware

Gazebo is a very good simulator; it is just that no one has had time to properly integrate it.

(https://github.com/lgsvl/simulator) will be one of the best simulators for Autoware

How did you conclude this? Did you compare e.g. Carla, Gazebo and LGSVL sim?

Autoware launches rosbridge_server

LGSVL seems to use rosbridge_suite.

hakuturu583 commented 6 years ago

@dejanpan We already had a face-to-face meeting with the LGSVL Autonomous Driving Vehicle Simulator development members, and we decided to support this simulator. They already have an interface with Autoware, but it is an experimental one and is not adjusted enough. So, I want to discuss and decide on a connection interface between Autoware and the Simulator.

I think Gazebo is also a good simulator and easy to integrate with ROS. However, Gazebo is not good at rendering images, so I think it is hard to use when you want to generate deep learning datasets or run image detection tests. I tested Carla and I also feel it is a good game engine based simulator with a ROS API. Its rendering is very beautiful and very easy to use. I think its features are close to the LGSVL Simulator's. So, I also want to support Carla in the future.

But, compared to the Carla developers, the LGSVL Simulator developers are very cooperative with us (we are also discussing on the Autoware developer slack), and that will help us develop a good simulator quickly. So, I think it is good for us to support the LGSVL Simulator first.

hakuturu583 commented 6 years ago

I am also taking care that the simulator interface is reusable. I think the same interface can be used with other simulators, which will be helpful for simulator and Autoware developers. If you have a better idea about the simulation interface with Autoware, please give us your opinions.

shinpei0208 commented 6 years ago

Autoware does not need to be exclusive to any particular simulator. At this moment, however, the LGSVL Simulator is well integrated with the Autoware interface. We are also interested in Gazebo, of course; I think we need a frame-rate improvement on Gazebo. Carla is UE4-based while the LGSVL Simulator is Unity-based. Again, we can of course use Carla - I know some people are already trying the Autoware interface with Carla.

Last week, LG people agreed that they would synchronize (integration tests, etc.) with the future version releases of Autoware. Thus we are working on a clean interface.

dejanpan commented 6 years ago

@hakuturu583 please link to LGSVL interface specification or paste it here and point out what exactly this means:

They already have an interface with Autoware, but it is an experimental one and is not adjusted enough. So, I want to discuss and decide on a connection interface between Autoware and the Simulator.

What does "not adjusted enough" mean? What is wrong with it?

Let's leave the discussion of other simulators aside and focus on the interface and integration of the LGSVL simulator here. Let's invite the LGSVL people here and let them comment on how exactly their Autoware interface looks at this moment.

Are you talking to https://github.com/hadiTab or any other people listed in this list: https://github.com/lgsvl/Autoware/commits/lgsvl_develop?

Before defining the interface we also need to define what we want to use the simulator for. This way we will know what data we need from the simulator and what data we need to feed into it. Couple of candidates:

  1. Testing of object detection using LiDAR sensor
  2. Testing of object tracking using LiDAR sensor
  3. Testing of prediction of other participants in the traffic (cars, pedestrians, cyclists, ...)
  4. Testing of motion planning and controls algorithms
  5. Testing of object detection using camera sensor
  6. Generation of data for training of deep learning-based algorithms

The above list is in order of my preference.

Furthermore, we also must consider how we plan to use the simulator:

  1. as a full GUI application on developers' computers?
  2. in a headless mode on a CI server (in this case we will have to properly integrate in Gitlab CI)?
  3. ...
hakuturu583 commented 6 years ago

@dejanpan Their simulator communicates with ROS by using rosbridge_server, and it has the following topic APIs.

(image: lgsvl_topics)

I found the following problems in this interface.

  1. It only has GPS/Lidar/Camera sensor data topics coming from the simulator, and I think that is not enough for Autoware. Autoware can use odometry in localization.
  2. It receives the vehicle command, but it can only read the twist_command field. So, for now, we can't send commands such as changing the gearshift or setting the steering angle through the ROS API for Autoware.

I want to list up these kinds of things and add the necessary APIs.
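
For item 1, what I have in mind is a standard nav_msgs/Odometry topic from the simulator. A minimal rospy sketch is below; the topic name and the get_vehicle_state() accessor are placeholders, not part of the existing LGSVL interface, and a ROS node is assumed to be initialized.

import rospy
from nav_msgs.msg import Odometry

def publish_odometry(prefix, get_vehicle_state):
    # get_vehicle_state() is a placeholder for the simulator's internal
    # state accessor; it should return a geometry_msgs Pose and Twist.
    pub = rospy.Publisher("/%s/odom" % prefix, Odometry, queue_size=10)
    rate = rospy.Rate(50)  # 50 Hz is a typical odometry rate
    while not rospy.is_shutdown():
        pose, twist = get_vehicle_state()
        odom = Odometry()
        odom.header.stamp = rospy.Time.now()
        odom.header.frame_id = "odom"
        odom.child_frame_id = "base_link"
        odom.pose.pose = pose
        odom.twist.twist = twist
        pub.publish(odom)
        rate.sleep()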

I am also talking to the LGSVL simulator developers on the Autoware developer slack and have asked them to post comments on this issue.

Are you talking to https://github.com/hadiTab or any other people listed in this list: https://github.com/lgsvl/Autoware/commits/lgsvl_develop?

I also want a no-GUI mode... I will send a request to the LG people.

Furthermore, we also must consider how we plan to use the simulator: as a full GUI application on developers' computers? in a headless mode on a CI server (in this case we will have to properly integrate in Gitlab CI)? ...

@cirpue49 Could you list up the situations and purposes which you want to simulate?

Before defining the interface we also need to define what we want to use the simulator for. This way we will know what data we need from the simulator and what data we need to feed into it. Couple of candidates: testing of object detection using LiDAR sensor; testing of object tracking using LiDAR sensor; testing of prediction of other participants in the traffic (cars, pedestrians, cyclists, ...); testing of motion planning and controls algorithms; testing of object detection using camera sensor; generation of data for training of deep learning-based algorithms

k0suke-murakami commented 6 years ago

Before defining the interface we also need to define what we want to use the simulator for. This way we will know what data we need from the simulator and what data we need to feed into it. Couple of candidates: testing of object detection using LiDAR sensor; testing of object tracking using LiDAR sensor; testing of prediction of other participants in the traffic (cars, pedestrians, cyclists, ...); testing of motion planning and controls algorithms; testing of object detection using camera sensor; generation of data for training of deep learning-based algorithms

Purpose: Test tracking and prediction. And generate training data for CNNs.

Data we need from the simulator:

  1. sensor_msgs::PointCloud2: raw pointcloud data
  2. autoware_msgs::DetectedObject: for each object/obstacle, a geometry_msgs::Pose, a geometry_msgs::Vector3 for the bounding box dimensions, a geometry_msgs::Vector3 for velocity, and a sensor_msgs::PointCloud2 for its pointcloud

hakuturu583 commented 6 years ago

@cirpue49 I think the ground truth data is too large, so I think it will be better to log the ground truth data in a rosbag file and not publish it to the ROS network. Do you agree with this design?

k0suke-murakami commented 6 years ago

@hakuturu583 Oh, I missed that part. Is there any reason we could not get ground truth info from the simulator directly?

One of the advantages of using a simulator is to test planning algorithms in a dynamic situation. In the case of testing the planning module, it is inconvenient to only have access to the raw sensor data.

For testing planning modules independently from perception modules, we need ground truth information for obstacles directly from the simulator.

What do you think?

And for the purpose of testing the planning module, we might need tf information as well.

mitsudome-r commented 6 years ago

@hakuturu583 @cirpue49

I think ground truth data is too large

For the planning module, we don't need ground truth data for all the objects. If we have ground truth autoware_msgs::DetectedObject just for people and vehicles (i.e. moving obstacles), that would be very useful to test the planning module.

zelenkovsky commented 6 years ago

Hello All,

We are very pleased to hear that our Simulator is shaping up to be very useful for Autoware developers. @hakuturu583 thanks for the proposal; I'd like to reassure you that we are fully committed to supporting it.

There are a few points I'd like to suggest. Let's separate the integration into 3 different issues:

1) Basic integration: will include a launch Simulator button, static yml file configuration for the simulator, odometry messages and other required information.
2) Advanced integration: will include dynamic configuration for weather and other parameters.
3) Training with Simulator: ground truth collection and extra topics publishing.

Since we are time constrained, I would suggest making sure we complete 1) before the next release in December and proceeding with 2) and 3) during the subsequent release.

@hakuturu583 What do you think? Do you mind splitting this issue into 3?

zelenkovsky commented 6 years ago

We can keep this one, but move the requirements for dynamic Simulator configuration and ground truth collection into another 2 issues.

hakuturu583 commented 6 years ago

@cirpue49 Thanks! I also think it is necessary for us to publish ground truth in real time.

So, I want to add a new field and solve this problem.

float32 time_of_day # 0.0~24.0
bool  freeze_time_of_day # default true
float32 fog_intensity # 0.0~1.0
float32 rain_intensity # 0.0~1.0
float32 road_wetness # 0.0~1.0
bool enable_traffic # default true
bool enable_pedestrian # default true
bool enable_high_quality_rendering # default true
uint16 traffic_density # default 500
bool publish_ground_truth #default false

If the publish_ground_truth field is true, the simulator publishes all of the ground truth data that would otherwise only be recorded in the rosbag file.

(image: updated connection diagram)

hakuturu583 commented 6 years ago

@mitsudome-r I think this simulator will be used for more than just planning. So, it is important for us to publish or log as much data as possible.

hakuturu583 commented 6 years ago

@zelenkovsky OK!! I split off the issues about Ground Truth collection.

hadiTab commented 6 years ago

@dejanpan We have been in touch with the Autoware team and have been collecting requirements and developing features required for this integration. We consider the simulator to be a work in progress and appreciate all the feedback we can get, which will be taken into account for future development.

At the moment the simulator runs as a GUI application, either on the developer's machine or on another machine. Moving forward we may add other modes of operation based on requirements. We understand that it would be useful to be able to test individual modules with the simulator, and supporting this is on our road-map (i.e. providing ground truth data). I believe many of the requirements you have proposed fall into this category.

As for the Autoware fork we are maintaining on github, it includes modifications intended to facilitate the use of Autoware with the simulator. With the integration proposed here users should be able to use the simulator with the official Autoware repository.

zelenkovsky commented 6 years ago

@hakuturu583 Thanks. I'll keep commenting in the other 2 issues after the split. Just my last 2 cents regarding Ground Truth.

@cirpue49 Thanks! I also think it is necessary for us to publish ground truth in real time. So, I want to add a new field and solve this problem.

I would suggest that we always publish ground truth in Training Mode via ROS messages and prepare a separate script for developers to collect and synchronize the messages. We already have a similar script, and we successfully used it to complete End-To-End Navigation training using the camera and steering wheel. But let's discuss details after the issues are split; there are a few other good ideas.

hakuturu583 commented 6 years ago

@zelenkovsky I changed the message namespace a little. I forgot that the prefix is necessary for the ObjectBbox2D message. (image: updated message diagram)

dejanpan commented 6 years ago

@zelenkovsky @cirpue49 @hadiTab @hakuturu583 thanks for your inputs, this is exactly what I was hoping for. Fruitful discussions like this have happened far too rarely in Autoware.

I will first reply to each of you and then add an item on my own.


@hakuturu583: By rosbridge_server you mean: http://wiki.ros.org/rosbridge_server?

I think the ground truth data is too large, so I think it will be better to log the ground truth data in a rosbag file and not publish it to the ROS network.

I am not exactly sure what you mean here, but the way rosbag record works in ROS 1 is that it subscribes to rostopics and then stores the data onto the hard drive. So the data still goes through TCP or UDP.

It is another thing if you use the rosbag API, but I do not think you are referring to that, right?


@cirpue49

Purpose: Test tracking and prediction. And generate training data for CNNs.

Do you mean that you would like to:

  1. test tracking and prediction of your algorithms?
  2. generate data for training of CNNs?

Is this any different from the items in my list?

One of the advantages of using a simulator is to test planning algorithms in a dynamic situation. In the case of testing the planning module, it is inconvenient to only have access to the raw sensor data. For testing planning modules independently from perception modules, we need ground truth information for obstacles directly from the simulator.

I think that this is a great idea. I would suggest opening another ticket and describing exactly:

  1. Which motion planning algorithms do you plan to test
  2. Which nodes are involved
  3. Which data you need from and into the simulator (e.g. object list with ego motions, free space, ...)
  4. What kind of world in the simulator you need
  5. What kind of scenarios do you want (e.g. your car driving around the parked car on the lane, pedestrians crossing the street, ...) and how do you want them randomized
  6. What kind of metrics do you want to test against (cross track error, number of collisions, distance to other objects, ...)

@zelenkovsky regarding your comment: I would like to see a bigger picture, for instance something like this: http://gazebosim.org/tutorials?tut=ros_overview&cat=connect_ros. Gazebo is the best-integrated ROS simulator and many robots are supported. It has matured for > 10 years to arrive at this architecture.

There is also no shame or hard feelings if we just copy Gazebo design.

As you can see there is:

  1. stand alone simulator Gazebo
  2. gazebo_ros package with an API and plugins that can start/stop/pause the simulator and provide basic topics (clock, link and model states), services (start/stop/pause) and parameters (whether to run in sim clock mode or not). It also provides various worlds.
  3. gazebo_msgs
  4. gazebo_plugins (for sensor data, dynamics and dynamic reconfigure)

Gazebo also has a complete documentation which with the architecture above makes it really easy to add/remove things to the integration.

With that in mind, your plan was:

  1. Basic integration: will include launch Simulator button, static yml file configuration for simulator, odometry messages and other required information.

  2. Advanced integration: will include dynamic configuration for weather and other parameters.

  3. Training with Simulator: Ground truth collection and extra topics publishing.

I would propose the following tweak:

  1. Comment on whether the above is possible and create an architectural diagram (ala https://bitbucket.org/osrf/gazebo_tutorials/raw/default/ros_overview/figs/775px-Gazebo_ros_api.png) (this ticket)
  2. Copy and adjust https://github.com/ros-simulation/gazebo_ros_pkgs (separate ticket)
  3. Basic integration: will include launch Simulator button, static yml file configuration for simulator, odometry messages and other required information (separate ticket)
  4. Advanced integration: will include dynamic configuration for weather and other parameters (separate ticket)
  5. Training with Simulator: Ground truth collection and extra topics publishing (separate ticket)

In further tickets we could then proceed to support the various other use cases listed in https://github.com/CPFL/Autoware/issues/1724#issuecomment-441739827.

dejanpan commented 6 years ago

@zelenkovsky @hadiTab I have also 2 more questions:

  1. In what language do you guys model your worlds and cars? Gazebo is using http://gazebosim.org/tutorials?tut=ros_urdf&cat=connect_ros for both.

  2. How do you guys publish tf information (wheel position, map-base_link transform)?

hakuturu583 commented 6 years ago

@dejanpan

I am not exactly sure what you mean here, but the way rosbag record works in ROS 1 is that it subscribes to rostopics and then stores the data onto the hard drive. So the data still goes through TCP or UDP. It is another thing if you use the rosbag API, but I do not think you are referring to that, right?

Yes, you are right. I also think the ground truth data will be too large for the rosbridge server. So, I set the default value of the publish_ground_truth field to false.

float32 time_of_day # 0.0~24.0
bool  freeze_time_of_day # default true
float32 fog_intensity # 0.0~1.0
float32 rain_intensity # 0.0~1.0
float32 road_wetness # 0.0~1.0
bool enable_traffic # default true
bool enable_pedestrian # default true
bool enable_high_quality_rendering # default true
uint16 traffic_density # default 500
bool publish_ground_truth #default false

I think when users want to use ground truth while running the simulator, they can turn on this parameter.

When logging ground truth data to a rosbag, I think the rosbag API can be useful.
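
For example, the simulator bridge could write ground truth straight to disk with the rosbag API, so the large data never crosses the ROS network. A minimal sketch, where get_ground_truth_messages() is a placeholder for the bridge's internal data source:

import rosbag

# bag.write() serializes messages directly to disk; no publisher/subscriber
# pair (and hence no TCP/UDP hop) is involved.
bag = rosbag.Bag("ground_truth.bag", "w")
try:
    for topic, msg, stamp in get_ground_truth_messages():
        bag.write(topic, msg, stamp)
finally:
    bag.close()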

zelenkovsky commented 6 years ago

@dejanpan I agree an architecture diagram would be very helpful. I'll work on that and publish it in our GitHub repo. I'll provide a link afterwards. Regarding the rest of the questions, let me tackle them one by one:

1) There is no argument that Gazebo has a long history and a lot of good ideas to borrow. We definitely should consider that while making architectural decisions. Gazebo is a general purpose robot simulator with full ROS support and good ROS integration. On the contrary, the LGSVL Simulator is not a general robot simulator, and we are not trying to be a general robotics simulator. We are trying to be an Autonomous Vehicle simulator with simple and clear goals: make automotive platform developers' lives easier, support the most popular open source platforms, and be an out-of-the-box solution. We expect that we do the hard work of integration and world creation, and provide a set of sensors available on the market for developers to choose from. We also want to allow developers to collect data for training and create test case scenarios. That's the goal of the project so far. I'd like to emphasize that we would like to support several autonomous vehicle platforms, even though Autoware is among our favorites.

2) Taking all of the above into account, let's continue. We do not link against ROS or ROS2 libraries directly, but rather use ROSBridge to communicate with the AD system. That gives a lot of flexibility: we can support ROS1, ROS1+Protobuf and ROS2 almost without any effort on the Simulator side. We can run the Simulation on Windows, which provides performance and visual benefits, since the Unity3D engine is better optimized for NVidia drivers on Windows than on Linux.

3) We definitely want to support dynamic simulator configuration and control (weather, start/stop, etc.) from the AD side, but I suggest we think about that interface a little bit more. Since our goal is to be AD independent, maybe this interface should not even be based on ROS. It could be an HTTP REST API or anything else. I'll keep it as an open question for now.

4) Data collection and training: this feature is one of the important tasks for an AD developer, and the environment for data collection is usually a bit different from that for plain simulation. Data collection can be done slower or faster than real time; it does not affect the quality of the data. Usually the developer does not run the AD stack while collecting the data, or runs it in a simplified way. At the same time, the Simulator should produce a lot of extra information that is usually not generated during an ordinary run. So we are planning to have a separate Training Mode for the Simulator, but have not decided yet how it should look. Right now we already export semantically segmented images, images from a depth camera and a few other things. We want to export the data in a platform independent way, plus it could be more convenient for developers to use existing tools to record and manipulate ROS bags. This is why we suggest using the rosbag command line and further processing the ROS bag with our set of scripts to do message association (synchronization).

5) We model our environment in Unity3D and use C# and C++ for low-level code. We create vehicle models in Maya or other 3D design tools.

6) Yes, we publish the tf map to base_link transform. Actually, you can press F1 during simulation to see the list of topics we publish and subscribe to. Each vehicle has its own unique set of sensors and control topics. We assume (at least for now) that a developer who wants to modify a vehicle should open our Simulator code in the Unity3D editor and change the configuration of the vehicle prefab. It is quite easy, plus the Unity Editor is very good for visualizing sensor positions and so on.

So far I believe I have tackled most of the questions; please let me know if there is anything missing ;)

mitsudome-r commented 6 years ago

@hakuturu583 The /clock topic seems to be published from Autoware to the simulator. Isn't this the other way around?

hakuturu583 commented 6 years ago

@mitsudome-r

The /clock topic seems to be published from Autoware to the simulator. Isn't this the other way around?

It was my mistake; the correct connection diagram is here. (image: corrected connection diagram)

dejanpan commented 6 years ago

@zelenkovsky thanks for the reply.

I will provide a very long answer. Before I do so, can you just confirm that when you talk about ROSBridge you mean https://github.com/RobotWebTools/rosbridge_suite/tree/develop/rosbridge_server ?

martins-mozeiko commented 6 years ago

That's correct. That is the original rosbridge for ROS1 that we use in our LGSVL simulator to connect to Autoware.

Note that we recommend using our forked version here: https://github.com/lgsvl/rosbridge_suite. It has a few modifications to improve performance. But the original should also work fine.

Additionally, our fork has extra modifications to be able to talk to Apollo over ROS. Apollo uses a modified ROS1, where they added support for protobuf as message serialization. We implemented some minimal support for protobuf message serialization and deserialization in the rosbridge_suite python code. This is not needed for Autoware.

And if you are interested in ROS2, then you need to use the following implementation: https://github.com/RobotWebTools/ros2-web-bridge. It does the same things as rosbridge_suite - it supports exactly the same JSON protocol over websocket to the outside. It is implemented in Node.js, not Python.
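
For reference, the wire protocol in both cases is plain JSON over a websocket. A minimal client sketch using the websocket-client Python package (the topic and type are just examples):

import json
import websocket  # pip install websocket-client

ws = websocket.create_connection("ws://localhost:9090")

# Ask the bridge to subscribe us to a topic coming from the simulator.
ws.send(json.dumps({
    "op": "subscribe",
    "topic": "/vehicle_status",
    "type": "autoware_msgs/VehicleStatus",
}))

# Each incoming frame is a JSON document with the message under "msg".
frame = json.loads(ws.recv())
print(frame["msg"])
ws.close()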

dejanpan commented 6 years ago

@zelenkovsky @martins-mozeiko my reply to above 2 comments from you.

Let me just preface all of this by saying that I do not want to suggest what and how you should do things in the LGSVL simulator, but I read some things in your comments which do not fit my understanding of robotics and AD, which I have gathered over the last 15 years.

I am also not advocating for Gazebo; I am just comparing to what I know best.

Gazebo is a general purpose robot simulator with full ROS support and good ROS integration. On the contrary, the LGSVL Simulator is not a general robot simulator, and we are not trying to be a general robotics simulator. We are trying to be an Autonomous Vehicle simulator with simple and clear goals: make automotive platform developers' lives easier, support the most popular open source platforms, and be an out-of-the-box solution. We expect that we do the hard work of integration and world creation, and provide a set of sensors available on the market for developers to choose from.

In my understanding, an autonomous car is a robot: it has many sensors, it has actuator(s), it has a set of ECUs and it needs to do the sense-plan-act job. A simulator that supports such a use case must, in my POV, have these components:

  1. dynamics and kinematics simulation
  2. 3D graphics for rendering
  3. sensor models
  4. robot (car) models
  5. middleware for internal message passing
  6. some tools for e.g. editing of worlds and introspection of processes
  7. IO to external frameworks such as ROS
  8. mechanism for extensibility (e.g. for adding a new car or a new sensor model)

Do you agree with this? If yes, in what way does the LGSVL simulator then specialize to be the Autonomous Vehicle simulator?

  1. sensor models for sensors used in AD?
  2. world models that include roads (cities, highways)?
  3. world models that are generated from (or consistent with) AD maps? ...

Gazebo supports 1 and 2 above but AFAIK not 3.

We do not link against ROS or ROS2 libraries directly, but rather use ROSBridge to communicate with AD system.

Gazebo also does not link to either ROS 1 or ROS 2 directly. It does so over https://github.com/ros-simulation/gazebo_ros_pkgs. In the case of https://github.com/lgsvl/rosbridge_suite, the rosbridge_suite links to either ROS 1 or ROS 2.
So either way, you cannot avoid linking against the framework you are integrating into at some point. But you do not link against the simulator itself.

The other problem that I have with rosbridge_suite is that it does JSON string serialization/de-serialization. I have never seen that be performant enough for large data, e.g. see also Table 1 here.

I have not used rosbridge_suite in several years, but I would be curious if it supports this throughput (typical for an AD car), which ROS 2 natively does:

Large Data:

  1. 3 Topics with 1.6 MB size @ 30Hz, 1 pub to 1 sub for each
  2. 1 Topic with 6 MB size @ 10 Hz, 1 pub to 1 sub
  3. 1 Topic with 0.5 MB size @ 120Hz, 1 pub to 1 sub
  4. 5 Topics with 0.25 MB size @ 40Hz, 1 pub to 1 sub

Small Data:

  1. 100 Topics with 256 Byte size @ 100Hz, 1 publisher to 50 subscribers
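
For a rough sense of scale, the large-data set alone amounts to 3 × 1.6 MB × 30 Hz + 6 MB × 10 Hz + 0.5 MB × 120 Hz + 5 × 0.25 MB × 40 Hz = 144 + 60 + 60 + 50 = 314 MB/s, i.e. roughly 2.5 Gbit/s before any JSON encoding overhead.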

We can run the Simulation on Windows, which provides performance and visual benefits, since the Unity3D engine is better optimized for NVidia drivers on Windows than on Linux.

ROS 2 now also runs on Windows, so at least for the LGSVL Simulator integration into https://gitlab.com/AutowareAuto/AutowareAuto we could use a clone of https://github.com/ros-simulation/gazebo_ros_pkgs/tree/ros2.

Since our goal is to be AD independent, maybe this interface should not even be based on ROS. It could be an HTTP REST API or anything else.

To be honest, 80% of the industry is using ROS, and rightfully so, since it is the best framework overall. For Autoware.ai and Autoware.Auto there is no need to think about non-ROS solutions. Hence my proposal to replicate the gazebo_ros project stands.

We want to export the data in a platform independent way, plus it could be more convenient for developers to use existing tools to record and manipulate ROS bags. This is why we suggest using the rosbag command line and further processing the ROS bag with our set of scripts to do message association (synchronization).

What is your definition of a platform? OS, Schema Language, Data Encoding, Backend Storage?

In any case rosbag is not platform independent; however, for Autoware it is the preferred and widely accepted option. There is also a plethora of tools for rosbag manipulation and viewing. rosbag2 will also bring significant improvements.

We create vehicle models in Maya or other 3D design tools.

I am sorry, I should've been more specific. What I mean by a model is a format where you describe the car's kinematic and dynamic properties. In ROS, URDF or SDF is used.

Also, where or how do you specify coordinate transforms for sensors, wheel joints, map and base_link? In ROS, again, URDF or SDF is used. This then serves as input to robot_state_publisher, which gets the tf transforms onto the tf topic.
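
For illustration, this is the kind of thing that ends up on tf. The sketch below broadcasts one fixed base_link to velodyne transform directly with tf2_ros instead of deriving it from a URDF via robot_state_publisher; the frame names and offsets are made up.

import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node("sensor_tf_example")
broadcaster = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = "base_link"
t.child_frame_id = "velodyne"
t.transform.translation.x = 1.2  # sensor 1.2 m ahead of base_link (example)
t.transform.translation.z = 2.0  # and 2.0 m up (example)
t.transform.rotation.w = 1.0     # identity rotation

broadcaster.sendTransform(t)
rospy.spin()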

To recap I think that the most important features for the integration into Autoware (ROS) are:

  1. we can transport at least the above-mentioned amount of data from the simulator into our algorithms
  2. it should be easy to define car model (kinematics, dynamics)
  3. it should be easy to change coordinate transforms
  4. we should be able to use standard ROS messages
  5. it should be easy to write new sensor models
  6. it should be possible to start/stop/pause the simulator as well as start it headlessly
andytolst commented 5 years ago

@hakuturu583 I'm changing the parameters file (sample_setting.yaml) to better align with the recent changes in the Simulator (we are separating sensors for different vehicles).

So it will be something like this:

initial_configuration:
  map : SanFrancisco
  time_of_day : 17.0
  freeze_time_of_day  : true
  fog_intensity : 0.1
  rain_intensity : 0.6
  road_wetness : 0.5
  enable_traffic : true
  enable_pedestrian : true
  traffic_density : 300
vehicles : 
  - type : XE_Rigged-autoware
    address : localhost
    port : 9090
    command_type : twist  # not supported yet
    enable_lidar : true
    enable_gps : true
    enable_main_camera : true
    enable_high_quality_rendering : true
    position :  # not supported yet
      x : 0.0
      y : 0.0
      z : 0.0
    orientation :  # not supported yet
      x : 0.0
      y : 0.0
      z : 0.0
  - type : XE_Rigged-apollo
    address : localhost
    port : 9091
    command_type : twist  # not supported yet
    enable_lidar : false
    enable_gps : false
    enable_main_camera : false
    enable_high_quality_rendering : false
    position :  # not supported yet
      x : 0.0
      y : 0.0
      z : 0.0
    orientation :  # not supported yet
      x : 0.0
      y : 0.0
      z : 0.0

I will also place a sample config file in the root of the simulator source on GitHub.

The Simulator should be launched like this:

./simulator --config autoware/ros/src/.config/simulator/static_config_sample.yaml
hakuturu583 commented 5 years ago

@andytolst Thanks!! So currently your simulator cannot set the initial position of the Autonomous Driving Vehicle?

andytolst commented 5 years ago

@hakuturu583 We are still discussing what's the best way of providing the initial position. Setting an arbitrary x,y in the virtual world coordinate system is easy, but it could easily lead to placing the car inside a building, and there is no easy way to know the coordinates without launching the Unity editor.

UTM or lat/lon makes more sense IMO, but it has its own complications.

hakuturu583 commented 5 years ago

@andytolst I think it is better to spawn the Autonomous Driving Vehicle without opening the Unity Editor. It is very easy to convert the car position in Autoware to a UTM pose. Is it difficult for your simulator to spawn the Autonomous Driving Vehicle through a ROS API or parameters?

andytolst commented 5 years ago

@hakuturu583 It should not be very hard; the Unity coordinate system is in meters, so it's just a matter of adding/subtracting a proper offset for UTM coordinates. I'll try that.

Spawning the Vehicle dynamically through a ROS API (or some other API) should be possible, but this must be handled as a separate task.
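
The offset idea could look like the sketch below. The origin values are made up for illustration; each map would define its own UTM origin, and the axis mapping is only one plausible convention for Unity's left-handed, Y-up frame.

# Both UTM and Unity coordinates are metric, so conversion is a plain offset.
MAP_ORIGIN_NORTHING = 4140000.0  # hypothetical per-map constants
MAP_ORIGIN_EASTING = 590000.0

def utm_to_unity(northing, easting, height):
    # Map easting -> x, height -> y, northing -> z (one possible convention).
    return (easting - MAP_ORIGIN_EASTING,
            height,
            northing - MAP_ORIGIN_NORTHING)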

andytolst commented 5 years ago

@hakuturu583 Here is the final version; it will be available in the upcoming release:

initial_configuration:
  map : SanFrancisco
  time_of_day : 17.0
  freeze_time_of_day  : true
  fog_intensity : 0.1
  rain_intensity : 0.6
  road_wetness : 0.5
  enable_traffic : true
  enable_pedestrian : true
  traffic_density : 300
vehicles : 
  - type : XE_Rigged-autoware
    address : localhost
    port : 9090
    command_type : twist  # not supported yet
    enable_lidar : true
    enable_gps : true
    enable_main_camera : true
    enable_high_quality_rendering : true
    position : 
      n : 4140310.4   # Northing, Easting in UTM coordinates + Height
      e : 590681.5    # Set to 0.0 to use the default position
      h : 10.1
    orientation : 
      r : 0.0
      p : 0.0
      y : 269.9
  - type : XE_Rigged-apollo
    address : localhost
    port : 9091
    command_type : twist  # not supported yet
    enable_lidar : false
    enable_gps : false
    enable_main_camera : false
    enable_high_quality_rendering : false
    position : 
      n : 4182779
      e : 52880
      h : 10.1
    orientation : 
      r : 0.0
      p : 0.0
      y : 269.9