carla-simulator / scenario_runner

Traffic scenario definition and execution engine
https://carla-scenariorunner.readthedocs.io/en/latest/
MIT License

Clarify the Usage when Performance Testing with Scenarios #843

Closed Bonifatius94 closed 2 years ago

Bonifatius94 commented 2 years ago

Describe your question or idea

I'm really impressed that there's such a nice component in CARLA for specifying automated test scenarios. However, as I'm trying to implement self-driving car software, it's not clear to me how I can use the scenario runner for sophisticated performance testing and software quality measurements.

As I understand it, the scenario runner spawns vehicles and pedestrians, defines scenario termination conditions, and so on. But how can I create the specific car I'm developing my self-driving software for? And how can I have that car spawn with all my sensors? I haven't seen any sensors on the cars that spawn in your example scenarios.

How is this supposed to work? And more importantly, what is the point of using your scenario runner if it doesn't support the main use case, which is spawning a specific car with sensors and letting it be remote-controlled by some self-driving car software? (I hope I'm not missing something and getting things completely wrong; sorry if so.)

Expected behavior

I'd like a clear example of how your software is supposed to be used by someone who's developing self-driving car software. In particular, I'd expect an example of how to spawn a specific ego vehicle with specific sensors. The current scenario runner documentation reads more like a tutorial on how to play GTA with the CARLA simulator (which is cool, don't get me wrong), but it's not particularly useful for developing self-driving car software.


Context: I'm working on a self-driving car software project as part of a seminar at the University of Augsburg. We're using CARLA + ROS (everything dockerized, CI/CD pipelines, ROS microservices, infrastructure-as-code, etc).

afterimagex commented 2 years ago

Maybe you can try to write your own agent.

Bonifatius94 commented 2 years ago

Thanks for replying πŸ˜„ Ehm yeah, I'm pretty shocked that the CARLA community missed out on this really obvious use case πŸ€”

You know, I need this feature really badly, so I'll give it a try anyways. I'll let you know about my progress πŸ˜‰πŸ‘¨πŸ»β€πŸ’»

Brandeyy commented 2 years ago

I feel like most of the people who use CARLA know how to do this themselves; that's why there's no specific tutorial for it. I'm struggling with the same issue and have already opened an issue on this as well.

I think you have to modify the manual_control script or one of the agent scripts that control the cars in a scenario, but I'm lacking the skills to do that myself.

So if you make any progress with your problem, I would be really grateful if you could share it in this issue.

Bonifatius94 commented 2 years ago

Having had a closer look at the code, I can say that the scenario runner does a lot of things it shouldn't. It needs some refactoring to address some vicious coupling.

@afterimagex I don't think implementing an agent deriving from AutonomousAgent is a good solution. In my case, I neither need the scenario runner to provide me with car sensor information, nor do I want to control a vehicle with it. I'm running the CARLA ROS bridge with Ackermann control, so I just want to spawn a car with my specific sensors and remote-control it via ROS with Ackermann commands. For my use case, the scenario runner should just launch everything and observe what my car is doing, and for that I don't need an agent. This requires some refactoring of the scenario runner itself so that it stops doing all this weird, tightly coupled stuff.
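To make the use case concrete, here's a minimal sketch of what I mean by Ackermann remote control (assuming the carla_ackermann_control node from the ROS bridge is running and the ego vehicle was spawned with the default role name ego_vehicle; node name and target values are arbitrary placeholders):

```python
# Minimal sketch: steer the ego via the ROS bridge's Ackermann interface.
import rospy
from ackermann_msgs.msg import AckermannDrive

rospy.init_node("ackermann_demo")  # hypothetical node name
pub = rospy.Publisher("/carla/ego_vehicle/ackermann_cmd",
                      AckermannDrive, queue_size=1)

rate = rospy.Rate(10)  # publish at 10 Hz
while not rospy.is_shutdown():
    msg = AckermannDrive()
    msg.speed = 5.0           # target speed in m/s
    msg.steering_angle = 0.0  # target steering angle in rad
    pub.publish(msg)
    rate.sleep()
```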

@Brandeyy I'm open sourcing my project in a couple of months, so you'll have a blueprint if you can wait. (I'm not sure if having a proper scenario runner is worth all this work in the context of my project, but if I do put the work in, I'll let you know)

fabianoboril commented 2 years ago

Hi @Bonifatius94:

thanks for bringing up this question. In fact, with SR this can be "easily" achieved. Yet, depending on which scenario definition you use, you have to follow different paths.

First option, as you use the ROS bridge already: https://carla.readthedocs.io/projects/ros-bridge/en/latest/carla_ros_scenario_runner/ There is an integration within the ROS bridge to spawn an ego vehicle that is controlled through the ROS bridge. The code is located here: https://github.com/carla-simulator/ros-bridge/tree/master/carla_ros_scenario_runner You may modify the underlying controller code to attach sensors to the CARLA actor.
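Attaching a sensor to an existing actor only takes a few lines of the plain CARLA Python API. As an illustration (this snippet is not taken from the ros-bridge code; the role name "hero" and the camera mount point are assumptions):

```python
# Illustrative sketch: attach an RGB camera to an already-spawned ego vehicle.
import carla

client = carla.Client("localhost", 2000)
world = client.get_world()

# Find the ego vehicle by its role_name attribute (assumed to be "hero").
ego = next(
    actor for actor in world.get_actors().filter("vehicle.*")
    if actor.attributes.get("role_name") == "hero")

# Spawn a camera rigidly attached to the ego vehicle.
camera_bp = world.get_blueprint_library().find("sensor.camera.rgb")
mount = carla.Transform(carla.Location(x=1.5, z=2.4))  # roughly windshield height
camera = world.spawn_actor(camera_bp, mount, attach_to=ego)
camera.listen(lambda image: image.save_to_disk("_out/%06d.png" % image.frame))
```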

The second option, if you want to use SR and OpenSCENARIO: here you can find an example controller: https://github.com/carla-simulator/scenario_runner/blob/master/srunner/scenariomanager/actorcontrols/simple_vehicle_control.py You can use this as a blueprint to understand how a vehicle can be controlled and how sensors (here, cameras) can be attached. To use such a controller, you only need to update a few lines in the XOSC file: https://github.com/carla-simulator/scenario_runner/blob/master/srunner/examples/OscControllerExample.xosc
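A rough sketch of what such a custom controller could look like, modeled on simple_vehicle_control.py (the exact base-class signatures may differ between SR versions, so treat this as pseudocode rather than a drop-in file):

```python
# Sketch of a custom SR actor controller that attaches a camera on startup
# and applies a control command on every tick.
import carla

from srunner.scenariomanager.actorcontrols.basic_control import BasicControl


class MyEgoControl(BasicControl):

    def __init__(self, actor, args=None):
        super(MyEgoControl, self).__init__(actor)
        world = actor.get_world()
        camera_bp = world.get_blueprint_library().find("sensor.camera.rgb")
        self._camera = world.spawn_actor(
            camera_bp, carla.Transform(carla.Location(x=1.5, z=2.4)),
            attach_to=actor)

    def reset(self):
        # Clean up the attached sensor when the scenario ends.
        if self._camera is not None:
            self._camera.destroy()
            self._camera = None

    def run_step(self):
        # Called once per tick: apply whatever control your stack computes.
        self._actor.apply_control(carla.VehicleControl(throttle=0.3))
```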

Bonifatius94 commented 2 years ago

Hi @fabianoboril, thank you very much for replying.

I've seen you post this exact same message to other people asking the same question. I do appreciate what you write, but unfortunately I just cannot fit the puzzle together.

For me, it's about a concrete example configuration that actually works. I'm sorry, but I cannot figure out how to do this just from your message. I believe you that it's possible, but I'm not willing to spend another week on getting the scenario runner to work. I need to work on the actual car software as well ^^

Let's please spend some time on a dummy script / sufficiently detailed example that spins all components up, like in an integration test. This would help you improve your release pipelines as well, because you could determine whether your components work together. I've seen you're doing some mock-ups for testing recently, but wouldn't it be even better to test against the real components? πŸ˜‰

You know, I'd consider myself a technical person, and even I couldn't figure it out after working on this for an entire week. Doesn't seem to be "easy" to me πŸ˜…

Best wishes, Marco

fpasch commented 2 years ago

As you're using ROS and therefore most likely the ros-bridge anyway, you might want to try carla_ros_scenario_runner which is a wrapper around scenario runner.

Give it a try by following the launch instructions of carla_ad_demo and click on the "Execute Scenario" Button in rviz.

Bonifatius94 commented 2 years ago

@fpasch thanks for replying. I would never have expected an example to be located at carla_ad_demo. Anyway, it looks promising. I'm already using the ROS bridge scenario runner, but it didn't work so far. I'll let you know whether I get it to work. If so, I'll provide my instructions to improve the docs πŸ˜‰ Best wishes, Marco

glopezdiest commented 2 years ago

The way I see it, Scenario Runner handles everything in the simulation except for the ego, and that's why most of our examples don't use sensors: the scenario vehicles don't need them. That being said, you can make SR control the ego vehicle, but that needs a tiny bit of tuning.

That's why you have to run two commands for each and every scenario. The first command runs scenario_runner.py, which sets up the whole simulation and then waits for the ego vehicle to move. With that ready, you can connect your AV stack to the ego vehicle and control it however you want. In all of our examples, that AV stack is the manual control, which is why the examples always tell you to launch the scenario first and then launch the manual control.

The manual control connects to the ego vehicle and, some lines below, sets up all the sensors needed so that the user, who takes the place of the AV stack, can drive it effectively.

For your use case, you just have to replace that manual control behavior with any code you want.
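For illustration, a minimal sketch of such a replacement (assuming the scenario spawned the ego with role_name "hero", which is what the SR examples typically use; the constant control is just a placeholder for a real AV stack):

```python
# Sketch: take the place of manual_control by grabbing the scenario's ego
# vehicle and driving it programmatically.
import carla

client = carla.Client("localhost", 2000)
world = client.get_world()

# The scenario has already spawned the ego; look it up by role_name.
ego = next(
    actor for actor in world.get_actors().filter("vehicle.*")
    if actor.attributes.get("role_name") == "hero")

# Placeholder control loop; a real AV stack would compute controls
# from sensor data instead of driving straight ahead forever.
while True:
    world.wait_for_tick()
    ego.apply_control(carla.VehicleControl(throttle=0.4, steer=0.0))
```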

Bonifatius94 commented 2 years ago

Thanks for the little hint about carla_ad_demo, @fpasch. This actually worked for me once I figured out how to build the ROS bridge and the scenario runner from scratch inside my Dockerfile.

As already said, I'll publish my approach in a few months once our university challenge is over.

Thanks to all replying to this issue πŸ˜‰

Bonifatius94 commented 2 years ago

For everyone who wants to know how I did it:

1) Include the pre-built CARLA PythonAPI and register it on the PYTHONPATH
2) Compile scenario_runner from source
3) Compile the CARLA ROS bridge from source
4) Launch rviz with the pre-configured "AD Demo" UI for running scenarios (I don't know if it's still the same for versions > 0.9.10.1)

See scripted commands: https://github.com/ll7/paf21-1/blob/final-version-live-demo/components/carla_ros_bridge/Dockerfile
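For step 1, the usual way to register the pre-built PythonAPI is the egg-path snippet from CARLA's own examples; the install path below is illustrative for 0.9.10.1:

```python
# Register the pre-built CARLA egg on the Python path before importing carla.
import glob
import os
import sys

try:
    sys.path.append(glob.glob('/opt/carla/PythonAPI/carla/dist/carla-*%d.%d-%s.egg' % (
        sys.version_info.major,
        sys.version_info.minor,
        'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
except IndexError:
    pass  # fall back to a carla package already on the path

import carla
```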

PS: There's no (easy) way to run unattended performance tests effectively. Have fun with the CLI (it cost me half a week).

ronyshaji commented 1 year ago

@Bonifatius94 Did the scenarios work for you in ad_demo?

Bonifatius94 commented 1 year ago

Did the scenarios work for you in ad_demo?

My team ended up not using the scenario runner because we didn't want to invest our time into learning how to create scenarios with a tool that works flakily at best, cannot load custom vehicles (without major work), and doesn't fit our ROS-based approach properly.

But there may have been changes since May 2022, so give it a try anyway if you're curious to find out. There aren't many alternatives among high-quality, visually rendered simulators after all, so you'll probably need to stick with CARLA.

ronyshaji commented 1 year ago

@Bonifatius94 I just checked your repo for the project and found that the vehicle is controlled via Ackermann. How do you define the path for the vehicle? Is it based on waypoints, or does it somehow use perception to control the vehicle? And thanks for replying to the old post.

Bonifatius94 commented 1 year ago

I just checked your repo for the project and found that the vehicle is controlled via Ackermann. How do you define the path for the vehicle? Is it based on waypoints, or does it somehow use perception to control the vehicle? And thanks for replying to the old post.

Sorry to others reading this, as it's a bit off-topic here. But yeah, my simulator supports two kinematics models: differential drive and the bicycle model. The routes are defined in a config file as a list of waypoints that have to be reached one after the other. But the model's interface is designed so that it only knows the relative direction and distance to the next two waypoints, so you can replace the fixed routes with a proper global planner later on, once the model is trained. As for sensing, it uses radial LiDAR rays. The simulation is just 2D, so it's no real comparison to CARLA, but it fits the requirements of my master's thesis, which I'm trying to get out the door by Monday evening :joy:
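To illustrate that observation interface, a small sketch (all names are made up for illustration):

```python
# Sketch of the described observation: the model only sees distance and
# relative direction to the next two waypoints, never absolute routes.
import math


def waypoint_observation(pose, waypoints):
    """pose = (x, y, heading in rad); waypoints = upcoming (x, y) route points."""
    x, y, heading = pose
    obs = []
    for wx, wy in waypoints[:2]:
        dist = math.hypot(wx - x, wy - y)
        rel_dir = math.atan2(wy - y, wx - x) - heading
        # Normalize the relative direction to [-pi, pi].
        rel_dir = (rel_dir + math.pi) % (2 * math.pi) - math.pi
        obs.append((dist, rel_dir))
    return obs
```

Because the observation is purely relative, the fixed routes used during training can be swapped for a global planner at deployment time.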

ronyshaji commented 1 year ago

@Bonifatius94 It's a pity that there's no private-message facility available here. My plan is to create an OpenSCENARIO file that describes the forward movement of the vehicle (manually or using the AD demo), record the odometry details of the vehicle, and then drive in reverse using the recorded points, controlled via Ackermann. I guess it's not that similar to what you did, but thanks for your time helping me out. Cheers from Stuttgart

Bonifatius94 commented 1 year ago

My plan is to create an OpenSCENARIO file that describes the forward movement of the vehicle (manually or using the AD demo), record the odometry details of the vehicle, and then drive in reverse using the recorded points, controlled via Ackermann.

I think my simulator could be exactly what you want, in case the sensor information suffices for your purposes. In fact, it was explicitly designed to facilitate scenario-based evaluation: it models pedestrian dynamics with the Social Force model, defines pedestrian routes as waypoints, and represents the vehicle's behavior as a model trained via reinforcement learning.

You probably don't want to record the waypoints, as that behavior is too static. Rather, train a model and let it pick appropriate actions for both vehicle and pedestrians. That way, you'll get more realistic scenarios that adapt to individual variations in the selected actions.

ronyshaji commented 1 year ago

@Bonifatius94 Thanks for the answer. I will check the Docker setup and come back if I have any comments. Also, I'm a beginner and all of this is a little bit too much, so I guess I'll study everything based on your feedback.