Closed nsubiron closed 6 years ago
How would an example interaction from the client side look with the new architecture? This would help us with the definition of #428, client-side agents. For instance, right now we have this interaction:
```python
with make_carla_client(args.host, args.port) as client:
    settings = CarlaSettings()
    scene = client.load_settings(settings)
    client.start_episode(player_start)
    for i in range(number_of_iteration):
        measurements, sensor_data = client.read_data()
        # Do something with measurements
        client.send_control()
```
How is this going to change?
@felipecode It is still too early to say anything definitive, but what I have in mind is something like this:
```python
client = CarlaClient("localhost", 2000)
client.start_episode(CarlaSettings())
vehicle = client.spawn_vehicle("mustang", Transform(x=100.0, y=100.0))
camera = client.spawn_camera(Transform(z=50), attach_to=vehicle.id)
for i in range(number_of_iteration):
    vehicle.apply_control(throttle=0.5, steer=0.0)
    image = camera.read_data()
```
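The actor-centric pattern proposed above can be sketched with stand-in classes. Note that every name here (`CarlaClient`, `spawn_vehicle`, `spawn_camera`, `apply_control`, and so on) follows the proposal in this thread and is hypothetical, not a released API:

```python
# Minimal stand-in sketch of the proposed actor-centric API.
# All classes and method names are hypothetical placeholders
# mirroring the snippet proposed in this thread.

class Transform:
    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x, self.y, self.z = x, y, z

class Vehicle:
    def __init__(self, actor_id):
        self.id = actor_id
        self.last_control = None

    def apply_control(self, throttle=0.0, steer=0.0):
        # In the real simulator this would send a command over the wire.
        self.last_control = {"throttle": throttle, "steer": steer}

class Camera:
    def __init__(self, attach_to):
        self.attach_to = attach_to

    def read_data(self):
        # Placeholder for an image received over the streaming channel.
        return b"<image bytes>"

class CarlaClient:
    def __init__(self, host, port):
        self.host, self.port = host, port
        self._next_id = 0

    def start_episode(self, settings):
        pass  # would reset the simulation with the given settings

    def spawn_vehicle(self, model, transform):
        self._next_id += 1
        return Vehicle(self._next_id)

    def spawn_camera(self, transform, attach_to):
        return Camera(attach_to)

client = CarlaClient("localhost", 2000)
client.start_episode(None)
vehicle = client.spawn_vehicle("mustang", Transform(x=100.0, y=100.0))
camera = client.spawn_camera(Transform(z=50), attach_to=vehicle.id)
vehicle.apply_control(throttle=0.5, steer=0.0)
image = camera.read_data()
```

The design point is that control and sensor data now belong to the individual actors, not to a single monolithic client object.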
Amazing @nsubiron. I like how we are going to remove the concept of the client receiving and sending control commands; now this is a task for vehicles and sensors. Take into account that many of these objects can exist in the world. I will consider this in #428.
> 1) Request-response model for controlling the simulation ...
> - Everything should be recorded so we can do a replay
I am interested in being able to replay a session with the exact same actor positions but with different lighting, weather, or sensor configuration. Is that the kind of activity you intend to support? Will this work include an API for actually running a session from recorded actor poses, or is that for future work?
Yes, that's the kind of thing we want to support in the future. For the moment (this issue), I'm just implementing the infrastructure to allow finer control over every actor in the scene.
For the record, you can already replay an episode with the same positions of other vehicles and pedestrians but different weather or sensors; it's just that the behaviour of NPCs is controlled by a random seed.
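The random-seed point can be illustrated in isolation: re-running NPC logic from the same seed reproduces the same decisions. This is a generic Python sketch, not CARLA code, and `npc_decisions` is an invented helper:

```python
import random

def npc_decisions(seed, n=5):
    """Simulate n pseudo-random NPC choices driven by a fixed seed."""
    rng = random.Random(seed)  # local generator, isolated from global state
    return [rng.choice(["left", "right", "straight"]) for _ in range(n)]

# Same seed -> identical sequence, i.e. a deterministic replay of NPC behaviour.
replay_a = npc_decisions(42)
replay_b = npc_decisions(42)
```

This is why storing the seed alongside the recorded episode is enough to reproduce NPC behaviour without logging every individual decision.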
Introduction
There are a few key features that we want to add to CARLA:
Here, by agents we refer to all the vehicles, pedestrians, traffic lights/signs, and other things that could be added in the future that are present in the scene; basically, anything that can be controlled and affects the driving to some extent.
Implementing these features with the current design of the networking communication between the client and the simulator would be nearly impossible. As it is right now, the network protocol is very strict and not at all fail-proof: it is assumed everywhere that there is a single client making requests in a fixed order. Given this, it seems like the perfect time to design a scalable and fail-proof architecture that can serve us in the long run.
So I'm proposing the following requirements that the new architecture should comply with.
Main design requirements
First of all, there will be two different ways a client can communicate with the server: 1) general requests, and 2) data streaming (optimized for speed).
1) Request-response model for controlling the simulation
2) Data streaming for sending the sensor data
where levels 1, 2, and 3 refer to the different definitions of a scenario.
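The two communication channels described above can be sketched in-process: a blocking request-response exchange for control, and a producer thread pushing sensor frames onto a queue for streaming. This is a toy illustration with invented message formats, not the actual CARLA protocol:

```python
import queue
import threading

def handle_request(request):
    """Channel 1: synchronous request-response. The client blocks on a reply."""
    if request["type"] == "spawn_vehicle":
        return {"status": "ok", "actor_id": 1}
    return {"status": "error"}

def sensor_stream(out_queue, n_frames):
    """Channel 2: streaming. The server pushes frames without being asked."""
    for frame in range(n_frames):
        out_queue.put({"frame": frame, "data": b"<image>"})

# Request-response for a control command.
reply = handle_request({"type": "spawn_vehicle", "model": "mustang"})

# Data streaming for sensor data, running on its own thread.
frames = queue.Queue()
producer = threading.Thread(target=sensor_stream, args=(frames, 3))
producer.start()
producer.join()
received = [frames.get() for _ in range(3)]
```

Separating the two lets the streaming path be optimized for throughput (e.g. dedicated sockets per sensor) without complicating the control protocol.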
Use example
Open questions
Changes required in Unreal-side code