carla-simulator / carla

Open-source simulator for autonomous driving research.
http://carla.org
MIT License

Can't get all generated frames #6240

Open · pgfrsght opened this issue 1 year ago

pgfrsght commented 1 year ago

Hi All,

Unfortunately I can't get all frames from CARLA; only if I set a lower resolution (320x240) can I get all of them. This happens when running tutorial.py from the examples, or any other flow that uses image.save_to_disk. My frames list looks like: 001177.png 001178.png 001179.png 001180.png 001181.png 001182.png 001185.png 001186.png 001187.png 001188.png 001193.png 001194.png 001209.png 001210.png 001228.png 001236.png 001237.png 001260.png 001269.png 001289.png 001290.png 001291.png 001306.png 001314.png 001323.png 001324.png 001340.png 001341.png 001354.png 001369.png ...

Setup: CARLA version 0.9.13/0.9.12; Platform/OS: Ubuntu 20.04; RAM: 64 GB; NVIDIA driver 525.60.13 (NVIDIA-SMI 525.60.13); CUDA 12.0.
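
Roughly, the capture boils down to an asynchronous listen callback like the sketch below (a simplified sketch, not my exact script; the blueprint choice, transform, and output path are placeholders):

import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Spawn an RGB camera at an arbitrary location (placeholder transform).
blueprint_library = world.get_blueprint_library()
camera_bp = blueprint_library.find('sensor.camera.rgb')
camera = world.spawn_actor(camera_bp, carla.Transform(carla.Location(z=2.0)))

# Asynchronous capture: the server keeps simulating while the callback
# encodes and writes each PNG, so slow saves can miss frames.
camera.listen(lambda image: image.save_to_disk('_out/%06d.png' % image.frame))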

Any idea? Thanks

p-testolina commented 1 year ago

Hi, you need to run the simulation in synchronous mode to make sure all the data is generated and retrieved between simulation steps. This is explained in the documentation (https://carla.readthedocs.io/en/0.9.14/adv_synchrony_timestep/), and a complete example is given in sensor_synchronization.py. Here is the relevant part of the documentation:

Using synchronous mode

Synchronous mode becomes especially relevant with slow client applications, and when synchrony between different elements, such as sensors, is needed. If the client is too slow and the server does not wait, there will be an overflow of information: the client will not be able to manage everything, and data will be lost or mixed up. Along the same lines, with many sensors and asynchrony, it would be impossible to know whether all the sensors are using data from the same moment in the simulation.

The following fragment of code extends the previous one. The client creates a camera sensor, stores the image data of the current step in a queue, and ticks the server after retrieving it from the queue. A more complex example involving several sensors can be found in sensor_synchronization.py.

import queue

# Enable synchronous mode so the server waits for a client tick
# before advancing the simulation.
settings = world.get_settings()
settings.synchronous_mode = True
world.apply_settings(settings)

# Spawn the camera and push every received image into a queue.
camera = world.spawn_actor(blueprint, transform)
image_queue = queue.Queue()
camera.listen(image_queue.put)

while True:
    world.tick()                   # advance the simulation by one step
    image = image_queue.get()      # block until that step's image arrives

"

pgfrsght commented 1 year ago

Great! Thanks @p-testolina

stale[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.