rstrivedi / Melting-Pot-Contest-2023


Is there a faster way to see scenario evaluation than generating video? #11

Open kuddai opened 1 year ago

kuddai commented 1 year ago

Hello! In my case, generating a video with

python baselines/evaluation/evaluate.py --num_episodes 1 --eval_on_scenario 1 --scenario allelopathic_harvest__open_0 REST_OF_ARGUMENTS

takes ~10 minutes.

Previously the VP90 codec was used, which compressed really well but took 3+ seconds to encode each frame; one episode took ~40 minutes to generate. I have swapped VP90 for the mp4v codec, and now encoding takes only 0.3 seconds per frame, but the environment takes ~1 second per step, so it still takes ~20 minutes to finish the game and generate the final video.
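For reference, a minimal sketch of the codec swap, assuming OpenCV's VideoWriter is what encodes the frames (the filename, fps, frame size, and episode loop here are hypothetical stand-ins; the repo's evaluation script may wire this up differently):

```python
import cv2
import numpy as np

# Hypothetical frame size and rate; adjust to the actual WORLD.RGB shape.
height, width, fps = 600, 800, 30

# 'mp4v' encodes much faster than 'VP90', at the cost of a larger file.
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("episode.mp4", fourcc, fps, (width, height))

for _ in range(100):  # stand-in for the episode loop
    frame = np.zeros((height, width, 3), dtype=np.uint8)  # stand-in for WORLD.RGB
    # OpenCV expects BGR channel order, while the frames are RGB.
    writer.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))

writer.release()
```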

Is there any already-available way to record the environment faster? Maybe a better codec, or a way to make stepping faster? I see that the GPU is barely utilized. Another idea: record each step/game state, save it to a log, and then play it back in a game/environment runner (like replays in video games such as the Lux AI competition on Kaggle, Quake, Dota 2, Counter-Strike, etc.); a sketch of this replay idea follows below.
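A minimal sketch of the replay idea, assuming a dm_env-style environment whose episodes can be reproduced by replaying recorded joint actions (the names record_episode/replay_episode and the policies interface are hypothetical):

```python
import json

def record_episode(env, policies, log_path="episode_actions.jsonl"):
    """Run an episode once, logging the joint action at every step."""
    timestep = env.reset()
    with open(log_path, "w") as log:
        while not timestep.last():
            actions = [p.step(obs) for p, obs in zip(policies, timestep.observation)]
            log.write(json.dumps(actions) + "\n")
            timestep = env.step(actions)

def replay_episode(env, log_path="episode_actions.jsonl"):
    """Re-run the episode from the action log, yielding frames on demand."""
    env.reset()
    with open(log_path) as log:
        for line in log:
            timestep = env.step(json.loads(line))
            yield timestep.observation[0]["WORLD.RGB"]  # frame to display/encode
```

Note this only works if the environment is deterministic given the same seed; otherwise the replayed trajectory diverges from the recorded one.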

duenez commented 11 months ago

You can always put an observer on the scenario observables, substrate.timestep.observation[0]['WORLD.RGB'], and extract those images into a pygame window so you can see the episode as it is being generated. That way you don't have to wait for the episode to end to observe the behaviour. Of course, this has to be run interactively / with access to a graphical system, so it's not suitable for parallelisation.
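A minimal sketch of that approach, assuming a dm_env-style timestep whose first observation carries a 'WORLD.RGB' array (environment construction and the policies interface are elided/hypothetical):

```python
import pygame

def watch_episode(env, policies):
    """Display WORLD.RGB in a pygame window while the episode runs."""
    timestep = env.reset()
    frame = timestep.observation[0]["WORLD.RGB"]  # (height, width, 3) uint8
    pygame.init()
    screen = pygame.display.set_mode((frame.shape[1], frame.shape[0]))
    while not timestep.last():
        # pygame surfaces are (width, height), so transpose the array axes.
        surface = pygame.surfarray.make_surface(frame.transpose(1, 0, 2))
        screen.blit(surface, (0, 0))
        pygame.display.flip()
        pygame.event.pump()  # keep the window responsive
        actions = [p.step(obs) for p, obs in zip(policies, timestep.observation)]
        timestep = env.step(actions)
        frame = timestep.observation[0]["WORLD.RGB"]
    pygame.quit()
```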

rstrivedi commented 11 months ago

By the way, the approach of using pygame is available in train/render_models.py. You can port the relevant part from there.