kuddai opened 1 year ago
You can always put an observer on the scenario observable substrate.timestep.observation[0]['WORLD.RGB']
and extract those images into a pygame window so you can watch the episode as it is being generated. That way you don't have to wait for the episode to end to observe the behaviour. Of course this has to be run interactively / with access to a graphical system, so it's not suitable for parallelisation.
By the way, the pygame approach is already available in train/render_models.py; you can port the relevant part from there.
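A minimal sketch of that idea, independent of render_models.py: pull the `WORLD.RGB` array out of each timestep and blit it into a pygame window. The helper names (`upscale`, `show_episode`) and the scale/fps defaults are illustrative assumptions, not Melting Pot APIs:

```python
import numpy as np


def upscale(frame: np.ndarray, k: int = 8) -> np.ndarray:
    """Nearest-neighbour upscale of an HxWx3 uint8 frame by factor k."""
    return np.repeat(np.repeat(frame, k, axis=0), k, axis=1)


def show_episode(frames, scale: int = 8, fps: int = 15) -> None:
    """Display frames in a pygame window as they arrive.

    `frames` is any iterable of HxWx3 uint8 arrays, e.g. yielding
    timestep.observation[0]['WORLD.RGB'] from inside your stepping loop.
    """
    import pygame  # imported lazily so upscale() stays usable headless

    pygame.init()
    clock = pygame.time.Clock()
    screen = None
    for frame in frames:
        big = upscale(frame, scale)
        if screen is None:
            # pygame surfaces are (width, height); numpy arrays are (height, width)
            screen = pygame.display.set_mode((big.shape[1], big.shape[0]))
        surface = pygame.surfarray.make_surface(big.swapaxes(0, 1))
        screen.blit(surface, (0, 0))
        pygame.display.flip()
        clock.tick(fps)
    pygame.quit()
```

Because the viewer consumes an iterable, you can drop it straight into an existing stepping loop with a generator that yields each frame right after `env.step()`.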
Hello! In my case generating the video takes ~10 minutes.
Before, it was the VP9 codec, which compressed really well but took 3+ seconds to encode each image, so one episode took ~40 minutes to generate. I have swapped VP9 for the mp4v codec and now encoding takes only 0.3 seconds per frame, but the env takes ~1 second per step, so it still takes ~20 minutes to finish the game and generate the final video.
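Since stepping (~1 s) and encoding (~0.3 s) currently run back to back, one option is to overlap them by pushing frames to a background thread. A stdlib-only sketch; `encode` is a stand-in for whatever your real encoder exposes (e.g. a `cv2.VideoWriter.write` bound method), and `AsyncRecorder` is a hypothetical helper, not part of any library here:

```python
import queue
import threading


class AsyncRecorder:
    """Encode frames on a background thread so env.step() and encoding overlap."""

    def __init__(self, encode, maxsize: int = 64):
        # `encode` is any callable taking one frame.
        self._encode = encode
        self._q = queue.Queue(maxsize=maxsize)
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def _drain(self):
        while True:
            frame = self._q.get()
            if frame is None:  # sentinel: stop draining
                break
            self._encode(frame)

    def submit(self, frame):
        # Blocks only if the encoder falls more than `maxsize` frames behind.
        self._q.put(frame)

    def close(self):
        self._q.put(None)
        self._worker.join()
```

With a 0.3 s encode hidden behind a 1 s step, the episode cost drops to roughly the stepping time alone. If encoding were the slower side, a process pool or chunked ffmpeg pass after the episode would be the next thing to try.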
Is there an already available way to record the environment faster? Maybe a better codec, or a way to make stepping faster? I see that the GPU is barely utilised. Another option: record each agent's action / game state at every step, save it into a log, and then play it back in a game/environment runner (like replays in video games such as the Lux AI competition on Kaggle, Quake, Dota 2, Counter-Strike, etc.).
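The replay idea can be sketched in a few lines, assuming the environment is deterministic given a seed (which is what makes game replays work). `ToyEnv`, `record_episode`, and `replay_episode` are illustrative names standing in for the real substrate, not Melting Pot APIs:

```python
import json
import random


class ToyEnv:
    """Stand-in for the real substrate: fully deterministic given a seed."""

    def __init__(self, seed: int):
        self._rng = random.Random(seed)
        self.state = 0

    def step(self, action: int) -> int:
        self.state += action + self._rng.randrange(3)
        return self.state


def record_episode(seed: int, policy, num_steps: int) -> str:
    """Run an episode, logging only the seed and actions (tiny vs. video)."""
    env = ToyEnv(seed)
    actions = []
    state = env.state
    for _ in range(num_steps):
        action = policy(state)
        actions.append(action)
        state = env.step(action)
    return json.dumps({"seed": seed, "actions": actions})


def replay_episode(log: str):
    """Re-run the logged actions; rendering/encoding happens here, offline."""
    data = json.loads(log)
    env = ToyEnv(data["seed"])
    return [env.step(a) for a in data["actions"]]
```

The payoff is that the expensive parts (rendering frames, encoding video) move out of the training loop entirely: the log is a few kilobytes of JSON, and you can replay it as many times as you like, at any resolution or codec, on a machine with nothing else to do.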