openai / universe-starter-agent

A starter agent that can solve a number of universe environments.
MIT License

How to run NeonRace locally instead of VNC? #134

Closed GokulNC closed 6 years ago

GokulNC commented 6 years ago

Referencing this issue: No tangible results with Pong or NeonRace within the VNC environment. My agent doesn't seem to have learned anything at all after a day of training with 4 workers.

Note: In my tmux console, I see the `reaction_time` as `None`. A sample:

```
universe-yLSFIj-0 | [2018-02-07 05:57:03,947] [INFO:universe.wrappers.logger] Stats for the past 5.00s: vnc_updates_ps=32.6 n=1 reaction_time=None observation_lag=None action_lag=None reward_ps=0.0 reward_total=0.0 vnc_bytes_ps[total]=217676.1 vnc_pixels_ps[total]=114560.4 reward_lag=None rewarder_message_lag=None fps=58.58
universe-yLSFIj-0 | [2018-02-07 05:57:03,989] [INFO:universe.rewarder.remote] [Rewarder] Over past 1.05s, sent 31 reward messages to agent: reward=651.0 reward_min=21.0 reward_max=21.0 done=False info={'rewarder.vnc.updates.bytes': 7202, 'rewarder.vnc.updates.n': 1, 'rewarder.vnc.updates.pixels': 2394}
```

I'm using TensorFlow 1.5.0; is that compatible with the current repo?

Also, I'm facing this issue: NeonRace indefinitely presses only ArrowUp (even though I use the latest Gym & Universe).

And how do I run the game locally instead of over VNC, the way PongDeterministic is run locally (the non-VNC version) as mentioned in the repo README?
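For reference, this is roughly how the README distinguishes the two invocation styles (a sketch based on the universe-starter-agent README; the `--log-dir` paths are placeholders, and I'm not sure flash games like NeonRace have any non-VNC mode at all, since they only ship as VNC environments):

```shell
# Local (non-VNC) Atari environment, as shown in the README:
python train.py --num-workers 4 --env-id PongDeterministic-v3 --log-dir /tmp/pong

# VNC-based flash game (NeonRace), which runs inside a Dockerized VNC remote:
python train.py --num-workers 4 --env-id flashgames.NeonRace-v0 --log-dir /tmp/neonrace
```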