-
OpenAI Gym version 0.8.1 running on Ubuntu Linux with TensorFlow 1.0.1
I have observed a strange behaviour when playing Breakout-v0. I already know that the action is repeated randomly by Gym betwe…
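The random action repeat being described can be illustrated with a minimal, self-contained sketch. This is not Gym's actual implementation; it only mimics the stochastic frameskip behaviour of `*-v0` Atari environments, where (as commonly reported) each chosen action is repeated for a randomly sampled 2–4 frames:

```python
import random

class RandomFrameskipSketch:
    """Toy mimic of the stochastic frameskip in Gym's *-v0 Atari envs:
    the agent's chosen action is applied for a random number of
    consecutive frames (assumed here to be 2-4) before the agent
    gets to act again."""

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.frame = 0

    def step(self, action):
        repeats = self.rng.randint(2, 4)  # inclusive bounds: 2, 3, or 4
        self.frame += repeats             # the action is held for all of them
        return repeats

env = RandomFrameskipSketch()
counts = [env.step(1) for _ in range(5)]
# every entry is 2, 3, or 4 -- the agent never controls the exact repeat count
```

This is why two runs of Breakout-v0 with identical action sequences can diverge: the repeat count is resampled on every step.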
-
Hi there!
I am working on a minimap that exploits the sentryDefense function, and I am having problems with the trigger init configuration.
Are you an experienced game map creator who could help me w…
-
Hello! Can you please clarify on what you meant in the README by "DQN+CTS after 80M agent steps using 16 actor-learner threads"? DQN isn't a distributed algorithm, it uses a single thread. Did you mea…
-
(First, please check https://github.com/openai/universe/wiki/Solutions-to-common-problems for solutions to many common problems)
### Expected behavior
I'm expecting the VNC Server to come up and o…
-
(First, please check https://github.com/openai/universe/wiki/Solutions-to-common-problems for solutions to many common problems)
### Expected behavior
I'm running the "Run your first agent" exampl…
-
### DISCLAIMER
I'm new to openai and I'm currently just trying to get example projects to run, but so far it's not working.
### Expected behavior
I want the game to render so I can see what's happeni…
-
Hi,
This is fantastic--thanks so much for putting this together and out into the community!
I'm playing around with creating a custom Environment--and I'm trying to use Keras' Functional API that pe…
-
### Actual behavior
Starter universe-starter-agent with:
```
$ python train.py --num-workers 8 --env-id flashgames.NeonRace-v0 --log-dir /mnt/kube-efs/universe-perfmon/usa-flashgames.NeonRace-v0-2…
-
Hi, I noticed that when you calculate the reward you don't update last_state to current_state. The only place where last_state is updated is the reset method.
This means that you don't return one-step …
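A toy sketch of the fix being suggested (the class and names here are hypothetical, not the repository's actual code): without advancing `last_state` inside `step`, every reward is computed against the state from `reset`, so rewards grow cumulatively instead of being one-step differences.

```python
class ToyEnv:
    """Toy 1-D environment. The reward should be the one-step change
    in position; that only holds if last_state is advanced in step()."""

    def reset(self):
        self.position = 0
        self.last_state = 0
        return self.position

    def step(self, action):
        self.position += action
        current_state = self.position
        reward = current_state - self.last_state
        self.last_state = current_state  # the missing update being reported
        return current_state, reward

env = ToyEnv()
env.reset()
rewards = [env.step(1)[1] for _ in range(3)]
print(rewards)  # [1, 1, 1] with the update; [1, 2, 3] without it
```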
-
When deciding on what transaction fee to charge, a connector will take into consideration how much money it wants to earn from each transaction (an argument for charging higher fees), as well as how m…
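The trade-off above can be sketched with a toy revenue model (all numbers and the demand curve are hypothetical, purely for illustration): raising the fee earns more per transaction but drives volume away, so expected revenue has an interior maximum.

```python
import math

def expected_revenue(fee, base_volume=1000.0, sensitivity=2.0):
    """Toy model: transaction volume decays exponentially as the fee
    rises, so revenue = fee * volume peaks at fee = 1 / sensitivity."""
    volume = base_volume * math.exp(-sensitivity * fee)
    return fee * volume

# Scan candidate fees and pick the revenue-maximizing one.
fees = [f / 1000 for f in range(1, 2001)]
best_fee = max(fees, key=expected_revenue)
print(round(best_fee, 3))  # 0.5, i.e. 1 / sensitivity for this toy curve
```

The point of the sketch is only the shape of the trade-off: a connector charging too little leaves money on the table, while one charging too much loses the traffic it would have earned fees on.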