-
(First, please check https://github.com/openai/universe/wiki/Solutions-to-common-problems for solutions to many common problems)
Running the agent example in the README, from what I can tell, some…
-
Dear Denny
Thank you for the great work! I have two questions:
**1- Is it possible to change the “CliffWalk Actor Critic Solution.ipynb” code to implement Actor-Critic for Gym Atari games?**
I b…
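For anyone attempting that port: the Atari version would replace the tabular critic and actor with convolutional networks, but the one-step actor-critic update itself is unchanged. A minimal sketch of that update on a hypothetical toy chain MDP (the environment and all names below are illustrative, not from the notebook):

```python
import math, random

N_STATES, N_ACTIONS = 5, 2  # toy chain; reward 1.0 on reaching the rightmost state

def step(s, a):
    # action 1 moves right, action 0 moves left; rightmost state is terminal
    ns = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    done = ns == N_STATES - 1
    return ns, (1.0 if done else 0.0), done

def softmax_sample(prefs, rng):
    # sample an action from a softmax over action preferences
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    probs = [e / z for e in exps]
    r, acc = rng.random(), 0.0
    for a, p in enumerate(probs):
        acc += p
        if r < acc:
            return a, probs
    return len(probs) - 1, probs

def actor_critic(episodes=500, alpha_v=0.1, alpha_pi=0.1, gamma=0.95, seed=0):
    rng = random.Random(seed)
    V = [0.0] * N_STATES                                   # critic: state values
    theta = [[0.0] * N_ACTIONS for _ in range(N_STATES)]   # actor: action preferences
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            a, probs = softmax_sample(theta[s], rng)
            ns, r, done = step(s, a)
            td = r + (0.0 if done else gamma * V[ns]) - V[s]  # TD error
            V[s] += alpha_v * td                              # critic update
            for b in range(N_ACTIONS):
                # actor update: grad of log softmax is 1{b==a} - pi(b|s)
                grad = (1.0 if b == a else 0.0) - probs[b]
                theta[s][b] += alpha_pi * td * grad
            s = ns
    return theta, V
```

Swapping `V` and `theta` for networks (and the TD error for an advantage estimate) gives the deep variant; the per-step update logic stays the same.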
-
### Actual behavior
Start universe-starter-agent with:
```
$ python train.py --num-workers 4 --env-id flashgames.DuskDrive-v0 --log-dir /mnt/kube-efs/universe-perfmon/usa-flashgames.DuskDrive-v0-2…
-
### Expected behavior
Universe-starter-agent learns NeonRace using:
```
python train.py --num-workers 4 --env-id flashgames.NeonRace-v0 --log-dir /tmp/neonrace -m child
```
### Actual behavio…
-
Add the following three columns (horizontally stacked) to the opening banner:
# BIAS_lab_
## **B**ayesian **I**ntelligent **A**utonomous **S**ystems
### About
We are an academic research team in the…
-
Hi Denny
Again, I do appreciate your work!
I was thinking of implementing DQN with the **Dyna-Q** algorithm, where **Q(s,a)** is updated not only by **real** experience but also by **simulated** ex…
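To frame the question, here is a tabular Dyna-Q sketch: each real one-step Q-learning update is followed by several planning updates sampled from a learned model. The toy corridor environment is hypothetical, purely to show the update loop (the DQN version would replace the table with a network and the model with a replay of simulated transitions):

```python
import random

# Hypothetical toy MDP: a corridor of N states; action 0 = left, 1 = right.
# Reward 1.0 on reaching the rightmost state, which is terminal.
N_STATES, N_ACTIONS = 6, 2

def step(s, a):
    ns = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    done = ns == N_STATES - 1
    return ns, (1.0 if done else 0.0), done

def dyna_q(episodes=200, planning_steps=20, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    model = {}  # (s, a) -> (next_s, reward): deterministic learned model
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
            ns, r, done = step(s, a)
            # (a) direct RL update from real experience
            Q[s][a] += alpha * (r + gamma * max(Q[ns]) - Q[s][a])
            # (b) learn the model, then plan with simulated experience
            model[(s, a)] = (ns, r)
            for _ in range(planning_steps):
                (ps, pa), (pns, pr) = rng.choice(list(model.items()))
                Q[ps][pa] += alpha * (pr + gamma * max(Q[pns]) - Q[ps][pa])
            s = ns
    return Q
```

The planning loop is what distinguishes Dyna-Q from plain Q-learning: the same update rule is replayed on transitions drawn from the model rather than the environment.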
-
### Expected behavior
I followed the [official tutorial](https://github.com/openai/universe#run-your-first-agent) to run my first agent. It failed when I came to `env.configure(remotes=1)`.
###…
-
I'm starting to work on a framework for reinforcement learning in https://github.com/tbreloff/Reinforce.jl, and I was thinking it would be nice to include the core abstractions in LearnBase so that I …
-
### Expected behavior
Expected demo app to open and run the game
…
-
### Expected behavior
I am running an Ubuntu 16.04 machine on AWS…