-
Describe the bug
----------------------
During the learning procedure, the log file appears to have been closed, so the subsequent I/O operation raises an error.
Code examples
---------------------
…
-
Hello, thank you very much for this nice library!
I am currently trying to train on multiple environments, so I often need to call `env.get_task(....)`. The problem is that every time I do, a new "Dummy…
-
```shell
$ git clone https://github.com/stepjam/RLBench.git
$ cd
$ wget http://coppeliarobotics.com/files/CoppeliaSim_Player_V4_0_0_Ubuntu18_04.tar.xz
$ tar -xf CoppeliaSim_Player_V4_0_0_Ubuntu1…
-
Hi, I am trying to train a Hierarchical Reinforcement Learning (HRL) agent to solve the tasks (1) open-box, (2) block-pyramid and (3) place_shape_in_shape_sorter using just the wrist-rgb camera view.
…
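Restricting observations to the wrist RGB camera is typically done through `ObservationConfig`; a configuration sketch, assuming the current RLBench option names:

```python
from rlbench.observation_config import ObservationConfig

obs_config = ObservationConfig()
obs_config.set_all(False)            # turn every observation channel off
obs_config.wrist_camera.rgb = True   # re-enable only the wrist RGB image
obs_config.joint_positions = True    # keep proprioception if the agent needs it

# The config is then passed to the environment constructor, e.g.:
# env = Environment(action_mode, obs_config=obs_config, headless=True)
```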
-
Hi,
I was trying to visualize the observation at running time using matplotlib. However, matplotlib seems to be incompatible with the Qt library used in RLBench. So I'm wondering, have you ever tried …
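One workaround for this kind of Qt clash is to avoid the Qt backend entirely: select a file-based matplotlib backend before `pyplot` is imported, and save frames to disk instead of opening a window. A small sketch (the random array is a stand-in for an RLBench observation image such as `obs.wrist_rgb`):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend; no Qt event loop involved
import matplotlib.pyplot as plt
import numpy as np

frame = np.random.rand(128, 128, 3)  # stand-in for an RGB observation
plt.imshow(frame)
plt.axis('off')
plt.savefig('frame.png', bbox_inches='tight')
plt.close()
```

The key detail is calling `matplotlib.use('Agg')` before the first `pyplot` import, so matplotlib never tries to initialize a Qt window.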
-
Hello!
I want to introduce a new RLBench task (or override an existing one). What is the proper way to do this? The only way I can think of now is to rewrite parts of the code in the RLBench package, whic…
Atlis, updated 3 years ago
-
First, I'd like to thank the contributors for maintaining this awesome and handy API. It really helps accelerate my research on robot learning.
I am working on a customized RL environment that gener…
-
Thank you for taking the time to create and maintain the RLBench library!
I noticed that the environment hangs for me. I first tried the [RL example from README](https://github.com/stepjam/RLBench#rein…
-
Hello, thanks for making this great repo.
I want to change the default arm to the UR3.ttm model that is already provided in CoppeliaSim (V-REP).
Can I just change the code in environment.py to add the…