-
### What happened + What you expected to happen
Hello dear members of the Ray Team,
I have been using RLlib and Ray Tune to conduct reinforcement learning experiments on a plethora of differen…
-
### 🐛 Bug
HerReplayBuffer cannot handle VecEnv with more than 1 environment.
It raises an error in the `add()` function when called with the values returned from the `step()` function of such a Ve…
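For reference, a minimal reproduction sketch (assuming Stable-Baselines3 with a goal-conditioned environment that has a Dict observation space; `FetchReach-v1` and the buffer kwargs below are only placeholders):

```python
from stable_baselines3 import SAC, HerReplayBuffer
from stable_baselines3.common.env_util import make_vec_env

# n_envs=1 works; n_envs=2 triggers the error inside HerReplayBuffer.add()
# when it receives the batched values returned by VecEnv.step().
vec_env = make_vec_env("FetchReach-v1", n_envs=2)  # placeholder goal-conditioned env

model = SAC(
    "MultiInputPolicy",
    vec_env,
    replay_buffer_class=HerReplayBuffer,
    replay_buffer_kwargs=dict(
        n_sampled_goal=4,
        goal_selection_strategy="future",
    ),
    verbose=1,
)
model.learn(total_timesteps=1_000)
```

With `n_envs=1` the same setup trains normally, which points at the batched transitions coming out of `VecEnv.step()`.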
-
I would like to ask for some advice: since I want to get results for detectors trained on aitodv1, I used the config [aitodv2_detectors_rfla_kld_1x.py](https://github.com/Chasel-Tsui/mmdet-rfla/blob/main/configs/rfla/aitodv2_detectors_rfla_kld_1x.py) and changed the config to '../_base_/d…
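In case it helps to make the question concrete, here is a hedged sketch of the usual MMDetection `_base_` inheritance pattern; the dataset file name and annotation paths below are placeholders, not files that necessarily exist in mmdet-rfla:

```python
# Hypothetical sketch of MMDetection config inheritance.
# '../_base_/datasets/aitodv1_detection.py' is a placeholder; substitute
# whichever AI-TOD v1 dataset config the repository actually provides.
_base_ = [
    '../_base_/datasets/aitodv1_detection.py',   # dataset (placeholder name)
    '../_base_/schedules/schedule_1x.py',        # 1x training schedule
    '../_base_/default_runtime.py',              # logging / checkpoint defaults
]

# Keys set here override the values inherited from the _base_ files,
# e.g. pointing the data loaders at the v1 annotation files.
data = dict(
    train=dict(ann_file='data/AI-TOD/annotations/aitod_training_v1.json'),    # placeholder path
    val=dict(ann_file='data/AI-TOD/annotations/aitod_validation_v1.json'),    # placeholder path
)
```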
-
Hello, there is an error while updating the actor network. After I changed every `nn.ReLU(inplace=True)` to `nn.ReLU(inplace=False)`, the error still exists. Has anyone else met this issue?
My pa…
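For context, a self-contained sketch of the most common cause of this error when the ReLU `inplace` flag is not the culprit: the critic optimizer's `step()` modifies its weights in place before the actor loss is backpropagated through the same graph. The module names and sizes below are made up for illustration.

```python
import torch
import torch.nn as nn

# Illustrative networks; names and sizes are placeholders.
actor = nn.Linear(4, 2)
critic = nn.Linear(2, 1)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

obs = torch.randn(8, 4)
action = actor(obs)
q_value = critic(action)

critic_loss = ((q_value - 1.0) ** 2).mean()
actor_loss = -q_value.mean()

critic_loss.backward(retain_graph=True)
critic_opt.step()        # modifies the critic weights IN PLACE

# Backward through the old graph now fails, because autograd detects that
# a saved tensor (the critic weight) was changed after the forward pass:
# RuntimeError: one of the variables needed for gradient computation has
# been modified by an inplace operation ...
actor_loss.backward()

# Fix: call actor_loss.backward() before critic_opt.step(), or recompute
# the critic forward pass for the actor loss after the critic update.
```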
-
### 🚀 Feature
At present the `predict` method in the `BasePolicy` class contains quite a lot of logic that could be reused to provide sim…
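To make the reuse argument concrete, here is a hedged sketch of what `predict` currently bundles and what doing the same steps by hand looks like today (assuming a recent Stable-Baselines3 where `BasePolicy.obs_to_tensor` and `_predict` are available; the environment is a placeholder):

```python
import numpy as np
import torch
from stable_baselines3 import SAC

model = SAC("MlpPolicy", "Pendulum-v1")  # placeholder env
obs = np.zeros(model.observation_space.shape, dtype=np.float32)

# What predict() bundles today: numpy -> tensor conversion, handling of
# vectorised vs. single observations, a no-grad forward pass, and action
# post-processing (clipping / unscaling).
action, _state = model.predict(obs, deterministic=True)

# Reusing those steps by hand currently means reaching into the policy:
obs_tensor, was_vectorized = model.policy.obs_to_tensor(obs)
with torch.no_grad():
    raw_action = model.policy._predict(obs_tensor, deterministic=True)
# Note: the action post-processing that predict() performs is not replicated here.
```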
-
Thank you for this wonderful work!!!!
Unfortunately, when running this program (verify_image_on_cuda.py) on a headless remote server (Ubuntu 18.04), I got some errors. Further, I define the e…
-
### Question
How can gradients be accessed during training?
### Additional context
I currently use Stable Baselines 3 to train a SAC reinforcement learning agent on a custom environment. I w…
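A hedged sketch of one way to read gradients with a custom callback (assuming a recent Stable-Baselines3; the environment, logging key, and callback name are placeholders, and whether `_on_rollout_end` lines up with the latest gradient update depends on `train_freq`):

```python
import torch
from stable_baselines3 import SAC
from stable_baselines3.common.callbacks import BaseCallback

class GradientNormCallback(BaseCallback):
    """Log the global gradient norm of the actor after each rollout."""

    def _on_step(self) -> bool:
        return True  # continue training

    def _on_rollout_end(self) -> None:
        # .grad holds whatever the most recent backward pass left behind;
        # it is None before the first gradient update.
        grads = [p.grad.flatten() for p in self.model.policy.actor.parameters()
                 if p.grad is not None]
        if grads:
            total_norm = torch.cat(grads).norm().item()
            self.logger.record("custom/actor_grad_norm", total_norm)

# Placeholder env; swap in the custom environment.
model = SAC("MlpPolicy", "Pendulum-v1", verbose=1)
model.learn(total_timesteps=10_000, callback=GradientNormCallback())
```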
-
Hi,
I am using PyTorch version 1.6 to run this script: `bear/examples/sac.py`. The script fails with the following error:
```
Traceback (most recent call last):
File "sac.py", line 111, in
…
```
-