-
## Describe the bug
`SACLoss` has flawed checks for determining the nature of `vmap_randomness`. As a result, stochastic modules cannot be used in its constituent networks.
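The `vmap_randomness` setting at issue mirrors the `randomness` argument of `torch.vmap`. As a minimal sketch in plain PyTorch (independent of TorchRL's internals), this shows why a stochastic function fails under the default mode but works with `randomness="different"`:

```python
import torch

def noisy(x):
    # A stochastic op: vmap must be told how to batch the randomness.
    return x + torch.randn(())

xs = torch.zeros(4)

# The default randomness="error" refuses random ops inside vmap.
try:
    torch.vmap(noisy)(xs)
except RuntimeError as e:
    print("default mode rejects randomness:", type(e).__name__)

# randomness="different" gives each batch element its own random draw.
ys = torch.vmap(noisy, randomness="different")(xs)
print(ys.shape)  # torch.Size([4])
```

If a loss module vmaps its networks with the default mode, any stochastic layer inside them will hit the same `RuntimeError`, which matches the behavior described above.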
## To Reproduce
Steps to…
-
### What happened + What you expected to happen
When starting a pod on a Kubernetes cluster through Kubeflow (with a JupyterLab server running), launching a training run with RLlib does not work. As I am…
-
Since there is a version of SAC for discrete actions (https://github.com/pytorch/rl/blob/2461eb20d21b79a410e01aed71c26b77712a30d8/torchrl/objectives/sac.py#L792), I was wondering what would be the proces…
-
Weibo content highlights
-
### 🐛 Bug
I'm training a SAC policy in MuJoCo's Humanoid environment for some iterations. After training finishes, I save the model to resume training later.
However, when restarting trainin…
-
**Describe the bug**
I am trying to test a batch/transform job locally on my computer, but I am getting the following error at the end of the **transform** method.
"RuntimeError: Failed to run: ['d…
-
Error: After the first stage of training completes, selecting a model for use with train_rl.py raises an error. Running train_rl with the author's pretrained model works fine and raises no error.
-
Thank you for your work on this cool repo! It is really useful for my research :)
### 🐛 Bug
Why is `self.key` always the same after each `self._train` call? More precisely, why is this part of th…
-
Hello dear Dr. Vikash,
I hope you and everyone in your family are doing well! For conducting reinforcement learning experiments, I have been using the Ray API, and more specifically the implemented a…
-
### 🐛 Describe the bug
Reported by @edeyneka on the discussion board in [this topic](https://discuss.pytorch.org/t/adversarial-training-with-torch-compile/168933).
Ekaterina used the [DCGAN tutorial…