-
Here I note several issues in the current `ReplayBuffer`, including the points raised by @younik in #193.
1. **Types of objects** (#193): Currently, `ReplayBuffer` can take `Trajectories`, `Transi…
-
## Describe the bug
When yielding from a `SyncDataCollector` that uses a standard `Actor` (not a random policy) and has `init_random_frames=0`, the collector crashes.
```python
policy = Actor(
age…
-
I modified the [getting started](https://github.com/facebookresearch/agenthive/blob/vmoens-patch-1/GET_STARTED.md) example to run TorchRL with RoboHive. Here's the modified example:
```
import tor…
-
## Describe the bug
The device of `info['_weight']` doesn't match the storage device.
## To Reproduce
```python
# From documentation
from torchrl.data.replay_buffers import ReplayBuffer, La…
-
### Environment
OS: Windows 11
Python: CPython 3.10.14
TorchRL Version: 0.5.0
PyTorch Version: 2.4.1+cu124
Gym Environment: A custom subclass of `EnvBase` (from `torchrl.envs`)
The project I'm …
-
Hi,
I am wondering whether you have plans to integrate TorchRL (https://github.com/pytorch/rl) into this framework.
We have TorchRL, rlkit, elegantRL, and stable-baselines3. Which framework would be t…
-
@vmoens mentioned this recently in a talk on TorchRL pain points but I didn't see an existing issue.
We can actually support `loss.backward()` in Dynamo. There are a couple of cases (ranging from eas…
-
## Describe the bug
I have observed a considerable decrease in policy performance after the recent PyTorch 2.5.0 update. The decrease in performance replicates when training with A2C, REINFORCE and…
-
### What happened + What you expected to happen
When calling `ray.remote` on a torch `IterableDataset`, I get `ValueError: no signature found for builtin type`.
Which did not happen in previous versions of…
-
Hi:
When I used the command `python starter/ppo_locotransformer.py --config config/rl/static/locotransformer/thin-goal.json --seed 0 --log_dir log --id 0`, I encountered this error:
```
Traceback (…