-
This line seems redundant, because we overwrite the value of `q` on the next line anyway?
https://github.com/ku2482/sac-discrete.pytorch/blob/master/sacd/agent/sacd.py#L118
```
with torch.no_grad():
…
-
## Problem statement
When we compare the PyTorch translation with the TensorFlow version, we see that the PyTorch version is not training:
### PyTorch results
![image](https://user-images.gi…
-
Hi!
I am trying to run the code with the default configuration, and after a while the program fails with the following error:
```shell
Traceback (most recent call last):
File "code/main.py", li…
-
### What is the problem?
*Ray version and other system information (Python version, TensorFlow version, OS):*
SAC-pytorch is not working with Atari! It seems related to a bug in PyTorch.
python …
-
Hello. I am running SAC on a Gym env created using the ``make_vec_env`` function (which also wraps the env with a Monitor wrapper)
**Question 1**
1. I want to log the mean episode reward and len…
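For what it's worth, the Monitor wrapper reports each finished episode through the step `info` dict under an `"episode"` key (with reward `"r"` and length `"l"` entries). A minimal sketch of aggregating the mean episode reward/length from those infos — the helper names here are illustrative, not the library's API:

```python
from collections import deque

# rolling buffers over the last 100 finished episodes (illustrative choice)
ep_rewards = deque(maxlen=100)
ep_lengths = deque(maxlen=100)

def record_episode_infos(infos):
    """Collect Monitor-style episode stats from a vectorized env's info dicts."""
    for info in infos:
        ep = info.get("episode")
        if ep is not None:  # the key is present only when an episode just ended
            ep_rewards.append(ep["r"])
            ep_lengths.append(ep["l"])

def mean_episode_stats():
    """Mean episode reward and length over the buffered episodes."""
    if not ep_rewards:
        return None, None
    return sum(ep_rewards) / len(ep_rewards), sum(ep_lengths) / len(ep_lengths)

# example with fabricated infos from two steps of a 2-env VecEnv
record_episode_infos([{}, {"episode": {"r": 10.0, "l": 200}}])
record_episode_infos([{"episode": {"r": 20.0, "l": 100}}, {}])
mean_r, mean_l = mean_episode_stats()  # → 15.0, 150.0
```

Calling `record_episode_infos` after every vectorized step keeps the buffers up to date without touching the env internals.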
-
### What is the problem?
It is not clear to me whether the variables of a state preprocessor, i.e. the core "forward" layers, receive gradients.
Assume I use sac (or ddpg) with a VisionNet a…
-
Hi, first of all, great implementation!
I found your GitHub repo while looking for a soft Q-learning implementation.
I was wondering how you implemented the policy update via Amortized SVGD.
It look…
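Amortized SVGD trains a sampler network to mimic SVGD particle updates. For reference, the plain (non-amortized) SVGD step being asked about can be sketched as follows — a minimal NumPy sketch with an RBF kernel and the median-heuristic bandwidth, not the repo's actual code:

```python
import numpy as np

def svgd_step(x, grad_logp, lr=0.1):
    """One plain SVGD update over a set of particles x of shape (n, d)."""
    n = x.shape[0]
    diffs = x[:, None, :] - x[None, :, :]           # (n, n, d) pairwise differences
    sq = np.sum(diffs ** 2, axis=-1)                # (n, n) squared distances
    h = np.median(sq) / np.log(n + 1) + 1e-8        # median-heuristic bandwidth
    k = np.exp(-sq / h)                             # RBF kernel matrix
    scores = grad_logp(x)                           # (n, d) grad log p at each particle
    attract = k @ scores                            # kernel-weighted scores: pull toward high density
    # sum_j grad_{x_j} k(x_j, x_i): repulsive term that keeps particles spread out
    repulse = (2.0 / h) * (np.sum(k, axis=1, keepdims=True) * x - k @ x)
    return x + lr * (attract + repulse) / n

# toy target: standard normal, so grad log p(x) = -x
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, size=(50, 1))               # particles start far from the target
for _ in range(500):
    x = svgd_step(x, lambda z: -z)
# particles should now roughly approximate N(0, 1)
```

The amortized variant would fit a network's parameters so its samples follow these per-particle updates, rather than storing the particles themselves.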
-
Hi,
Thanks for sharing this great work. I successfully ran the evaluation code for MaX-DeepLab but have issues during training. I use two P40 GPUs to sanity-check the training code with batch size 2. I…
-
Hi, I am using the zoo to optimise the hyperparameters for SAC with a customised env. The command I used was:
```shell
python3 train.py --algo sac --env FullFilterEnv-v0 --gym-packages gym_environment -n 50000…
-
**Bug Description**
I am trying to implement DetectoRS with my own custom dataset, and I am getting this error:
_AttributeError: 'NoneType' object has no attribute 'squeeze'_
**Reproduction…