-
Dear @erniejunior,
I have been trying to trace how the LSTM policy works (with ACER), and it's rather confusing. My understanding is that n_steps = the LSTM sequence length, and so each batch (n_env * n_ste…
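As a sanity check on that reading, here is a framework-free sketch of how a rollout batch of n_env * n_steps transitions would be grouped back into per-environment sequences of length n_steps (the env-major layout is an assumption for illustration, not taken from the library's code):

```python
# Illustrative only: assume the flat batch stores all n_steps transitions
# of env 0 first, then env 1, and so on (env-major ordering).
n_env, n_steps = 2, 3
flat_batch = list(range(n_env * n_steps))  # 6 transition indices

# Regroup the flat batch into one LSTM sequence per environment.
sequences = [flat_batch[e * n_steps:(e + 1) * n_steps] for e in range(n_env)]
print(sequences)  # [[0, 1, 2], [3, 4, 5]]
```

If the layout were step-major instead, the slicing above would interleave environments and break the recurrent state, which is exactly the kind of mismatch worth testing for.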
-
See the `feature/fix_lstm` branch for [a test](https://github.com/hill-a/stable-baselines/blob/feature/fix_lstm/tests/test_lstm_policy.py) which [fails](https://travis-ci.com/hill-a/stable-baselines/b…
-
Thank you for contributing such great work.
It seems that **run_bi-lstm-cnn-crf3.sh** is different from the official **run_bi-lstm-cnn-crf.sh**, and the former is missing from the branch. Could you…
-
I'm trying to use SHAP for an LSTM model with the architecture below:
```
_________________________________________________________________
Layer (type) Output Shape …
```
-
AdditiveAttention cannot output weights. `weights, context_vector = AdditiveAttention(name='attention')([state_h, lstm])` should be modified to `context_vector = AdditiveAttention(name='attention')([s…
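For context, additive (Bahdanau) attention can be written out directly so the weights are an explicit intermediate value; the numpy sketch below does that (all names here are illustrative, not the Keras API — in recent TF/Keras versions the layer itself can also return scores via `return_attention_scores=True`):

```python
import numpy as np

def additive_attention(query, keys, W1, W2, v):
    """Bahdanau-style attention: returns (weights, context).

    query: (d,) decoder state; keys: (T, d) encoder outputs.
    W1, W2: (d, d) projections; v: (d,) scoring vector.
    """
    scores = np.tanh(keys @ W1 + query @ W2) @ v   # (T,) alignment scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over timesteps
    context = weights @ keys                       # (d,) weighted sum of keys
    return weights, context

rng = np.random.default_rng(0)
d, T = 4, 5
q, K = rng.normal(size=d), rng.normal(size=(T, d))
W1, W2, v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
w, c = additive_attention(q, K, W1, W2, v)
```

The weights `w` are exactly the quantity the issue wants exposed: a length-T distribution over encoder timesteps.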
-
I'm getting this error when trying to use `LSTMModel`:
```
ValueError: in user code:
/home/jsadler/miniconda3/envs/rgcn1/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.…
```
-
## 🐛 Bug
TorchScript errors on `nn.LSTM` due to a return type mismatch.
## To Reproduce
Run the following:
```python
from __future__ import print_function
import torch
class TestModule(t…
```
-
I'm having an issue where I'm able to train a single-layer LSTM without a problem, but adding a second layer results in a ValueError:
Single layer example:
```
net = tflearn.input_data([None,1000,4]…
```
-
Hi,
I used a trial dataset of the form [[x1,y1], [x2,y2]], reshaped it to [[x1,x2,x3], [y1,y2,y3]] (sequence length about 150), and trained an LSTM AE on it. I want to use this model to implement represent…
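The reshape described above amounts to transposing a list of (x, y) pairs into one x-sequence and one y-sequence; a minimal sketch (with toy values, since the actual dataset is not shown):

```python
# [[x1,y1], [x2,y2], [x3,y3]]  ->  [[x1,x2,x3], [y1,y2,y3]]
pairs = [(1, 10), (2, 20), (3, 30)]          # toy stand-in for the dataset
xs, ys = map(list, zip(*pairs))              # transpose the pair list
print(xs, ys)  # [1, 2, 3] [10, 20, 30]
```

Each resulting sequence (length ~150 in the issue) would then be fed to the autoencoder as one training sample.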
-
## 🐛 Bug
When numpy.random.randn is used to initialize h0 and c0, the output contains NaN values; with torch.randn, training runs correctly and no NaN values appear.
## To Reproduce
Steps to reproduce …
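One concrete difference between the two initializers worth checking (an assumption about the root cause, not a confirmed diagnosis): `numpy.random.randn` returns float64, while torch tensors default to float32, so an h0/c0 built from numpy carries a different precision unless it is cast explicitly.

```python
import numpy as np

# numpy's randn defaults to double precision...
h0 = np.random.randn(1, 4, 8)
print(h0.dtype)                 # float64

# ...so cast before handing it to a float32 model.
h0_f32 = h0.astype(np.float32)
print(h0_f32.dtype)             # float32
```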