-
I am trying to build a ResNet model with a Convolutional LSTM between successive layers. I tile my input tensor `timesteps` times and reshape it to `[n, t, h, w, c]` to pass as the i…
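The tiling step described above can be sketched framework-agnostically in NumPy (shape values here are made up for illustration; the real tensors come from a ResNet block):

```python
import numpy as np

timesteps = 4                        # hypothetical number of repeats
x = np.random.rand(2, 32, 32, 64)    # [n, h, w, c] feature map, channels-last
# insert a time axis and tile it `timesteps` times -> [n, t, h, w, c]
x_seq = np.tile(x[:, None], (1, timesteps, 1, 1, 1))
print(x_seq.shape)  # (2, 4, 32, 32, 64)
```

Each time slice is an identical copy of the original feature map, which is what a ConvLSTM layer expecting a `[n, t, h, w, c]` sequence would then consume.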
-
Hi, I am getting the above error when executing the following line in chatbot.ipynb:
model = Chatbot(config.LEARNING_RATE,
config.BATCH_SIZE,
config.ENCODING_EMBED…
-
### Feature
I kindly request the addition of support for the Kronecker-Factored Approximate Curvature (KFAC) optimization technique in LSTM and GRU layers within the existing KFAC Optimizer. Curren…
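For readers unfamiliar with the technique: KFAC approximates a layer's Fisher information matrix as a Kronecker product of two small covariance factors, one over the layer's inputs and one over its backpropagated output gradients. A minimal NumPy sketch for a single linear layer (all names and sizes are illustrative, not from any particular KFAC implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 32, 5, 3
a = rng.standard_normal((batch, d_in))   # layer input activations
g = rng.standard_normal((batch, d_out))  # backpropagated output gradients

A = a.T @ a / batch   # input covariance factor, (d_in, d_in)
G = g.T @ g / batch   # gradient covariance factor, (d_out, d_out)
F_kfac = np.kron(A, G)  # Kronecker-factored Fisher approximation, (d_in*d_out, d_in*d_out)
print(F_kfac.shape)  # (15, 15)
```

Extending this to LSTM/GRU layers is the hard part of the request: the recurrent weight matrices are reused at every timestep, so the factors must aggregate statistics across time.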
-
```julia
function eval_model(x)
    out = simple_rnn.(x)[end]   # apply the RNN to each timestep; keep the final output
    Flux.reset!(simple_rnn)     # clear the hidden state before the next sequence
    out
end
loss(x, y) = abs(sum(eval_model(x) .- y))
ps = Flux.params(simple_rnn)
opt = Flux.ADAM()
printl…
-
Thanks for these tutorials.
They are clear and easy to follow.
As I am trying to get into building my own models with them, I attempted the exercises in the char-rnn-classificat…
-
### 🐛 Describe the bug
I am currently experimenting with `torch.compile` on a custom, pure-Python LiGRU implementation, which is a variant of the typical GRU architecture. This is not a…
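For context, the LiGRU recurrence drops the GRU's reset gate and uses a ReLU candidate state. A rough NumPy sketch of one step (weight names and sizes are made up; the actual report concerns a PyTorch implementation under `torch.compile`):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ligru_step(x_t, h_prev, Wz, Uz, Wh, Uh):
    """One LiGRU step: update gate only (no reset gate), ReLU candidate state."""
    z = sigmoid(x_t @ Wz + h_prev @ Uz)               # update gate
    h_cand = np.maximum(0.0, x_t @ Wh + h_prev @ Uh)  # ReLU candidate
    return z * h_prev + (1.0 - z) * h_cand            # gated interpolation

rng = np.random.default_rng(0)
d_in, d_h = 4, 3
Wz, Uz, Wh, Uh = (rng.standard_normal(s) * 0.1
                  for s in [(d_in, d_h), (d_h, d_h), (d_in, d_h), (d_h, d_h)])
h = ligru_step(rng.standard_normal((2, d_in)), np.zeros((2, d_h)), Wz, Uz, Wh, Uh)
print(h.shape)  # (2, 3)
```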
-
Dear friend,
I want to ask you a question.
In the model, why is this necessary?
assert len(num_hidden_units) == num_rnn_layer
And why can't we change the number of layers with this exc…
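Presumably the assertion enforces that the configuration lists exactly one hidden size per RNN layer; a hypothetical example of a configuration that satisfies it (the values are made up, not from the model in question):

```python
# hypothetical configuration: one hidden size per stacked RNN layer
num_rnn_layer = 3
num_hidden_units = [128, 64, 32]  # must contain exactly one entry per layer
assert len(num_hidden_units) == num_rnn_layer  # passes: 3 sizes, 3 layers
```

Changing `num_rnn_layer` without also resizing `num_hidden_units` would then trip the assertion, which may be what the question is about.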
-
As we know, the input and output tensors support `batch_first` for RNN training, but the hidden state's shape is forced to be `(num_layers * num_directions, batch, hidden_size)`, e…
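The asymmetry can be seen directly (a minimal sketch assuming PyTorch; sizes are arbitrary):

```python
import torch

rnn = torch.nn.GRU(input_size=8, hidden_size=16, num_layers=2, batch_first=True)
x = torch.randn(4, 5, 8)  # (batch, seq, feature), honoring batch_first=True
out, h = rnn(x)
print(out.shape)  # torch.Size([4, 5, 16]) -> batch-first, as requested
print(h.shape)    # torch.Size([2, 4, 16]) -> (num_layers * num_directions, batch, hidden_size)
```

The output respects `batch_first`, while the hidden state keeps the layer-major layout regardless, so code that indexes `h` must transpose or index it explicitly.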
-
...\lib\site-packages\torch\nn\modules\rnn.py:62: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.…
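This warning fires because the RNN `dropout` option is applied *between* stacked recurrent layers, so it has no effect when there is only one layer. A minimal reproduction and fix sketch (assuming PyTorch; sizes are arbitrary):

```python
import torch

# triggers the UserWarning: inter-layer dropout, but only one layer exists
rnn_warn = torch.nn.LSTM(input_size=8, hidden_size=16, num_layers=1, dropout=0.5)

# either stack at least two layers so dropout has somewhere to apply...
rnn_ok = torch.nn.LSTM(input_size=8, hidden_size=16, num_layers=2, dropout=0.5)
# ...or set dropout=0 and apply torch.nn.Dropout to the output manually
```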
-
I cannot find a way to combine the two examples from the documentation.
E.g., I want to decode a 13-dimensional state vector `s0` into a sequence of length 11. Each element of the sequence is a sof…
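One common way to combine the two patterns is to feed `s0` at every timestep of a recurrent decoder and apply a softmax head per step. A sketch assuming PyTorch, with a made-up 27-symbol output alphabet since the target size is truncated above:

```python
import torch

state_dim, seq_len, n_symbols = 13, 11, 27  # n_symbols is a placeholder assumption

class Decoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = torch.nn.GRU(input_size=state_dim, hidden_size=64, batch_first=True)
        self.head = torch.nn.Linear(64, n_symbols)

    def forward(self, s0):                         # s0: (batch, state_dim)
        x = s0.unsqueeze(1).repeat(1, seq_len, 1)  # repeat s0 at every timestep
        out, _ = self.rnn(x)                       # (batch, seq_len, 64)
        return torch.softmax(self.head(out), dim=-1)

probs = Decoder()(torch.randn(2, state_dim))
print(probs.shape)  # torch.Size([2, 11, 27])
```

An alternative is to use `s0` (projected to the hidden size) as the decoder's initial hidden state instead of repeating it as input; both are standard decoder setups.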