-
Normally, when decoding with an LSTM, the output at time t-1 is used as the input at time t. Instead, I want the input at time t to be [output_{t-1}, f(output_{t-1})], where f is a feedforward neural net…
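The decode loop described above can be sketched as follows. This is a minimal numpy illustration, not an actual LSTM: `cell` is a stand-in for any recurrent step (an LSTM cell would slot in the same way), and `f` is a hypothetical one-hidden-layer feedforward net applied to the previous output before concatenation.

```python
import numpy as np

rng = np.random.default_rng(0)

H = 8  # hidden/output size of the recurrent cell
F = 4  # output size of the feedforward net f

# hypothetical feedforward net f with one hidden layer of width 16
W1 = rng.normal(size=(H, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, F)); b2 = np.zeros(F)

def f(y):
    return np.tanh(y @ W1 + b1) @ W2 + b2

# stand-in for an LSTM step: any cell mapping (input, state) -> (output, state)
Wx = rng.normal(size=(H + F, H))
Wh = rng.normal(size=(H, H))

def cell(x, h):
    h_new = np.tanh(x @ Wx + h @ Wh)
    return h_new, h_new

y = np.zeros(H)  # output at t-1 (start symbol)
h = np.zeros(H)  # recurrent state
outputs = []
for t in range(5):
    # input at time t is the concatenation [output_{t-1}, f(output_{t-1})]
    x_t = np.concatenate([y, f(y)])
    y, h = cell(x_t, h)
    outputs.append(y)
```

With a framework LSTM, the only change from standard decoding is building `x_t` by concatenation before the cell call, so the cell's declared input size must be `H + F`.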
-
Ofir Press, Noah A. Smith
In NMT, how far can we get without attention and without separate encoding and decoding? To answer that question, we introduce a recurrent neural translation model that do…
-
https://github.com/aisu-programming/Preprocessor-for-EEG-Signal/blob/cdb61df0a3cb45c8c833ab35dd8c058948dfceca/utils.py#L53-L58
-
I have a ternarized model where the weights are in {-1, 0, 1}. Each weight requires only 2 bits, and we have no MACs but rather only ACs, since all multiplications can be essentially converted to ad…
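The multiply-free property can be checked directly: with weights in {-1, 0, 1}, each output of a matrix-vector product is just a sum of the inputs selected by +1 entries minus those selected by -1 entries. A small numpy sketch with random ternary weights:

```python
import numpy as np

rng = np.random.default_rng(1)

# ternarized weight matrix: entries in {-1, 0, 1}, storable in 2 bits each
W = rng.integers(-1, 2, size=(4, 6))
x = rng.normal(size=6)

# standard multiply-accumulate (MAC) path
y_mac = W @ x

# accumulate-only (AC) path: add x[j] where W[i, j] == +1,
# subtract x[j] where W[i, j] == -1, skip zeros entirely
y_ac = np.array([x[row == 1].sum() - x[row == -1].sum() for row in W])

assert np.allclose(y_mac, y_ac)
```

This is why ternary inference kernels can be built from adders alone; the zeros additionally act as structured sparsity that can be skipped.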
-
Right now, we're using beam search as an off-the-shelf component.
It would be great if:
* the search embedded some kind of patch-quality knowledge: the first patch generated should have a better…
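One way the request above could look is a beam search whose ranking score combines the model's log-probability with an external quality signal, so higher-quality patches surface first. This is a hypothetical toy sketch: `step_logprobs` stands in for the generation model and `quality` for a patch-quality heuristic (e.g. a linter or test signal); neither is part of the existing component.

```python
import math

def step_logprobs(seq):
    # stand-in for the model's next-token distribution
    return {"a": math.log(0.6), "b": math.log(0.4)}

def quality(seq):
    # hypothetical patch-quality signal; here it simply rewards "b" tokens
    return 0.5 * seq.count("b")

def beam_search(beam_width=2, length=3):
    beams = [([], 0.0)]  # (sequence, model log-prob)
    for _ in range(length):
        cands = []
        for seq, lp in beams:
            for tok, tlp in step_logprobs(seq).items():
                cands.append((seq + [tok], lp + tlp))
        # rank by model score plus quality bonus, keep the top beam_width
        cands.sort(key=lambda c: c[1] + quality(c[0]), reverse=True)
        beams = cands[:beam_width]
    return beams

best_seq, _ = beam_search()[0]
```

Here the quality bonus outweighs the model's slight preference for "a", so the top-ranked hypothesis is the all-"b" sequence even though the model alone would rank it last.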
-
The idea is inspired by this paper ( https://arxiv.org/pdf/2202.00555.pdf)!
### Quick Summary:
The authors claim that they can make the autoencoder learn error-correction routines. The pape…
-
Are NoLACE and DTX supposed to be usable together? I have observed that when turning on NoLACE, as soon as the stream switches to DTX mode, the decoded 0/1-byte packets start generating some noise whi…
-
Thank you for sharing your code.
I am trying to understand a few things here; I have three questions.
As mentioned in the repo :
`For your convenience, we also provide some precompu…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe your proposed enhancement in detail.
Nilearn provides a very robust solution to univariate decoding w…