-
I got a negative loss when training WaveNet on the THCHS-30 dataset.
```
Wavenet Train
###########################################################
Checkpoint_path: i:/data/voice/logs-Tacotron-2\…
```
-
**Objective**: Create exercises where students have to complete miniGPT code
- [ ] Decide on the exercise structure. Normally we have:
1. warm-up: some easy example to complete to gain knowledge abo…
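One possible shape for such a warm-up (a purely illustrative sketch, not an existing miniGPT exercise; the function name and tensor shapes are assumptions): hand students scaled dot-product attention with the body removed, plus a tiny shape check they can run locally. The reference solution is shown inline.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """Warm-up exercise: students fill in the body below.

    q, k, v: (batch, heads, seq, head_dim)
    mask: optional boolean tensor broadcastable to (batch, heads, seq, seq)
    """
    # --- reference solution (removed in the student hand-out) ---
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(~mask, float("-inf"))
    weights = torch.softmax(scores, dim=-1)
    return weights @ v

# Shape check students can run to validate their solution.
q = k = v = torch.randn(2, 4, 8, 16)
assert scaled_dot_product_attention(q, k, v).shape == (2, 4, 8, 16)
```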
-
It would be convenient to allow the encoder [output_size](https://github.com/CUNY-CL/yoyodyne/blob/master/yoyodyne/models/modules/lstm.py#L99) to be different from the TransformerDecoder embedding siz…
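A minimal sketch of the kind of bridging this would need (illustrative only; the class and argument names here are not yoyodyne's API): a learned linear projection applied to the encoder states so they match the TransformerDecoder embedding size before cross-attention.

```python
import torch
from torch import nn

class EncoderToDecoderBridge(nn.Module):
    """Hypothetical helper: projects encoder states to the decoder embedding size."""

    def __init__(self, encoder_output_size: int, decoder_embedding_size: int):
        super().__init__()
        # No-op when the sizes already match; learned projection otherwise.
        if encoder_output_size == decoder_embedding_size:
            self.projection = nn.Identity()
        else:
            self.projection = nn.Linear(encoder_output_size, decoder_embedding_size)

    def forward(self, encoder_states: torch.Tensor) -> torch.Tensor:
        # encoder_states: (batch, source_length, encoder_output_size)
        return self.projection(encoder_states)

# Example: a bidirectional LSTM encoder (output size 2 * hidden_size = 1024)
# feeding a TransformerDecoder that uses 512-dimensional embeddings.
bridge = EncoderToDecoderBridge(1024, 512)
memory = bridge(torch.randn(8, 37, 1024))  # -> (8, 37, 512)
```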
-
Hello @A-Jacobson.
Great work with your implementation and, more importantly, with your clear presentation of the model in your README (100% better than the one presented in the paper x) ).
So I …
-
Here I want to collect some things to do to speed up eager-mode execution. Most of this did not really matter in graph-mode execution, where those extra things are only executed once. These are extra …
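As one concrete illustration of the kind of overhead meant here (a hedged sketch, not code from this repository; the positional-table helper is hypothetical): setup work that graph mode pays only once at graph-construction time runs on every call in eager mode unless it is hoisted out or cached.

```python
import functools
import numpy as np

@functools.lru_cache(maxsize=None)
def positional_table(max_len: int, dim: int):
    # Expensive one-time setup: compute once per (max_len, dim) and reuse it,
    # instead of rebuilding it on every eager-mode forward call.
    positions = np.arange(max_len)[:, None]
    dims = np.arange(dim)[None, :]
    angles = positions / np.power(10000.0, (2 * (dims // 2)) / dim)
    return np.where(dims % 2 == 0, np.sin(angles), np.cos(angles))

def forward(x):
    # Cached after the first call; graph mode got this for free because the
    # table was built once while constructing the graph.
    return x + positional_table(x.shape[0], x.shape[1])

y = forward(np.zeros((50, 16)))
```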
-
I'm getting this error when I train Tacotron.
I already set the Tacotron batch_size to 4, but it still happens.
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more info…
-
Could you please upload an example script?
-
Hi,
Is there any way to accelerate the code?
I am training on data with a vocabulary size of only 300 and 30k training instances with a maximum length of 50, but it takes almost 1 hour to fini…
-
References:
* [paper: Transformer-based Encoder-Decoder](https://huggingface.co/blog/encoder-decoder), a blog post that explains in detail the differences between the RNN-based Encoder-Decoder and the Transformer-based Encoder-Decoder.
* [tutorial: harvard-code …
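As a compact companion to the first reference (a sketch of the general contrast, not code from that blog post): an RNN encoder compresses the source into a final hidden state that initialises the decoder, while a Transformer encoder exposes one contextual vector per source position for the decoder to cross-attend over.

```python
import torch
from torch import nn

src = torch.randn(16, 2, 32)  # (source_len, batch, model_dim)
tgt = torch.randn(10, 2, 32)  # (target_len, batch, model_dim)

# RNN-based encoder-decoder: the final hidden state initialises the decoder.
enc_rnn, dec_rnn = nn.GRU(32, 32), nn.GRU(32, 32)
_, h_n = enc_rnn(src)
rnn_out, _ = dec_rnn(tgt, h_n)

# Transformer-based encoder-decoder: the decoder attends over all encoder states.
enc_tf = nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model=32, nhead=4), num_layers=2)
dec_tf = nn.TransformerDecoder(nn.TransformerDecoderLayer(d_model=32, nhead=4), num_layers=2)
tf_out = dec_tf(tgt, enc_tf(src))

print(rnn_out.shape, tf_out.shape)  # both (10, 2, 32)
```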
-
Hi,
I have a simple `LSTM` architecture as shown below:
``` lua
nn.Sequential {
[input -> (1) -> output]
(1): nn.Sequential {
[input -> (1) -> output]
(1): nn.Sequencer @ nn.Recursor @ …