-
Hello,
First, I want to thank you for the great framework.
Second, the current implementation of the transformer model is non-autoregressive, correct?
If I want to switch to autoregressive trans…
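To make the distinction concrete, here is a minimal, self-contained sketch (not the framework's actual code) contrasting the two decoding styles; `fake_decoder_step` is a hypothetical stand-in for a real decoder call.

```python
def fake_decoder_step(prefix):
    # Hypothetical stand-in for a decoder call: returns the next
    # token id given the tokens generated so far.
    return len(prefix) % 5

def autoregressive_decode(max_len):
    # Autoregressive: tokens are produced one at a time, and each step
    # conditions on the previously generated tokens, so decoding is
    # inherently sequential.
    out = []
    for _ in range(max_len):
        out.append(fake_decoder_step(out))
    return out

def non_autoregressive_decode(max_len):
    # Non-autoregressive: every position is predicted independently
    # (in a real model, in parallel from the encoder output alone),
    # with no dependence on other generated tokens.
    return [fake_decoder_step([0] * i) for i in range(max_len)]
```

In a real transformer the autoregressive loop feeds generated tokens back through the decoder, while the non-autoregressive variant emits all positions in a single forward pass; the sketch only mirrors that control flow.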
-
Hi,
I want to train a model with a transformer encoder and an SRU decoder. I notice that I am able to set --transformer-decoder-autoreg to rnn. However, how can I set the decoder type to SRU? The decod…
-
More information can be found at https://github.com/LSSTDESC/obs_strat, and the data files are at NERSC in this directory: /global/project/projectdirs/lsst/survey_sims/
Here are the database files suit…
-
Hi,
just tried running the LJ dataset on characters (not phonemes, as I would like a comparison to an existing model I have) with r=1 and your BatchNorm version (latest dev-tacotron2 branch …
-
**Yiming Wang**, Fei Tian, **Dongjian He**, Tao Qin, ChengXiang Zhai, Tie-Yan Liu. 2019. Non-Autoregressive Machine Translation with Auxiliary Regularization. In Proceedings of AAAI 2019.
The first…
-
# Next paper candidates
Let's propose papers to study next! All papers mentioned in the comments of this issue will be listed in the next vote.
## Last session runner-up(s)
- [Import2vec: learni…
-
I found that the transformer usually gets a BLEU score of around 27-28 on WMT14 EN-DE. However, in the paper, the AR model only gets around 24? I am curious what the AR model is. Thanks!
-
I am not following how to use the pre-trained model to speak custom text. For example, how could I have the pre-trained model say "hello world"?
-
For better support, please use the template below to submit your issue. When your issue gets resolved, please remember to close it.
- **Tell us about your operating system (Linux/macOS/Windows), Pyt…
-
Hello everyone.
I updated TTS (master branch) today and ran into multiple issues.
The first one: I had to add
_"gradual_training": [[0, 7, 32], [10000, 5, 32], [50000, 3, 32], [130000, 2, 16]…
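For context, a `gradual_training` schedule like the one above is typically interpreted with each entry as `[start_step, r, batch_size]` (this interpretation is an assumption here), where the entry with the largest start step not exceeding the current training step applies. A minimal sketch of that lookup, using only the entries visible above:

```python
# Assumed entry format: [start_step, r (reduction factor), batch_size].
# Only the schedule entries quoted above are used; the full config may
# contain more.
SCHEDULE = [[0, 7, 32], [10000, 5, 32], [50000, 3, 32], [130000, 2, 16]]

def current_params(step, schedule=SCHEDULE):
    # Walk the schedule in order and keep the last entry whose
    # start_step has been reached.
    r, batch_size = schedule[0][1], schedule[0][2]
    for start, new_r, new_bs in schedule:
        if step >= start:
            r, batch_size = new_r, new_bs
    return r, batch_size
```

So at step 0 training would use r=7 with batch size 32, and after step 130000 it would drop to r=2 with batch size 16, gradually tightening the decoder's reduction factor as training progresses.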