-
@rafaelvalle
https://github.com/NVIDIA/tacotron2/blob/master/model.py, Line 354-356
```python
cell_input = torch.cat((self.decoder_hidden, self.attention_context), -1)
self.attention_hidden, self.attent…
```
-
The plot looks like this:
![screenshot from 2018-05-14 15-26-33](https://user-images.githubusercontent.com/23501322/39983782-7a2c9ac4-578b-11e8-9ae2-9b3d9f0a668e.png)
and training is very slow.
-
Hello Tiberiu,
I'd love to test TTS-Cube, but unfortunately I don't currently have access to a good GPU (and I don't think I could train a TTS model on a laptop with a 940MX), do you have a pretrained english …
-
## Issue description
The snippet below hangs with PyTorch 0.4 but successfully finishes with PyTorch 0.3.1.
I found that removing `model = nn.DataParallel(model).cuda()` allows the snippet to pass.
…
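For context, the usual `DataParallel` wrapping pattern looks like the sketch below. This is an illustrative stand-in (the model and tensor shapes are mine), not the original snippet, which is truncated above:

```python
import torch
import torch.nn as nn

# Illustrative stand-in model; the original issue's model is not shown.
model = nn.Linear(10, 2)
inputs = torch.randn(4, 10)

if torch.cuda.is_available():
    # The line whose removal reportedly avoids the hang on PyTorch 0.4.
    model = nn.DataParallel(model).cuda()
    inputs = inputs.cuda()

out = model(inputs)
print(out.shape)
```

On a single-GPU or CPU-only machine this runs fine either way; the reported hang shows up only with the `DataParallel(...).cuda()` wrapper on a multi-GPU box.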
-
## Data feeding
- [ ] using parameters in hparam.py to reduce independent parameters (3rd)
- [ ] muLaw quantization to preserve more important information (4th)
## Model
- [x] modifying the w…
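The mu-law quantization item above refers to the standard companding transform, which compresses amplitudes so that quantization keeps more resolution near zero, where most speech energy lies. A minimal sketch (function names are mine; `mu=255` gives the usual 256 levels):

```python
import math

def mulaw_encode(x, mu=255):
    """Mu-law companding of x in [-1, 1]:
    sign(x) * log(1 + mu*|x|) / log(1 + mu)."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

def mulaw_quantize(x, mu=255):
    """Map the companded value from [-1, 1] to an integer in [0, mu]."""
    y = mulaw_encode(x, mu)
    return int((y + 1) / 2 * mu + 0.5)

print(mulaw_quantize(0.0))   # 128 (silence maps to the midpoint)
print(mulaw_quantize(1.0))   # 255
print(mulaw_quantize(-1.0))  # 0
```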
-
This is an umbrella issue to track progress for my planned TODOs. Comments and requests are welcome.
### Goal
- [x] achieve higher speech quality than conventional vocoder (WORLD, griffin-lim, e…
-
Quote from `README.md`:
> Make sure you follow the instructions in the server README before you build your image so that the server can find the model within the image.
In the [server README](ht…
-
https://github.com/r9y9/wavenet_vocoder/blob/2b557d4cbacef52bc3441fc4f2c54b1351ae9df4/train.py#L420-L433
The code above pads the batched samples to a common sequence length. When `is_mulaw_quantize…
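The padding step can be sketched as below. This is a simplified stand-in for the linked code, not the actual implementation; the helper name and `pad_value` default are mine (the linked code chooses the pad value depending on the mu-law setting):

```python
def pad_batch(seqs, pad_value=0):
    """Right-pad every sequence to the longest length in the batch.

    `pad_value` is illustrative: e.g. 0.0 for raw audio, or the
    mu-law code for silence when inputs are quantized.
    """
    max_len = max(len(s) for s in seqs)
    return [list(s) + [pad_value] * (max_len - len(s)) for s in seqs]

print(pad_batch([[1, 2, 3], [4]]))  # [[1, 2, 3], [4, 0, 0]]
```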
-
Hi, @m-toman
After forking your repository and improving it a bit, I tried running the program, but I always get an error in the following part.
https://github.com/h-meru/Tacotron-WaveR…
-
Hello,
Below are a few examples of my output as well as a few issues I've run into. Hopefully, this post can also help others.
So I've trained Tacotron using the LocationSensitiveAttention foun…