-
Description
We need to define what analytics we want to capture from the beginning and set up an analytics tool to do so. We may get some logging and analytics out of the box with our hosting solutio…
-
-
In #126 it is mentioned that most of the ability to clone voices lies in the encoder. @mbdash is contributing a GPU to help train a better encoder model.
* Increase the number of hidden layers to 7…
ghost updated 2 years ago
-
-
Hello,
I think this is linked to https://github.com/custom-components/ble_monitor/issues/167, but since I upgraded to Home Assistant OS 5.12 yesterday, I have had trouble with my Bluetooth temperature …
-
Hi, I have some questions about this work.
In the function gen_from_file of gen_wavernn.py, we need to pass in a speaker_embedding extracted from a wav file. But in practice we only get a mel-spectrogram for the vocoder in T…
-
I am trying to convert FastSpeech2 to ONNX with `tf2onnx`, and when I run the model I get an error from an Unsqueeze layer. Does anyone have insight into this?
Convert FastSpeech2 Keras -> Tensorflow…
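As a hedged sketch of one common conversion path (the directory names, output filename, and opset number are assumptions, not details from this issue), exporting the Keras model to a TensorFlow SavedModel first and then running the `tf2onnx` CLI looks like:

```shell
# Hypothetical paths; requires `pip install tf2onnx`.
# Assumes the Keras model has already been exported as a SavedModel.
python -m tf2onnx.convert \
    --saved-model ./fastspeech2_saved_model \
    --output fastspeech2.onnx \
    --opset 13
```

Unsqueeze-related failures sometimes depend on the opset chosen, so trying a different `--opset` value is a cheap first experiment.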
xDuck updated 3 years ago
-
This issue arises from a discussion in MPEG about 14496-30 concerning the correct semantics of processing sequences of documents, each in a wrapper; the draft text explains it in…
-
Hi, himajin2045,
Can you please share the number of training steps for each net? The value in hparams.py, 1e12, seems far more than enough. Thank you
-
### The problem
Hi,
My Envisalink integration is unreliable: *most* of the time when I tell the system to arm or disarm, it sits there and does NOTHING. I have used the alarm keypad more times tha…