**Open** · aleph1 opened 1 year ago
It looks like it is synthesizing on the CPU instead of the GPU. You can set the device via the `--device` parameter, e.g. `--device "cuda:0"`. By default, it uses the GPU if one is available, otherwise it falls back to the CPU. I wonder why the GPU is not being recognized.
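A quick way to check whether PyTorch sees the GPU at all in the Colab runtime is shown below. This is a minimal sketch; the fallback line mirrors the default behaviour described above:

```python
import torch

# Check whether CUDA is visible to PyTorch; on a Colab GPU runtime
# this should print True, otherwise no GPU was attached.
print(torch.cuda.is_available())

# Mirror the described default: take the GPU if available, else the CPU.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
print(device)
```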
I am using tacotron-cli and waveglow-cli on Google Colab, and I am experiencing slow mel inference with waveglow-cli. Below is the code I am executing and the resulting log, with the inference duration in bold. Given that generating the mel takes 0 seconds, I am wondering whether the mel inference is occurring on the CPU instead of the GPU, and whether this is a bug. If I run a different Colab notebook where I install waveglow directly, I do not experience this issue. Any input would be appreciated.
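One way to confirm where inference actually runs is to inspect the device of the loaded model's parameters. The sketch below uses a hypothetical stand-in module, not the actual waveglow-cli code:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the WaveGlow model (for illustration only).
model = nn.Linear(80, 80)

# Move the model to the GPU if one is available, else leave it on the CPU.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
model.to(device)

# Every parameter reports the device it lives on; if this prints "cpu"
# on a GPU runtime, synthesis is silently running on the CPU.
print(next(model.parameters()).device)
```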
And this is the log.
```
(DEBUG) Loading checkpoint...
(DEBUG) Loading text.
Inferring...
Checkpoint learning rate was: 1e-05
Using random seed: 7827.
(DEBUG) Speaker: Linda Johnson (sdp)
Inference:   0%|          | 0/1 [00:00<?, ? lines/s]
Line 1: Skipped inference because line is already synthesized!
Inference:   0%|          | 0/1 [00:00<?, ? lines/s]
Done. Total spectrogram duration: 0.00s
Written output to: '/content/example/text'
Everything was successful!
Written log to: /tmp/tacotron-cli.log
Using random seed: 4663.
Loading model '/content/example/checkpoint-waveglow.pt'...
Loaded model at iteration 580000.
/usr/local/lib/python3.10/dist-packages/waveglow/model.py:36: UserWarning: torch.qr is deprecated in favor of torch.linalg.qr and will be removed in a future PyTorch release. The boolean parameter 'some' has been replaced with a string parameter 'mode'. Q, R = torch.qr(A, some) should be replaced with Q, R = torch.linalg.qr(A, 'reduced' if some else 'complete') (Triggered internally at ../aten/src/ATen/native/BatchLinearAlgebra.cpp:2425.)
  W = torch.qr(torch.FloatTensor(c, c).normal_())[0]
Inferring:   0%|          | 0/1 [00:00<?, ? mel(s)/s]
Loading mel from /content/example/text/1-1.npy ...
(DEBUG) Inferring mel...
(DEBUG) Saving /content/example/text/1-1.npy.wav ...
Inferring: 100%|█████████████████████████████████████████████████| 1/1 [01:07<00:00, 67.64s/ mel(s)]
Done.
Written output to: /content/example/text
Everything was successful!
Written log to: /tmp/waveglow-cli.log
```
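The `UserWarning` in the log is unrelated to the slowdown, but it is easy to silence: the deprecated `torch.qr` call in `waveglow/model.py` can be replaced with `torch.linalg.qr` exactly as the warning suggests (`torch.qr`'s default `some=True` corresponds to `mode='reduced'`). A minimal sketch of the equivalent call:

```python
import torch

# Same shape of matrix as in waveglow/model.py (c x c, normally distributed).
c = 8
A = torch.FloatTensor(c, c).normal_()

# Old (deprecated): W = torch.qr(A)[0]
# New equivalent:   torch.qr's default some=True maps to mode='reduced'.
Q, R = torch.linalg.qr(A, mode="reduced")
W = Q

# QR decomposition property: Q @ R reconstructs A.
print(torch.allclose(Q @ R, A, atol=1e-5))  # → True
```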