NVIDIA / flowtron

Flowtron is an auto-regressive flow-based generative network for text-to-speech synthesis with control over speech variation and style transfer.
https://nv-adlr.github.io/Flowtron
Apache License 2.0

Which torch version to use? #160

Open naveed81 opened 11 months ago

naveed81 commented 11 months ago

This is my current configuration: Ubuntu 20.04, 16 GB RAM, Python 3.8

matplotlib==3.3.2
numpy==1.19.2
inflect==4.1.0
librosa==0.6.3
scipy==1.5.2
Unidecode==1.0.22
pillow==9.5.0
tensorboardX==2.6.2.2
torch==1.8.1
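In case it helps anyone debugging a similar setup, here is a minimal sketch (using only standard torch attributes) for printing the torch build details that matter when the driver's CUDA version and the torch wheel's CUDA build disagree:

```python
import torch

def report_torch_env():
    """Print the torch build details relevant to CUDA compatibility."""
    info = {
        "torch": torch.__version__,
        "cuda_build": torch.version.cuda,        # CUDA toolkit the wheel was built against (None for CPU-only)
        "cuda_available": torch.cuda.is_available(),
    }
    for key, value in info.items():
        print(f"{key}: {value}")
    return info

report_torch_env()
```

The driver's reported CUDA version (12.2 here) only needs to be greater than or equal to the wheel's CUDA build, so a mismatch in this output is worth checking before blaming the torch version itself.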

With other torch versions, either inference does not work (no sound, or only noise) or training produces NaN losses. With torch 1.8.1 training starts cleanly, but after fine-tuning the libritts2p3k model on my custom data for about 1000 iterations I start getting NaN.

I checked other threads and learned that NaN losses can be related to the torch version. Could you please tell me the correct torch version to use?
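While waiting for a definitive answer on the version, a common workaround is to guard each training step so a single NaN/Inf loss does not poison the weights. A minimal sketch (the `model`/`optimizer` names are placeholders, not Flowtron's actual training loop):

```python
import torch

def loss_is_finite(loss: torch.Tensor) -> bool:
    """Return False if the loss has gone NaN/Inf, so the step can be skipped."""
    return bool(torch.isfinite(loss).all())

# Hypothetical usage inside a training loop:
#
# for batch in loader:
#     loss = compute_loss(model, batch)
#     if not loss_is_finite(loss):
#         optimizer.zero_grad()   # skip this step instead of stepping on NaN gradients
#         continue
#     loss.backward()
#     optimizer.step()
```

This does not fix the underlying numerical issue, but it makes it easier to see at which iteration the loss first blows up.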

Below is my nvidia-smi output:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.54.03              Driver Version: 535.54.03    CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  Tesla T4                       On  | 00000000:00:1E.0 Off |                    0 |
| N/A   80C    P0              46W /  70W |  10405MiB / 15360MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|    0   N/A  N/A      8531      C   python                                    10402MiB |
+---------------------------------------------------------------------------------------+