syang1993 / gst-tacotron

A tensorflow implementation of the "Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis"

Training stops for many seconds to create a new queue of data #50

Open Adibian opened 2 years ago

Adibian commented 2 years ago

Hi. When I start training, the time per step is good and the GPU is being used, but after 32 steps (with `_batches_per_group=32` in the datafeeder) GPU utilization drops to 0, and training only resumes after many seconds, once the queue of data is ready again. I looked at datafeeder.py and it uses threading to fill the data queue, so why does training stop? How can I increase GPU utilization? I increased `_batches_per_group` in datafeeder.py, but that only makes training stop for longer. The following picture shows that with `_batches_per_group=128`, training stops for 89 seconds!

[Screenshot: training log showing the long pause between groups of batches]
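For context, the stall happens because the feeder prepares a whole group of `_batches_per_group` batches before training continues, so the GPU sits idle during that preparation. A minimal sketch of the alternative pattern, a background producer thread that keeps a small bounded queue topped up so batch preparation overlaps with training, is below. This is not the repo's datafeeder code; `load_next_batch` and `train_step` are hypothetical stand-ins for the project's own batch assembly and session.run call.

```python
import threading
import queue
import time

# A producer thread continuously fills a small bounded queue of batches.
# The training loop consumes from it, so CPU-side batch preparation and
# GPU training steps overlap instead of alternating in large groups.

batch_queue = queue.Queue(maxsize=8)     # small buffer; producer blocks when full


def load_next_batch():
    """Hypothetical stand-in for CPU-side work: loading audio/mels, padding, bucketing."""
    time.sleep(0.1)
    return object()


def train_step(batch):
    """Hypothetical stand-in for the GPU training step (e.g. a session.run call)."""
    time.sleep(0.05)


def producer():
    while True:
        # put() blocks only when the buffer is already full, so the producer
        # stays at most `maxsize` batches ahead of the consumer.
        batch_queue.put(load_next_batch())


threading.Thread(target=producer, daemon=True).start()

for step in range(100):
    batch = batch_queue.get()            # returns immediately once the buffer is warm
    train_step(batch)                    # producer keeps refilling while this runs
```

With this shape the GPU only waits at the very start (while the buffer warms up) or when batch preparation is genuinely slower than a training step, rather than pausing after every group of `_batches_per_group` batches.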