Rayhane-mamah / Tacotron-2

DeepMind's Tacotron-2 Tensorflow implementation
MIT License

A problem occurs when training the Tacotron-2 model #423

Open breadbread1984 opened 5 years ago

breadbread1984 commented 5 years ago

When the WaveNet starts to train after Tacotron, Python raises this error message:

AttributeError: type object 'Wrapper' has no attribute '_track_checkpointable'

I printed all members of tf.keras.layers.Wrapper with

print(tf.keras.layers.Wrapper.__dict__)

and found no such member in Wrapper.

I am using TF 1.14. Any idea on how to solve the problem? Thanks.
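(A likely cause, for anyone landing here: around TF 1.14 the checkpoint-tracking base class renamed _track_checkpointable to _track_trackable, which would explain the missing attribute. A minimal, untested workaround sketch, assuming that rename is indeed the culprit, is to alias the old name before building the WaveNet model:)

```python
import tensorflow as tf

# Assumption: the attribute disappeared because TensorFlow renamed
# _track_checkpointable to _track_trackable in the 1.14 "trackable" refactor.
# This shim aliases the old name so code written against TF <= 1.13 still runs.
Wrapper = tf.keras.layers.Wrapper
if not hasattr(Wrapper, '_track_checkpointable') and hasattr(Wrapper, '_track_trackable'):
    Wrapper._track_checkpointable = Wrapper._track_trackable
```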

Ananas120 commented 5 years ago

I have just started training WaveNet and hit the same issue. I just commented out the two lines and there is no other issue (I think) :')

breadbread1984 commented 5 years ago

You will get stuck when doing synthesis with the Tacotron-2 model: the trained model can't be loaded properly.

Ananas120 commented 5 years ago

Ah... another solution? ^^'

breadbread1984 commented 5 years ago

Still waiting for one.

Ananas120 commented 5 years ago

Did you try to solve it yourself? I changed one thing and added two things (not sure they are useful), but it seems to work now. It loads the model during training and during synthesis, but I'm not sure it loads all the variables, so I will train for 5k steps and see whether inference is good (then my solution works) or not (either not enough training steps, or my solution doesn't work).

breadbread1984 commented 5 years ago

I checked the tf.keras documentation and found no example of how to use the Wrapper layer.

Ananas120 commented 5 years ago

Is the "wavenet tf-2.0" on your GitHub a working implementation, or is it unfinished? And I didn't change that part; I just changed the saver. In the Tacotron model he uses "saver = tf.train.Saver()", but in WaveNet he uses a shadow saver, "sh_saver = tf.train.Saver(shadow_variables)". I don't understand why, because the shadow variables are not the same when saving (training) and restoring (inference), so I changed it to "sh_saver = tf.train.Saver()" like in Tacotron. It seems good, but I haven't tested it yet (the model is at 2.5k training steps now). Inference is really slow, though: 250 seconds for less than 2 seconds of audio (2 words), and the evaluation at step 2k was really bad, so I'm not sure 5k steps is enough to test my method ^^'
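(A minimal sketch of the saver change described above; the exact variable names in wavenet_vocoder/train.py may differ, and the ExponentialMovingAverage part is only an illustration of what a "shadow" saver typically looks like:)

```python
import tensorflow as tf

# What a "shadow" saver typically looks like: it only saves/restores the
# exponential-moving-average copies of the weights (illustrative only):
#   ema = tf.train.ExponentialMovingAverage(decay=0.9999)
#   sh_saver = tf.train.Saver(ema.variables_to_restore())

# Change described in this comment: save/restore every variable instead,
# the same way the Tacotron side builds its saver.
sh_saver = tf.train.Saver(max_to_keep=5)
```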

breadbread1984 commented 5 years ago

Thanks for sharing. I will try it.

Ananas120 commented 5 years ago

After 5k training steps, no result, but the evaluation at step 4k was bad too, so... I will train it for 15k steps and see ^^'

NOTE: I train the models on Colab, and after training I must restart the runtime before inference (if I don't, I get a NotFoundError, but not only for WaveNet, for Tacotron-only too, so I think it's normal). Good luck, and can you tell me whether the method works? (If you can't train it, I added two other things, but I'm not sure they are useful...)
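(A hedged guess about the NotFoundError on Colab: if training and synthesis run in the same Python process, the old training graph is still the default graph when the synthesis graph is built. Resetting it first might avoid the runtime restart; this is untested:)

```python
import tensorflow as tf

# Clear any graph left over from the training run before building the
# synthesis graph in the same process (untested guess based on the symptom).
tf.reset_default_graph()
# ...then rebuild the model for inference and restore the checkpoint as usual.
```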

Arafat4341 commented 4 years ago

Hello everyone! Can someone please tell me why I am getting this IndexError?

Traceback (most recent call last):
  File "train.py", line 138, in <module>
    main()
  File "train.py", line 132, in main
    train(args, log_dir, hparams)
  File "train.py", line 52, in train
    checkpoint = tacotron_train(args, log_dir, hparams)
  File "/content/drive/My Drive/Tacotron-2/tacotron/train.py", line 399, in tacotron_train
    return train(log_dir, args, hparams)
  File "/content/drive/My Drive/Tacotron-2/tacotron/train.py", line 152, in train
    feeder = Feeder(coord, input_path, hparams)
  File "/content/drive/My Drive/Tacotron-2/tacotron/feeder.py", line 33, in __init__
    hours = sum([int(x[4]) for x in self._metadata]) * frame_shift_ms / (3600)
  File "/content/drive/My Drive/Tacotron-2/tacotron/feeder.py", line 33, in <listcomp>
    hours = sum([int(x[4]) for x in self._metadata]) * frame_shift_ms / (3600)
IndexError: list index out of range

I don't understand why the list index would be out of range. I checked the size of self._metadata. I don't know what's going on! Kindly help if you find the reason. Thanks in advance!
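(A common cause of this particular IndexError: the feeder splits every line of the training metadata on '|' and reads field 4, so any malformed or short line in train.txt will crash it. A quick check, assuming the default training_data/train.txt path; adjust it to your own setup:)

```python
# Print any metadata line that has fewer fields than the feeder expects.
# The path below is an assumption; point it at your own train.txt.
metadata_path = 'training_data/train.txt'
with open(metadata_path, encoding='utf-8') as f:
    for lineno, line in enumerate(f, 1):
        fields = line.strip().split('|')
        if len(fields) < 5:
            print('line {}: only {} fields -> {!r}'.format(lineno, len(fields), line))
```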