Kyubyong / expressive_tacotron

Tensorflow Implementation of Expressive Tacotron

no checkpoints after running train.py #8

Open aishweta opened 5 years ago

aishweta commented 5 years ago

@Kyubyong I'm able to run train.py successfully, but there are no checkpoints in logdir, so I'm unable to run synthesize.py.
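One quick sanity check, assuming the stock TensorFlow `Saver` is in use: it writes a `checkpoint` index file in `logdir` alongside the model files, so if that file is absent, no checkpoint was ever saved. A minimal sketch (`has_checkpoint` is a hypothetical helper, not part of this repo):

```python
import os

def has_checkpoint(logdir):
    # TensorFlow's Saver writes a 'checkpoint' index file next to the
    # model_gs_* files; if it is missing, no checkpoint was ever written.
    if not os.path.isdir(logdir):
        return False
    return 'checkpoint' in os.listdir(logdir)

print(has_checkpoint('logdir'))
```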

I have changed parameters such as:

```python
lr = 0.9  # Initial learning rate.
logdir = "logdir"
sampledir = 'samples'
batch_size = 16
```

and I also made some changes in train.py:

```python
if __name__ == '__main__':
    g = Graph(); print("Training Graph loaded")

    sv = tf.train.Supervisor(logdir=hp.logdir, save_summaries_secs=60, save_model_secs=0)
    with sv.managed_session() as sess:

        if len(sys.argv) == 2:
            sv.saver.restore(sess, sys.argv[1])
            print("Model restored.")

        # while 1:
        for _ in tqdm(range(g.num_batch), total=g.num_batch, ncols=70, leave=False, unit='b'):
            _, gs = sess.run([g.train_op, g.global_step])

            # Write checkpoint files
            if gs % 100 == 0:
                sv.saver.save(sess, hp.logdir + '/model_gs_{}k'.format(gs//100))

                # plot the first alignment for logging
                al = sess.run(g.alignments)
                plot_alignment(al[0], gs)

            # if gs > hp.num_iterations:
            #     break

    print("Done")
```
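One likely cause, given the settings above: with `save_model_secs=0` the Supervisor never saves on its own, so checkpoints come only from the manual `sv.saver.save(...)` call, which fires only when the global step is a multiple of 100. If the run stops before step 100, logdir stays empty. A pure-Python sketch of the naming logic (`checkpoint_path` is a hypothetical helper mirroring the save call above):

```python
logdir = "logdir"  # same value as hp.logdir in the changed hyperparameters

def checkpoint_path(gs):
    # Mirrors sv.saver.save(sess, hp.logdir + '/model_gs_{}k'.format(gs//100)):
    # a file with this prefix appears only when gs % 100 == 0.
    return logdir + '/model_gs_{}k'.format(gs // 100)

print(checkpoint_path(100))  # -> logdir/model_gs_1k
```

Note that the `{}k` suffix becomes misleading with a 100-step save interval: step 100 is saved under the name `model_gs_1k`, which suggests 1000 steps.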