Open Hjwjames opened 6 years ago
OK... I solved that problem, but now the training stops at 0b [00:00 , ?b/2].
Don't you have the TIMIT dataset?
You can get the trained weights here so you don't have to worry about Train1.
```
Traceback (most recent call last):
  File "", line 1, in <module>
  File "G:\anaconda\lib\site-packages\spyder\utils\site\sitecustomize.py", line 880, in runfile
    execfile(filename, namespace)
  File "G:\anaconda\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
  ...
    mfcc, ppg = get_batch(model.mode, model.batch_size)
  File "G:\code\python\myfile\ASR\deep-voice-conversion-master\data_load.py", line 205, in get_batch
    mfcc, ppg = list(map(_get_zero_padded, list(zip([get_mfccs_and_phones(w, hp_default.sr) for w in target_wavs]))))
  File "G:\code\python\myfile\ASR\deep-voice-conversion-master\data_load.py", line 205, in <listcomp>
  ...
    raise NoBackendError()
audioread.NoBackendError
```

It raises `NoBackendError`... @JiaYK @VictoriaBentell Do you know anything about it? Any comments would be much appreciated. Thanks for your answer... I hadn't put my TIMIT dataset in last time.
According to this thread you might want to try installing ffmpeg.
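If installing ffmpeg doesn't fix it, it may simply not be on your PATH; audioread falls back to ffmpeg as a decoding backend, so without it `NoBackendError` is expected on many files. A quick check (the function name here is my own, not from the repo):

```python
import shutil

def has_ffmpeg() -> bool:
    """Return True if an ffmpeg executable is visible on PATH.

    If this returns False, audioread cannot use ffmpeg as a backend
    and NoBackendError is likely when loading audio.
    """
    return shutil.which("ffmpeg") is not None

print(has_ffmpeg())
```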
It's better to work under Linux; on Windows, tensorflow-gpu does not support Python 2.7.
@JiaYK @VictoriaBentell Dear friends, does this mean my training is succeeding now? The epoch and step numbers look strange:
```
INFO:tensorflow:Restoring parameters from ./logdir/default1/train1\epoch_24_step_24
Model loaded. mode: train1, model_name: epoch_24_step_24
acc: 0.0425952
loss: 3.95336
WARNING:tensorflow:From G:\code\python\myfile\ASR\deep-voice-conversion-master\models.py:73: arg_max (from tensorflow.python.ops.gen_math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use argmax instead
INFO:tensorflow:Restoring parameters from ./logdir/default1/train1\epoch_24_step_24
Model loaded. mode: train1, model_name: epoch_24_step_24
acc: 0.0476528
loss: 3.8893
WARNING:tensorflow:From G:\code\python\myfile\ASR\deep-voice-conversion-master\models.py:73: arg_max (from tensorflow.python.ops.gen_math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use argmax instead
INFO:tensorflow:Restoring parameters from ./logdir/default1/train1\epoch_26_step_26
Model loaded. mode: train1, model_name: epoch_26_step_26
acc: 0.0535262
loss: 3.91394
WARNING:tensorflow:From G:\code\python\myfile\ASR\deep-voice-conversion-master\models.py:73: arg_max (from tensorflow.python.ops.gen_math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use argmax instead
INFO:tensorflow:Restoring parameters from ./logdir/default1/train1\epoch_26_step_26
Model loaded. mode: train1, model_name: epoch_26_step_26
acc: 0.0459833
loss: 3.98388
```
Those warnings are normal, so if you just let it keep training you should be fine.
How did you solve the "argument" problem in the first case? I am facing the same issue.
@VictoriaBentell did you manage to get the whole pipeline working (with convert)? Are 12 epochs of train1 enough?
For those still struggling with the initial problem:

The "case" the script asks for is a name you choose for the run, passed as a command-line argument, e.g.

```
python train1.py test
```

where "test" can be anything. I suggest naming it after the dataset you use, such as

```
python train1.py TIMIT
```

if you're using the TIMIT dataset.

I'm surprised this isn't in the docs; it was confusing. Thanks to @CIDFarwin and https://github.com/andabi/deep-voice-conversion/issues/35#issuecomment-387593936
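For intuition, a positional argument like this is typically wired up with argparse roughly as below; this is an illustration of the pattern, not the repo's exact train1.py code:

```python
import argparse

def parse_case(argv):
    """Parse the positional "case" argument (the experiment name, e.g. "TIMIT")."""
    parser = argparse.ArgumentParser(description="train phase 1")
    parser.add_argument("case", type=str, help="experiment case name")
    return parser.parse_args(argv).case

print(parse_case(["TIMIT"]))  # → TIMIT
```

Running the script with no argument is what produces the "the following arguments are required: case" style of error.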
test whatever

```
Traceback (most recent call last):
  File "train1.py", line 76, in <module>
```
You can get the trained weights here so you don't have to worry about Train1.
thank you
Read hparams.py and you can find your answer:
```python
default_hp = load_hparam(default_file)
user_hp = load_hparam(user_file)
hp_dict = Dotdict(merge_dict(user_hp[case], default_hp) if case in user_hp else default_hp)
```

Here the author merges two dicts, `user_hp[case]` and `default_hp`. If you dig into the .yaml files in the hparams dir and the `merge_dict()` function in hparams.py, you can see that `case` refers to a particular attempt at training hyperparameters (the case acts like a version number for a fine-tuned run): `merge_dict()` replaces params in `default_hp` with the same-named params from `user_hp[case]`.
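In other words, the merge behaves roughly like the following sketch — my own approximation assuming a recursive override-style merge; the repo's exact implementation may differ in detail:

```python
def merge_dict(user, default):
    """Return a copy of `default` with same-named keys overridden by `user`."""
    merged = dict(default)
    for key, value in user.items():
        if isinstance(value, dict) and isinstance(default.get(key), dict):
            # Merge nested sections recursively instead of replacing them wholesale.
            merged[key] = merge_dict(value, default[key])
        else:
            merged[key] = value
    return merged

print(merge_dict({"lr": 0.0001}, {"lr": 0.0003, "batch_size": 32}))
# → {'lr': 0.0001, 'batch_size': 32}
```

So any hyperparameter you set under your case in the user .yaml wins over the default, and everything you leave out keeps its default value.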
> It raises `audioread.NoBackendError`... Do you know anything about it?

I meet the same problem. May I ask you for the answer?
My run of train1.py is stuck here. Anyone know how to solve it?
You can get the trained weights here so you don't have to worry about Train1.
Hello, do you still have the trained weights from drive.google.com? I can't open the URL and can't download the weights. Could you send them to me? Thanks. @VictoriaBentell
It works for me in an incognito window in Google Chrome on Windows 10. It's been a while, but I think the file you should be looking for is "checkpoint".
I've been facing this problem for a long time...