annisaabuhamid opened this issue 6 years ago
I'm using that same GPU and it's working fine for me without changing the batch size or anything about the dataset.
Why does mine get stuck when loading train1.py? I waited more than an hour and still don't get any result. @VictoriaBentell
I'm assuming you're using the TIMIT dataset. Check line 49 in hparams.py and make sure that you have the correct path. Also make sure that the files in your dataset are all .wav instead of .WAV. I also had to move everything out of the 'raw' folder to have the correct path.
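For reference, the line in question is just a data-path setting. A rough illustration of what to look for (the variable name and path below are guesses, not necessarily the repo's actual defaults, so check your own copy):

```python
# hparams.py, around line 49 -- illustrative only; the variable name
# and default path may differ in your checkout
data_path = './datasets/timit/TIMIT/TRAIN/*/*/*.wav'
```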
Put the script below into the deep-voice-conversion folder, rename it to rename_WAV.py (because I can't upload Python files here), and run it to change all of the .WAV files to .wav.
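The original attachment isn't preserved in this thread, but a minimal sketch of what rename_WAV.py could look like (with the os.chdir call on line 2, as referenced below) is:

```python
import os
os.chdir('./datasets/timit/TIMIT/TRAIN')

# Walk the dataset tree and rename every .WAV file to .wav
for root, dirs, files in os.walk('.'):
    for name in files:
        if name.endswith('.WAV'):
            os.rename(os.path.join(root, name),
                      os.path.join(root, name[:-4] + '.wav'))
```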
Then change `os.chdir('./datasets/timit/TIMIT/TRAIN')` to `os.chdir('./datasets/timit/TIMIT/TEST')` on line 2 and run it again, so it does this for both the train and test folders.
Hi, thanks for being so responsive. Even after renaming the files, I still have the same problem. I also triple-checked that the destination folder is correct. Do you have any ideas?
@hoenickf do you think you can post your hparams.py?
I did not make any changes at all, so it would be kind of boring to look at :)
This is a great repo and all, but it would be nice if it gave an error when something is wrong. Maybe the script I wrote sucks and didn't change all of the file names? Otherwise, if you pulled everything out of the 'raw' folder, I'm not sure what the problem could be. :/
@VictoriaBentell When you say it's working without changing the dataset size or batch size, how long does it take to finish train2.py?
@VictoriaBentell I am using a Tesla K80. train2.py points to the arctic/slt dataset and I have reduced the number of epochs to 500, but I have kept the other parameters intact. It is still taking a lot of time to process. Can you tell me if there is something that needs to be checked?
I'm having problems with train2.py as well, since it doesn't seem to be making any progress at all on my setup. The loss just fluctuates between high and low all day long without any clear pattern. :/
@VictoriaBentell Same here for me. I hope someone who has run through the whole process can share their views here.
Have y'all checked nvidia-smi to make sure the GPU is actually running? In my experience the install will default to tensorflow instead of tensorflow-gpu unless I uninstall tensorflow and reinstall tensorflow-gpu, and then it seems to work. I'll try to PR a Dockerfile soon.
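In case it helps, here's a quick way to verify from inside Python whether TensorFlow sees the GPU at all (a minimal sketch, assuming the TF 1.x API this repo targets):

```python
# Prints the GPUs TensorFlow can see. An empty list usually means the
# CPU-only 'tensorflow' package is installed instead of 'tensorflow-gpu'.
from tensorflow.python.client import device_lib

devices = device_lib.list_local_devices()
print([d.name for d in devices if d.device_type == 'GPU'])
```

If the list is empty while nvidia-smi shows the card, uninstalling tensorflow and installing tensorflow-gpu (as described above) is the usual fix.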
Hello @andabi, does this model need a more powerful GPU? My GPU is an NVIDIA GeForce GTX 1080 with 8 GB. Is that enough to run it? If it isn't, can I change the batch size and decrease the dataset size? Thank you.