rizkiarm / LipNet

Keras implementation of 'LipNet: End-to-End Sentence-level Lipreading'
MIT License

GPU training #85

Closed · Zhaans closed this issue 4 years ago

Zhaans commented 4 years ago

Hello,

I am trying to train the overlapped_speakers scenario on GPU. However, I cannot see any GPU usage, although I installed tensorflow-gpu and can see the message:

> Using all available GPUs.
> Using TensorFlow backend.

On CPU everything is fine. Any suggestions? Thanks!
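A quick way to confirm whether the TensorFlow backend actually sees a GPU, before changing anything else, is a minimal sketch along these lines (assuming the TF 1.x / standalone Keras setup this repo targets):

```python
# Minimal diagnostic: list the devices TensorFlow can see.
# If no '/device:GPU:0' entry shows up, the problem is in the
# TensorFlow/CUDA installation, not in the training code.
from tensorflow.python.client import device_lib

devices = device_lib.list_local_devices()
print([d.name for d in devices])
print("GPU visible:", any(d.device_type == "GPU" for d in devices))
```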

RifqiAW commented 4 years ago

Try running train.py directly instead of going through the training script:

python3 train.py

It worked for me.
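If running train.py directly still does not use the GPU, one general thing to try (not specific to this repo; the "0" below is an assumption for a single-GPU machine) is to pin the GPU via CUDA_VISIBLE_DEVICES before TensorFlow is imported:

```python
import os

# Must be set before TensorFlow/Keras is imported anywhere in the process;
# "0" exposes the first GPU (adjust to your setup).
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import tensorflow as tf  # imported after setting the env var on purpose

print(tf.test.is_gpu_available())  # TF 1.x check; should print True
```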

Zhaans commented 4 years ago

> Try running train.py directly instead of going through the training script:
>
> python3 train.py
>
> It worked for me.

Thanks for your answer. How much GPU memory does it take, and which scenario are you training on?

RifqiAW commented 4 years ago

Around 10 GB, I think? I used a batch size of 64; just reduce it if it doesn't fit. I tested it on the unseen_speakers scenario. Also, a heads-up: if you hit memory leaks during training, you might want to implement your own generator, or at least tweak the current one.
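On the memory-leak point above: a common pattern in Keras is a `keras.utils.Sequence`-based generator that loads each batch from disk on demand, so memory stays bounded across epochs. The sketch below is hypothetical (the class name and data layout are made up, and the repo's own generator is more involved); it only illustrates the shape of such a generator:

```python
import numpy as np
from keras.utils import Sequence


class BatchGenerator(Sequence):
    """Hypothetical Sequence-based generator; illustrative only."""

    def __init__(self, sample_paths, labels, batch_size=64):
        self.sample_paths = sample_paths  # paths to preprocessed video arrays
        self.labels = labels              # corresponding label arrays
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch.
        return int(np.ceil(len(self.sample_paths) / float(self.batch_size)))

    def __getitem__(self, idx):
        # Load only the samples for this batch, keeping memory bounded.
        start = idx * self.batch_size
        end = start + self.batch_size
        batch_x = np.stack([np.load(p) for p in self.sample_paths[start:end]])
        batch_y = np.stack(self.labels[start:end])
        return batch_x, batch_y
```

With standalone Keras, such a generator can be passed to `model.fit_generator`, which also supports the `workers`/`use_multiprocessing` arguments for parallel loading.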

Zhaans commented 4 years ago

> Around 10 GB, I think? I used a batch size of 64; just reduce it if it doesn't fit. I tested it on the unseen_speakers scenario. Also, a heads-up: if you hit memory leaks during training, you might want to implement your own generator, or at least tweak the current one.

Ok, got it, will try it. Thanks a lot for your quick reply!

njan-creative commented 3 years ago

@rizkiarm Can you let me know the hardware used to train this model, and the training time?