codertimo / BERT-pytorch

Google AI 2018 BERT pytorch implementation
Apache License 2.0

Is it possible to train BERT? #3

Open codertimo opened 5 years ago

codertimo commented 5 years ago

Is it possible to achieve the same results as the paper in a short time? Well... I don't have enough GPU & computation power to reproduce the results that Google AI reported.

If we can't train on the full corpus as Google did, then how can we verify that this code is correct? Training on a 256M-sized corpus without Google AI-class GPU computation is nearly impossible for me.

If you have any thoughts (e.g. reducing the model size), please let me know!
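For example, a scaled-down configuration might look like the sketch below. The constructor argument names (`hidden`, `n_layers`, `attn_heads`) follow this repo's README and may differ from the actual code.

```python
# A scaled-down BERT, a fraction of the size of BERT-Base, that fits on a single
# consumer GPU. The constructor arguments (hidden, n_layers, attn_heads) are
# assumed from this repo's README and may differ from the actual code.
from bert_pytorch.model import BERT

vocab_size = 30000   # size of whatever vocabulary you build for your corpus
small_bert = BERT(
    vocab_size,
    hidden=256,      # BERT-Base uses 768
    n_layers=4,      # BERT-Base uses 12
    attn_heads=4,    # BERT-Base uses 12
)
print(sum(p.numel() for p in small_bert.parameters()))  # rough parameter count
```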

briandw commented 5 years ago

The authors plan to release the full pre-trained model in a few weeks. That still leaves the task of loading their model weights into PyTorch. Perhaps ONNX will work for getting the weights out of TF and into PT?

Once the weights have been loaded, it should be possible to validate the fine-tuning results.
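Independent of ONNX, copying the TF checkpoint into a PyTorch state dict could look roughly like the sketch below. The `name_map` (TF variable name → PyTorch parameter name) is hypothetical, since we don't know yet how the released checkpoint names its variables.

```python
# Sketch of copying weights from a TF checkpoint into a PyTorch model without ONNX.
# The name_map (TF variable name -> PyTorch parameter name) is hypothetical;
# the real mapping depends on how the released checkpoint names its variables.
import tensorflow as tf
import torch

def load_tf_checkpoint(pt_model, ckpt_path, name_map):
    pt_shapes = {k: v.shape for k, v in pt_model.state_dict().items()}
    new_state = {}
    for tf_name, _ in tf.train.list_variables(ckpt_path):
        if tf_name not in name_map:
            continue  # skip optimizer slots and anything we don't map
        array = tf.train.load_variable(ckpt_path, tf_name)
        tensor = torch.from_numpy(array)
        pt_name = name_map[tf_name]
        # TF dense kernels are (in, out); torch.nn.Linear weights are (out, in)
        if tensor.dim() == 2 and pt_shapes[pt_name] != tensor.shape:
            tensor = tensor.t()
        new_state[pt_name] = tensor
    pt_model.load_state_dict(new_state, strict=False)
```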

codertimo commented 5 years ago

@briandw Well, I sent an email to the authors, and they told me the same thing. I agree that we can generate a PyTorch module using ONNX, but it might not be possible to load the weights into this model if it doesn't match the TF model architecture exactly. Do you have any ideas about this?

briandw commented 5 years ago

I can try to import the Tensor2Tensor model into PT (https://github.com/tensorflow/tensor2tensor). It should be the same process.

briandw commented 5 years ago

@codertimo Should the goal be to train BERT from scratch or to fine-tune the model? I'd say that scratch training isn't realistic right now. Fine-tuning shouldn't be that resource-intensive and would be very valuable.
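As a rough idea of the fine-tuning setup: a small task head on top of the pre-trained encoder. The encoder's `forward(tokens, segment_ids)` signature is an assumption here, not this repo's actual API.

```python
# Rough sketch of fine-tuning: a small classification head on top of a
# pre-trained encoder. The encoder's forward(tokens, segment_ids) signature is
# an assumption, not this repo's actual API.
import torch
import torch.nn as nn

class SentenceClassifier(nn.Module):
    def __init__(self, encoder, hidden=768, num_labels=2):
        super().__init__()
        self.encoder = encoder                    # pre-trained BERT encoder
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, tokens, segment_ids):
        hidden_states = self.encoder(tokens, segment_ids)   # (batch, seq_len, hidden)
        return self.classifier(hidden_states[:, 0])         # use the [CLS] position

# Fine-tuning updates the whole network with a small learning rate (e.g. ~2e-5),
# which is far cheaper than pre-training from scratch.
```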

codertimo commented 5 years ago

@briandw Thank you for your advice. Currently my goal is training from scratch with a smaller model that can be trained in our GPU environment, because I want to keep this implementation useful for anyone who needs to train on their own specific domain or language.

But as you said, moving a model trained in TF over to PyTorch is another goal of this project. So I'd like to implement the transfer code for loading the pretrained model too. I'll make a plan and let you guys know when the pretrained model and the official BERT implementation come out.

jacobrxz commented 5 years ago

Does this code support distributed training? I mean multiple machines, each with multiple GPUs... something like the DistributedDataParallel pattern sketched below.
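```python
# Standard multi-machine, multi-GPU pattern with torch.distributed and
# DistributedDataParallel; this is not wired into the repo's trainer, it only
# shows what would have to be added.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_for_ddp(model):
    # Each process must be launched with RANK / WORLD_SIZE / MASTER_ADDR set,
    # e.g. via `python -m torch.distributed.launch` on every machine.
    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)
    model = model.cuda(local_rank)
    return DDP(model, device_ids=[local_rank])
```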

BerenLuthien commented 5 years ago

@codertimo Have you already trained this model on a small dataset? If so, would you share some info about it? For example, what if we use p2.8xlarge GPUs to train on a 1M dataset from scratch? (Thanks for the wonderful work, BTW)