richarddwang / electra_pytorch

Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!)

Training from scratch with other datasets (other languages) #3

Closed. dumitrescustefan closed this issue 4 years ago.

dumitrescustefan commented 4 years ago

Hi! Thanks @richarddwang for the reimplementation. For some time I have been getting less than desired results with the official huggingface ELECTRA implementation. Would you consider adding support for pretraining on other datasets (meaning other languages)? Right now it's just the wiki and books datasets from hf/nlp.

Thanks!

richarddwang commented 4 years ago

Hi @dumitrescustefan ,

  1. Could I ask what you mean by "electra implementation"? Do you mean the model architectures, the hosted pretrained models, or the electra trainer that has been sitting in a PR for a long time?

  2. I am also wondering: did you "get less than desired results" with this implementation, and is that why you want to try different data? If so, there might be something I should fix or could help with.

  3. I am glad that you like this. But this project is actually for my personal research, and I have already spent unexpectedly much time on it. So there is currently no plan to add data for other languages or to improve the user interface.

You can explore the available datasets first: https://huggingface.co/datasets Or try using your own dataset with hf/nlp: https://huggingface.co/nlp/loading_datasets.html#from-local-files If you have problems applying your hf/nlp dataset to this implementation, you can open an issue and I will try to help you.
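
For example, a minimal sketch of loading a local plain-text corpus with hf/nlp could look roughly like this (the file name is just a placeholder, not something from this repo):

```python
import nlp  # huggingface/nlp (later renamed to `datasets`)

# Load a local plain-text file as a dataset; "my_corpus.txt" is a placeholder path.
# The generic "text" loading script yields one example per line under the "text" field.
dataset = nlp.load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
print(dataset[0])  # e.g. {'text': 'first line of the corpus'}
```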

dumitrescustefan commented 4 years ago

Thanks for the quick response. By "electra implementation" from HF I mean the electra-trainer branch (the only trainer I managed to get working) that has been sitting in a PR for a long time. What I am trying to do is pretrain ELECTRA (small, for now) on a different dataset (and in another language). By "less than desired results" I mean that I am getting rather poor performance (more than 20 points below a pretrained BERT on the same dataset, though this was with an ELECTRA checkpoint at only 150K steps; imho the difference should be much smaller, even at 150K steps with batch size 128).

So, given that you identified that bug in the code, and that the electra-trainer is pretty cumbersome to use, I was wondering whether you plan to extend your code to allow an external txt file to serve as the training corpus (basically what HF's transformers classes LineByLineTextDataset and the DataCollator do now to allow training on any text).
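
(For reference, a rough sketch of how those two classes are usually wired together; the tokenizer checkpoint and corpus path below are just placeholders, not something taken from the electra-trainer branch:)

```python
from transformers import (
    ElectraTokenizerFast,
    LineByLineTextDataset,
    DataCollatorForLanguageModeling,
)

# Placeholder tokenizer and corpus path; any tokenizer / plain-text file would do.
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-generator")

# One training example per non-empty line of the text file.
dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="my_corpus.txt",
    block_size=128,
)

# Randomly masks tokens at collation time for the masked-LM objective.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```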

I will try your suggestion of loading a local dataset with HF/nlp, and I'll come back with a status update. That should remove the need for LineByLineTextDataset and the rest. Thanks!

richarddwang commented 4 years ago

Best wishes to you!