richarddwang / electra_pytorch

Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!)

How can I pretrain ELECTRA starting from weights from Google? #26

Closed richarddwang closed 3 years ago

richarddwang commented 3 years ago

This issue is to answer a question from the Hugging Face forum.

Although I haven't tried it, it should be possible.

  1. Make sure my_model is set to False so that the huggingface model is used https://github.com/richarddwang/electra_pytorch/blob/ab29d03e69c6fb37df238e653c8d1a81240e3dd6/pretrain.py#L43

  2. Change model(config) -> model.from_pretrained(model_name) (see the sketch after the note below) https://github.com/richarddwang/electra_pytorch/blob/ab29d03e69c6fb37df238e653c8d1a81240e3dd6/pretrain.py#L364-L365

  3. Be careful about size, max_length, and other configs https://github.com/richarddwang/electra_pytorch/blob/ab29d03e69c6fb37df238e653c8d1a81240e3dd6/pretrain.py#L38 https://github.com/richarddwang/electra_pytorch/blob/ab29d03e69c6fb37df238e653c8d1a81240e3dd6/pretrain.py#L76-L81

Note: the published ELECTRA models are actually the ++ models described in Appendix D of the paper, and the max sequence length of ELECTRA-Small / Small++ is 128 / 512.
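
For reference, here is a minimal sketch of steps 1-3 on the huggingface (my_model = False) path. The checkpoint names are the published Google ones and the variable names mirror pretrain.py, but the exact call sites in the repo may differ:

```python
# Hedged sketch, not the repo's exact code: load the published Google weights
# instead of building the generator/discriminator from a fresh config.
from transformers import (ElectraForMaskedLM, ElectraForPreTraining,
                          ElectraTokenizerFast)

# step 2: model(config) -> model.from_pretrained(model_name)
generator = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
hf_tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")

# step 3: keep c.size / c.max_length consistent with the checkpoint
# (the published Small checkpoint is Small++, which uses max_length 512; see the note above)
```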

Feel free to tag me if you have other questions.

lucaguarro commented 3 years ago

Thank you for responding to my question. I got it working, but I am perhaps getting strange results from the training process: it always reports a training loss of 0.000000. Is this just because the model is already well trained?

Also, is it normal for each training epoch to take only 1-2 seconds? Or is this a sign that the dataset I set up was poorly configured?

Here is a screenshot of the output of the training process: [screenshot: electrapretraindebug]

richarddwang commented 3 years ago

There is probably an error being caught and swallowed.

Because fastai doesn't support specifying the number of training steps, I wrote a callback myself to do that. The side effect is that it catches any error encountered during training.

So you can comment out this callback, run it again, and you will see the error. After you resolve the error, you can add it back and do the normal training. https://github.com/richarddwang/electra_pytorch/blob/ab29d03e69c6fb37df238e653c8d1a81240e3dd6/pretrain.py#L394
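
For example, a hedged sketch of the callback list (names follow pretrain.py; the surrounding Learner arguments are omitted):

```python
# Hedged sketch of the callback list in pretrain.py: comment out RunSteps while
# debugging so the real exception is raised, then restore it for the actual run.
cbs = [
    mlm_cb,
    # RunSteps(c.steps, [0.0625, 0.125, 0.25, 0.5, 1.0], c.runname+"{percent}"),
]
```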

lucaguarro commented 3 years ago

Oh perfect, thank you. I was getting an error because I added a special token to the tokenizer and needed to notify the generator and the discriminator of the new size of the token embeddings.
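
For anyone who hits the same error, a rough sketch of that kind of fix (the checkpoint names are the published Google ones and [MY_TOKEN] is just a placeholder):

```python
# Rough sketch: after adding a special token, resize the token embeddings of
# BOTH the generator and the discriminator to match the new vocabulary size.
from transformers import (ElectraForMaskedLM, ElectraForPreTraining,
                          ElectraTokenizerFast)

hf_tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
hf_tokenizer.add_special_tokens({"additional_special_tokens": ["[MY_TOKEN]"]})

generator = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
generator.resize_token_embeddings(len(hf_tokenizer))
discriminator.resize_token_embeddings(len(hf_tokenizer))
```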

I am, however, getting a memory error now. Usually I resolve this by just lowering the batch size, but I am not sure where it is set in your code.

I am using an NVIDIA Tesla P100 and this is the error message:

RuntimeError: CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 15.90 GiB total capacity; 14.22 GiB already allocated; 291.75 MiB free; 14.73 GiB reserved in total by PyTorch)

Sorry to ask so many questions.

richarddwang commented 3 years ago

No worries!

Here is where the batch size is set: https://github.com/richarddwang/electra_pytorch/blob/ab29d03e69c6fb37df238e653c8d1a81240e3dd6/pretrain.py#L79

You can change it by setting c.bs = whatever after that line.
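
For example (a sketch, assuming the config object c from pretrain.py; the right value depends on your GPU):

```python
# Sketch: override the preset batch size right after the config block in
# pretrain.py. 32 is just an example value; pick what fits your GPU memory.
c.bs = 32
```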

lucaguarro commented 3 years ago

Awesome, I got it working! I did have to lower my batch size all the way down to 32 with Google Colab Pro, though (quite a bit lower than your presets).

On another note, I noticed your multi_task.py file and it interests me for my own research as well, but I'll open a new issue so as not to bog this one down.

congchan commented 2 years ago

Side question: how can we pretrain ELECTRA starting from the weights of other pretrained models, such as RoBERTa?

richarddwang commented 2 years ago

There's no direct way to do this. As a workaround, take the generator for example: you can refer to the source code and write an ElectraForMaskedLMWithAnyModel that takes a pretrained AutoModel instance as an argument.
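
A minimal sketch of that idea (my own illustration, not code from this repo; the simple linear LM head is an assumption, since ELECTRA's real generator head also has a dense projection and LayerNorm):

```python
# Illustrative sketch only: wrap an arbitrary pretrained encoder as an MLM
# generator, playing the role of ElectraForMaskedLM. Names are hypothetical.
import torch.nn as nn
from transformers import AutoModel

class ElectraForMaskedLMWithAnyModel(nn.Module):
    def __init__(self, pretrained_name: str):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(pretrained_name)  # e.g. "roberta-base"
        hidden_size = self.encoder.config.hidden_size
        vocab_size = self.encoder.config.vocab_size
        # simplified LM head; the real ELECTRA generator adds a dense layer + LayerNorm
        self.lm_head = nn.Linear(hidden_size, vocab_size)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.lm_head(hidden)  # (batch, seq_len, vocab_size) logits
```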

JiazhaoLi commented 2 years ago

Hi, thank you for the wonderful code. I am trying to continue training from the Google ELECTRA checkpoints and followed the steps in this post. I also commented out

RunSteps(c.steps, [0.0625, 0.125, 0.25, 0.5, 1.0], c.runname+"{percent}"),

However, I still get the following error, which is raised in the fastai learner file. Do you have any hints on this? I'd appreciate it.

Traceback (most recent call last):
  File "pretrain.py", line 405, in
    cbs=[mlm_cb], .....
  File "/home/anaconda3/envs/electra/lib/python3.7/site-packages/fastai/learner.py", line 137, in _call_one
    [cb(event_name) for cb in sort_by_run(self.cbs)]
NameError: name 'sort_by_run' is not defined

I am not sure whether it is due to the package version.

stvhuang commented 1 year ago

Hi @JiazhaoLi.

Did you solve the problem (sort_by_run not found)? I ran into the same error recently as well.

Update: this error can be solved by downgrading fastcore to fastcore<=1.3.13.