ChangwenXu98 / TransPolymer

Implementation of "TransPolymer: a Transformer-based language model for polymer property predictions" in PyTorch
MIT License

Different block_size for pretrain and finetune #3

Closed · charlesxu90 closed this issue 1 year ago

charlesxu90 commented 1 year ago

Dear @ChangwenXu98 ,

I noticed that the block_size values for pretraining and finetuning differ in the config files: the block_size for pretraining is 175, while that for fine-tuning is 411.

Since block_size would influence the size of the pretrained model, I'm wondering whether this parameter should be the same for the two tasks in order to load the pretrained model for finetuning.

charlesxu90 commented 1 year ago

Or can you pretrain on the pretraining dataset with a small block_size and then fine-tune on a downstream dataset with a larger block_size?

That doesn't seem reasonable to me, since block_size influences the number of parameters of a BERT model.

ChangwenXu98 commented 1 year ago

Hi @charlesxu90,

Thanks for the question. The block_size determines the maximum number of tokens per sequence within a single task, so this hyperparameter is set by the longest sequence length in that task's dataset (we don't want to truncate sequences, since the last few tokens also convey important information). We could certainly fix it to one large value that covers all the datasets we use. However, in our case the sequence length distributions of the datasets differ a lot: one dataset has a maximum sequence length of 411, while two others have a maximum of only 60. If we set block_size=411 for every task, the polymer sequences from the short-length datasets would contain hundreds of mask tokens, which is a waste of memory. That is why we use a different block_size for different tasks. Besides, there is a separate "max_position_embeddings" hyperparameter; as long as that one is fixed, the pretrained model can be loaded without any problem.
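
For illustration, here is a minimal sketch (not the TransPolymer code itself) using a RoBERTa-style masked-LM setup from HuggingFace `transformers`; the vocab size, the value 514 for `max_position_embeddings`, and the checkpoint path are placeholder assumptions. It shows that the checkpoint shape is fixed by `max_position_embeddings`, while the per-task block_size only controls how sequences are padded/truncated at tokenization time:

```python
# Minimal sketch, assuming a RoBERTa-style masked-LM setup similar in spirit to TransPolymer.
# vocab_size, max_position_embeddings=514, and "pretrained_ckpt" are placeholder assumptions.
from transformers import RobertaConfig, RobertaForMaskedLM

# Fixed for both pretraining and finetuning: this sets the size of the learned
# position-embedding table, and hence the shape of the pretrained checkpoint.
config = RobertaConfig(
    vocab_size=50000,              # assumed vocabulary size
    max_position_embeddings=514,   # must stay fixed to reload the checkpoint
)
model = RobertaForMaskedLM(config)
model.save_pretrained("pretrained_ckpt")   # hypothetical path

# Per-task block_size: only affects how sequences are padded/truncated by the
# tokenizer, not the model parameters, so tasks can use different values.
block_size_pretrain = 175   # longest sequence in the pretraining set
block_size_finetune = 411   # longest sequence in one downstream set

# e.g. tokenizer(seqs, padding="max_length", truncation=True,
#                max_length=block_size_finetune)

# Reloading works because the parameter shapes depend only on the config,
# not on the per-task block_size.
model = RobertaForMaskedLM.from_pretrained("pretrained_ckpt")
```

The only constraint is that `max_position_embeddings` comfortably exceeds the largest block_size used in any task (RoBERTa reserves a couple of extra position slots), which is why fixing it once keeps the checkpoint loadable everywhere.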

Hope this helps.

charlesxu90 commented 1 year ago

Thanks for the explanation. I thought 'max_position_embeddings' had to be the same as 'block_size'; it's interesting to learn about this difference.