justarter / FLIP

Official Code for paper "FLIP: Fine-grained Alignment between ID-based Models and Pretrained Language Models for CTR Prediction" (RecSys 2024)

How to use the PLM when pretraining in stage 1 #1

Open zyk516 opened 1 month ago

zyk516 commented 1 month ago

Hello, I read the code and paper and found that the PLM is trainable during the pretraining stage. Have you tried freezing the PLM and training only the text projection layer during pretraining? If so, how effective is this training method? Thanks very much.

justarter commented 1 month ago

Hello. I have not tried this method. Personally, I speculate that freezing the PLM during pretraining may still help the ID-based model perceive textual knowledge. However, since the PLM is not optimized, it may slightly weaken the performance of FLIP on downstream CTR tasks.
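
For reference, below is a minimal sketch (not part of the FLIP codebase) of the frozen-PLM variant the question describes: all PLM parameters are frozen and only a text projection layer is optimized during pretraining. The module names (`plm`, `text_proj`), the projection dimension, and the use of a HuggingFace BERT encoder are illustrative assumptions, not the repository's actual setup.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

# Hypothetical components: a pretrained language model and a projection layer
# mapping its [CLS] representation into the alignment space.
plm = AutoModel.from_pretrained("bert-base-uncased")
text_proj = nn.Linear(plm.config.hidden_size, 128)

# Freeze every PLM parameter so only the projection layer receives gradients.
for param in plm.parameters():
    param.requires_grad = False
plm.eval()  # also disables dropout inside the frozen PLM

# The optimizer sees only the trainable projection parameters.
optimizer = torch.optim.Adam(text_proj.parameters(), lr=1e-3)

def encode_text(input_ids, attention_mask):
    # no_grad avoids storing activations for the frozen PLM, saving memory.
    with torch.no_grad():
        hidden = plm(input_ids=input_ids,
                     attention_mask=attention_mask).last_hidden_state
    cls = hidden[:, 0]       # [CLS] token representation
    return text_proj(cls)    # only this layer is updated during pretraining
```

The resulting text embeddings would then feed into the alignment objective with the ID-based model as in the trainable-PLM setup; only the optimizer's parameter set changes.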