lihan97 / KPGT

Code for KPGT (Knowledge-guided Pre-training of Graph Transformer)
Apache License 2.0

Code on finetune #7

Closed Hanpeiling closed 3 months ago

Hanpeiling commented 6 months ago

Dear author:

I think this is excellent work, and I'd like to ask you the following questions.

The paper that introduces KPGT states, "To fully take advantage of the abundant knowledge captured in the pre-training stage, KPGT introduces four finetuning strategies, including layer-wise learning rate decay (LLRD), re-initialization (ReInit), FLAG, and L2-SP."

Which of these four methods is used in finetune.py? Would it be possible to provide code for fine-tuning with LLRD, ReInit, FLAG, and L2-SP?

Looking forward to your reply♥

lihan97 commented 3 months ago

Thank you for your interest in our work! Apologies for the delayed response. The fine-tuning strategies are now available in the finetune.py script. You can activate them using the following flags: --use_flag, --use_llrd, --use_l2sp, and --use_reinit.