lihan97 / KPGT

Code for KPGT (Knowledge-guided Pre-training of Graph Transformer)
Apache License 2.0

Code on finetune #7

Closed by Hanpeiling 1 month ago

Hanpeiling commented 4 months ago

Dear author,

I think this is excellent work, and I'd like to ask you the following questions.

The paper that introduces KPGT states, "To fully take advantage of the abundant knowledge captured in the pre-training stage, KPGT introduces four finetuning strategies, including layer-wise learning rate decay (LLRD), re-initialization (ReInit), FLAG, and L2-SP."

Which of these four strategies is used in finetune.py? Would it be possible to provide code for fine-tuning with LLRD, ReInit, FLAG, and L2-SP?
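For reference, my rough understanding of one of these strategies, LLRD, is sketched below in PyTorch. This is illustrative only, not your implementation; the model layout here is a placeholder, not KPGT's architecture. The idea is that layers closer to the input receive progressively smaller learning rates than the task head:

```python
# Rough LLRD sketch (illustrative only, not KPGT's code): layers nearer the
# input get smaller learning rates than the freshly initialized task head.
import torch
import torch.nn as nn

# Placeholder model: a stack of "transformer layers" plus a prediction head.
model = nn.ModuleDict({
    "layers": nn.ModuleList([nn.Linear(64, 64) for _ in range(4)]),
    "head": nn.Linear(64, 1),
})

base_lr, decay = 1e-4, 0.8
param_groups = [{"params": model["head"].parameters(), "lr": base_lr}]
num_layers = len(model["layers"])
for i, layer in enumerate(model["layers"]):
    # Layer 0 (closest to the input) receives the most-decayed learning rate.
    param_groups.append({
        "params": layer.parameters(),
        "lr": base_lr * decay ** (num_layers - i),
    })

optimizer = torch.optim.AdamW(param_groups)
```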

Looking forward to your reply♥

lihan97 commented 1 month ago

Thank you for your interest in our work! Apologies for the delayed response. The fine-tuning strategies are now available in the finetune.py script. You can activate them using the following flags: --use_flag, --use_llrd, --use_l2sp, and --use_reinit.
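As a conceptual note for other readers: L2-SP regularizes fine-tuning by penalizing the distance of the weights from their pre-trained starting point ("SP"), rather than from zero as in ordinary weight decay. A minimal sketch of the idea (not the exact code in finetune.py) might look like:

```python
# Minimal L2-SP sketch (illustrative, not the finetune.py implementation):
# penalize the squared distance from the pre-trained starting-point weights.
import torch
import torch.nn as nn

model = nn.Linear(64, 1)  # placeholder for a pre-trained backbone
# Snapshot the pre-trained parameters before fine-tuning starts.
anchor = {name: p.detach().clone() for name, p in model.named_parameters()}

def l2sp_penalty(model, anchor, alpha=1e-3):
    """Sum of squared distances between current and pre-trained weights."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (p - anchor[name]).pow(2).sum()
    return alpha * penalty

# In the training loop: total loss = task loss + L2-SP penalty.
x, y = torch.randn(8, 64), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y) + l2sp_penalty(model, anchor)
loss.backward()
```

With the flags above, a typical invocation would look something like `python finetune.py --use_flag --use_llrd --use_l2sp --use_reinit ...`, where the remaining arguments (dataset, checkpoint path, etc.) depend on your setup; check the script's argument definitions for the exact options.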