Closed: A-zhudong closed this issue 2 years ago.
@A-zhudong I have the same question as you. Waiting for reply.
Hello, sorry for the late reply!
As mentioned in BEiT and MAE, no blocks/layers need to be frozen in the end-to-end fine-tuning procedure. However, the LR of each block is different (layer-wise learning rate decay); you can find it here.
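For reference, a minimal sketch of that layer-wise LR decay idea, assuming a ViT whose transformer blocks are registered as `blocks.<i>`; the function name and grouping scheme below are illustrative, not the exact ones used in this repo:

```python
import torch


def param_groups_lrd(model, num_layers, base_lr=1e-3, layer_decay=0.75, weight_decay=0.05):
    """Build optimizer parameter groups with layer-wise LR decay (sketch)."""
    groups = {}
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        # Assumed naming scheme: transformer blocks are "blocks.<i>.*";
        # patch embedding / cls token / pos embedding count as layer 0,
        # everything else (e.g. the head) as the last layer.
        if name.startswith("blocks."):
            layer_id = int(name.split(".")[1]) + 1
        elif name.startswith(("patch_embed", "cls_token", "pos_embed")):
            layer_id = 0
        else:
            layer_id = num_layers + 1
        # Earlier layers get a smaller LR: base_lr * layer_decay^(depth from top).
        scale = layer_decay ** (num_layers + 1 - layer_id)
        key = f"layer_{layer_id}"
        if key not in groups:
            groups[key] = {"params": [], "lr": base_lr * scale, "weight_decay": weight_decay}
        groups[key]["params"].append(param)
    return list(groups.values())


# Usage sketch:
# optimizer = torch.optim.AdamW(param_groups_lrd(model, num_layers=12))
```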
Thanks for your reply.
But it seems that they did freeze some blocks and measured the effect, as mentioned in "Masked Autoencoders Are Scalable Vision Learners".
Maybe it would be better if we implemented that part?
Oh, I am sorry that I forgot about this.
You just need to freeze the blocks you want in the init function; see the sketch below.
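A minimal sketch of that, assuming a MAE/timm-style ViT backbone with a `blocks` ModuleList, a `forward_features` method, and an `embed_dim` attribute (the names may differ in modeling_finetune.py):

```python
import torch.nn as nn


class PartiallyFrozenViT(nn.Module):
    """Partial fine-tuning: only the last few blocks and the head are trained."""

    def __init__(self, backbone, num_trainable_blocks=4, num_classes=1000):
        super().__init__()
        self.backbone = backbone
        # Freeze the whole backbone first ...
        for p in self.backbone.parameters():
            p.requires_grad = False
        # ... then unfreeze only the last `num_trainable_blocks` transformer blocks.
        for blk in self.backbone.blocks[-num_trainable_blocks:]:
            for p in blk.parameters():
                p.requires_grad = True
        # The linear classification head stays trainable.
        self.head = nn.Linear(self.backbone.embed_dim, num_classes)

    def forward(self, x):
        feats = self.backbone.forward_features(x)
        return self.head(feats)
```

With `num_trainable_blocks=0` this reduces to linear probing; passing only the parameters with `requires_grad=True` to the optimizer gives the partial fine-tuning setting described in the MAE paper.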
OK, thank you.
Hello, thanks for your implementation.
I have read the main part of your code, but I didn't find the code that controls partial fine-tuning. Could you please tell me where that part is, in "run_class_finetuning.py", "modeling_finetune.py", or anywhere else?
Waiting for your reply, thank you.