yoctta / multiple-attention

The code of multi-attention deepfake detection

Thanks for your share #5

Open IItaly opened 3 years ago

IItaly commented 3 years ago

Hey @yoctta, thanks for the pretrained models. What is ckpt_35.pth?

yoctta commented 3 years ago

Please check the update: you should pretrain the backbone first, then initialize the multi-attention model from the pretrained checkpoint.
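The two-stage setup described above can be sketched as follows. This is a minimal illustration, not the repo's actual code: `MultiAttentionNet` and its `net`/`head` submodules are hypothetical stand-ins, and only the save/load pattern (a backbone checkpoint like ckpt_35.pth reused to initialize the larger model) reflects the thread.

```python
import io
import torch
import torch.nn as nn

# Hypothetical backbone; in the repo this would be the pretrained feature extractor.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())

class MultiAttentionNet(nn.Module):
    """Illustrative stand-in for the multi-attention model."""
    def __init__(self):
        super().__init__()
        # Same layout as the backbone, so its weights can be loaded directly.
        self.net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
        # Extra parts (attention modules, classifier) trained in stage 2.
        self.head = nn.Linear(8, 2)

model = MultiAttentionNet()

# Stage 1: pretrain the backbone, then save its weights
# (this saved file plays the role of ckpt_35.pth in the thread).
buf = io.BytesIO()  # an in-memory file; the repo would use a .pth path
torch.save(backbone.state_dict(), buf)
buf.seek(0)

# Stage 2: initialize the multi-attention model's backbone from the checkpoint.
state = torch.load(buf, map_location="cpu")
missing, unexpected = model.net.load_state_dict(state, strict=True)
```

With `strict=True`, `load_state_dict` raises if the checkpoint and submodule layouts disagree, which catches a mismatched backbone early.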


yoctta commented 3 years ago

I've updated main.py to clarify it. ckpt_35.pth should be a pretrained backbone model.

IItaly commented 3 years ago

Thanks for your reply. I only have one GPU, but the code is written for distributed training. May I set gpu_ids to '0'? Is that enough?

yoctta commented 3 years ago

You can try running it. I think the code also works with 1 GPU, but the limited batch size may be an issue.
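A distributed training script can usually run as a single process covering rank 0, which is what setting gpu_ids to '0' amounts to. The sketch below assumes the repo follows the standard `torch.distributed` pattern (the thread doesn't show its launch code); it uses the `gloo` backend so it runs on CPU here, whereas a GPU box would use `nccl`.

```python
import os
import torch.distributed as dist

# Rendezvous settings for a single-machine, single-process group.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# world_size=1: one process owns the whole job, so the same collective
# code paths (all_reduce, DistributedSampler, etc.) run unchanged.
dist.init_process_group(backend="gloo", rank=0, world_size=1)

assert dist.get_world_size() == 1
dist.destroy_process_group()
```

With one GPU the per-step batch is small; gradient accumulation over several steps is a common way to recover an effective larger batch.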

IItaly commented 3 years ago

OK, I'll give it a try. Thank you!