zyh-uaiaaaa / Erasing-Attention-Consistency

Official implementation of the ECCV2022 paper: Learn From All: Erasing Attention Consistency for Noisy Label Facial Expression Recognition

MobileNet pretrained model #17

Closed lzh-captain closed 10 months ago

lzh-captain commented 10 months ago

Did MobileNet in Table 2 use a pre-trained model? Can you provide the download link for this pre-trained model?

zyh-uaiaaaa commented 10 months ago

Hi,

You could simply load the MobileNet from PyTorch, like

```python
import torch
model = torch.hub.load('pytorch/vision:v0.10.0', 'mobilenet_v2', pretrained=True)
```

lzh-captain commented 10 months ago

Thanks. Was MobileNet not pre-trained on MS-Celeb-1M? Pre-training ResNet-50 on MS-Celeb-1M helps improve accuracy, so if MobileNet is not pre-trained on MS-Celeb-1M, wouldn't the comparison between the two models be unfair?

zyh-uaiaaaa commented 10 months ago

The motivation of the experiment is to study the generalization ability of our method under different backbones. Thus, I use the same backbone and compare the accuracy before and after applying my method. I believe using an MS-Celeb-1M pre-trained MobileNet could further improve the accuracy. However, I do not have access to an MS-Celeb-1M pre-trained MobileNet. If you have one, please kindly share it with us. Thanks very much!


lzh-captain commented 10 months ago

Thank you for your response. I'm sorry, I don't have access to an MS-Celeb-1M pre-trained MobileNet either.
