tstandley / image2mass

http://proceedings.mlr.press/v78/standley17a/standley17a.pdf

EMMa #10

Open HHLiufighting opened 4 months ago

HHLiufighting commented 4 months ago

Hi, I'm Liuhonghao.

I read your EMMa paper and I think it is very meaningful work. In it you say that each task has a manually designed, task-specific decoder architecture, with the details in the code. Could you publish the training and testing code? In particular, how is the prediction of the remaining missing labels from the existing data implemented in code?

Thank you very much in advance, Liuhonghao email: [liuhonghao_q@foxmail.com]
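The setup asked about above (a shared representation with one manually designed decoder per task, where missing labels are filled in by the task heads) could be sketched roughly as follows. This is only an illustrative sketch under my own assumptions, not EMMa's actual code: the encoder, head shapes, and the masked-loss trick are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # shared representation for all tasks (placeholder for EMMa's backbone)
    return np.tanh(x @ W)

def head(z, V):
    # task-specific decoder; in the paper each task has its own architecture,
    # here a single linear layer stands in for all of them
    return z @ V

X = rng.normal(size=(4, 8))                 # 4 items, 8 input features
W = rng.normal(size=(8, 5)) * 0.1           # shared encoder weights
heads = {"mass": rng.normal(size=(5, 1)) * 0.1,
         "price": rng.normal(size=(5, 1)) * 0.1}

Y = {"mass": rng.normal(size=(4, 1)),
     "price": rng.normal(size=(4, 1))}
# not every item has every label; the loss only covers observed labels
observed = {"mass": np.array([1, 1, 0, 1], dtype=bool),   # item 2: no mass
            "price": np.array([1, 0, 1, 1], dtype=bool)}  # item 1: no price

z = encoder(X, W)
total_loss = 0.0
for task, V in heads.items():
    pred = head(z, V)
    mask = observed[task]
    total_loss += np.mean((pred[mask] - Y[task][mask]) ** 2)  # masked MSE

# after training, running the head on unlabeled items imputes missing labels
imputed_mass = head(z, heads["mass"])[~observed["mass"]]
```

The key point of the sketch is the boolean mask: gradients only flow through observed labels, so the same forward pass can later be used to predict the labels that were never observed.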

tstandley commented 4 months ago

Thanks, glad to hear there is interest. I haven't put that code up yet. If I don't do it by the end of next week can you send me a reminder?

Thanks for your patience!


HHLiufighting commented 3 months ago


I am very happy that you are willing to share the EMMa training code, but I still have a question. In the paper you say that the model that takes images, text, and the other attributes into account does better than a model that only uses a single image or only the product-listing text. How is this done? What are the inputs in Table 1? For example, when predicting mass using only pictures or only text, what exactly are the inputs?

Thank you very much in advance, Liuhonghao email: [liuhonghao_q@foxmail.com]
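One common way to run the single-modality rows of a table like this (an assumption about the usual ablation setup, not a claim about EMMa's released code) is to keep the architecture and input width fixed and feed zeros, or a learned "missing" token, for the absent modalities:

```python
import numpy as np

rng = np.random.default_rng(1)

# placeholder embeddings; in practice these would come from the image
# backbone, the text encoder, and the known product attributes
img = rng.normal(size=(4, 16))    # image embedding
txt = rng.normal(size=(4, 16))    # product-listing text embedding
attr = rng.normal(size=(4, 8))    # other known attributes

def make_input(use_img=True, use_txt=True, use_attr=True):
    # disabled modalities are zeroed so every row of the comparison
    # table uses the exact same model and input dimensionality
    parts = [img if use_img else np.zeros_like(img),
             txt if use_txt else np.zeros_like(txt),
             attr if use_attr else np.zeros_like(attr)]
    return np.concatenate(parts, axis=1)

x_full = make_input()                                     # all modalities
x_img_only = make_input(use_txt=False, use_attr=False)    # image-only row
```

Under this scheme the "image only" input is the full concatenated vector with the text and attribute slots zeroed out, which keeps the comparison fair; whether EMMa does exactly this or trains separate single-modality models is something only the authors' code can confirm.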