thuml / LogME

Code release for "LogME: Practical Assessment of Pre-trained Models for Transfer Learning" (ICML 2021) and "Ranking and Tuning Pre-trained Models: A New Paradigm for Exploiting Model Hubs" (JMLR 2022)
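For reference, below is a minimal, self-contained sketch of the evidence maximization that LogME performs: Bayesian linear regression with an isotropic prior and noise model, fitted by fixed-point updates. The names `log_evidence_1d` and `logme_score` are illustrative, not necessarily the API of the released code.

```python
import numpy as np

def log_evidence_1d(f, y, max_iter=100, tol=1e-5):
    """LogME for one target dimension: maximize the marginal likelihood
    log p(y | f) over the prior precision alpha and noise precision beta,
    then return the per-sample log evidence."""
    n, d = f.shape
    u, s, _ = np.linalg.svd(f, full_matrices=False)  # thin SVD of features
    sigma = s ** 2            # nonzero eigenvalues of f^T f
    z = u.T @ y               # targets in the left singular basis
    z2 = z ** 2
    y_sq = float(y @ y)
    alpha, beta = 1.0, 1.0
    for _ in range(max_iter):
        t = alpha / beta
        gamma = float((sigma / (sigma + t)).sum())           # effective dof
        m_sq = float((sigma * z2 / (sigma + t) ** 2).sum())  # ||posterior mean||^2
        res = y_sq - float(z2.sum()) + float((t ** 2 * z2 / (sigma + t) ** 2).sum())
        alpha = gamma / (m_sq + 1e-12)                       # MacKay fixed-point updates
        beta = (n - gamma) / (res + 1e-12)
        if abs(alpha / beta - t) <= tol * t:
            break
    # recompute the posterior statistics at the converged (alpha, beta)
    t = alpha / beta
    m_sq = float((sigma * z2 / (sigma + t) ** 2).sum())
    res = y_sq - float(z2.sum()) + float((t ** 2 * z2 / (sigma + t) ** 2).sum())
    logdet = float(np.log(alpha + beta * sigma).sum()) + (d - len(sigma)) * np.log(alpha)
    evidence = (n / 2) * np.log(beta) + (d / 2) * np.log(alpha) \
               - (n / 2) * np.log(2 * np.pi) \
               - (beta / 2) * res - (alpha / 2) * m_sq - logdet / 2
    return evidence / n

def logme_score(features, labels):
    """Classification: average the evidence over one-vs-rest binary targets."""
    return float(np.mean([log_evidence_1d(features, (labels == c).astype(float))
                          for c in np.unique(labels)]))
```

Given features extracted by each candidate pre-trained model on the target data, a higher `logme_score(features, labels)` predicts better transfer to the downstream task, which is how such scores are used to rank a model hub.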
MIT License

What is the difference between this problem setting and multi-modal pre-training? #17

Closed: jingzhengli closed this issue 2 years ago

jingzhengli commented 2 years ago

Hello! I'd like to ask: the pre-training in this work seems quite similar to the recent Vision-Language pre-training, right? Your work pre-trains on single-modality data, while theirs uses multi-modal data, but both aim to benefit downstream tasks. Is that a correct way to understand it?

Kyfafyd commented 2 years ago

Maybe you can refer to this Chinese blog from the authors for a better understanding: https://zhuanlan.zhihu.com/p/354084933

jingzhengli commented 2 years ago

> Maybe you can refer to this Chinese blog from the authors for a better understanding: https://zhuanlan.zhihu.com/p/354084933

Thank you very much, I understand it now. Many thanks to the authors as well for open-sourcing the code for the LEEP and NCE algorithms.
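Since the thread closes by mentioning the bundled LEEP implementation, here is a minimal self-contained sketch of the LEEP score (Log Expected Empirical Prediction, Nguyen et al., 2020); the function name and inputs are illustrative, not the repo's actual API.

```python
import numpy as np

def leep_score(pseudo_probs, labels):
    """LEEP: log expected empirical prediction.
    pseudo_probs: (n, Kz) source-model softmax outputs on the target data.
    labels: (n,) integer target labels in [0, Ky)."""
    n, kz = pseudo_probs.shape
    ky = int(labels.max()) + 1
    # empirical joint P(y, z) = (1/n) * sum_i theta_z(x_i) * 1[y_i == y]
    joint = np.zeros((ky, kz))
    for y in range(ky):
        joint[y] = pseudo_probs[labels == y].sum(axis=0) / n
    cond = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)  # P(y | z)
    # LEEP = (1/n) * sum_i log( sum_z P(y_i | z) * theta_z(x_i) )
    marg = pseudo_probs @ cond.T          # (n, Ky): predicted P(y | x_i)
    return float(np.log(marg[np.arange(n), labels] + 1e-12).mean())
```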