Closed dyabel closed 3 years ago
Yes, actually this should be semantic features instead of image features. I've fixed the typos. But it does not change the fact that this is a transductive zero-shot learning setting.
I believe transductive zero-shot learning means using the image data of unseen classes. And the semantic features of unseen classes should be known in the standard ZSL and GZSL settings, according to the literature I have read.
Actually, for zero-shot learning, there can be four different settings:
- Inductive zero-shot learning (Seen set)
- Semantic transductive zero-shot learning (Seen set + labeled unseen attributes)
- Feature transductive zero-shot learning (Seen set + unlabeled visual features)
- General transductive zero-shot learning (Seen set + labeled unseen attributes + unlabeled visual features)
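To make the four settings concrete, here is a minimal sketch of which data sources a model may access at training time in each case. The names (`seen_features`, `unseen_attributes`, etc.) are illustrative placeholders, not part of any specific codebase:

```python
# Hypothetical summary of the training-time data available under each
# zero-shot learning setting. Key names are illustrative only.
SETTINGS = {
    "inductive": {
        "seen_features": True, "seen_labels": True, "seen_attributes": True,
        "unseen_attributes": False, "unseen_features": False,
    },
    "semantic_transductive": {
        "seen_features": True, "seen_labels": True, "seen_attributes": True,
        "unseen_attributes": True, "unseen_features": False,
    },
    "feature_transductive": {
        "seen_features": True, "seen_labels": True, "seen_attributes": True,
        "unseen_attributes": False, "unseen_features": True,  # unlabeled
    },
    "general_transductive": {
        "seen_features": True, "seen_labels": True, "seen_attributes": True,
        "unseen_attributes": True, "unseen_features": True,  # unlabeled
    },
}

def training_data(setting: str) -> list[str]:
    """Return the names of the data sources accessible in a given setting."""
    return [name for name, available in SETTINGS[setting].items() if available]
```

For example, `training_data("inductive")` contains only the seen-class entries, which is why we regard it as the purest zero-shot setting.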
I think the conflict mainly comes from the first and second settings. We claim that pure zero-shot learning should not access any unseen data: if the labeled semantic features are used, the search space of unseen classes becomes highly constrained and the generalization ability of the model is significantly reduced. That said, it is true that a model can achieve better performance and capture seen/unseen relationships better by accessing unseen data, and most recent papers follow that setting.
I suggest having a look at the following inductive zero-shot learning papers if you are interested:
- GAZSL: Yizhe Zhu, Mohamed Elhoseiny, Bingchen Liu, Xi Peng, Ahmed Elgammal. "A Generative Adversarial Approach for Zero-Shot Learning From Noisy Texts".
- CIZSL: Mohamed Elhoseiny, Mohamed Elfeki. "Creativity Inspired Zero-Shot Learning".
- CN-ZSL: Ivan Skorokhodov, Mohamed Elhoseiny. "Class Normalization for Zero Shot Learning".
- GRaWD: Mohamed Elhoseiny, Divyansh Jha, Kai Yi, Ivan Skorokhodov. "Imaginative Walks: Generative Random Walk Deviation Loss for Improved Unseen Learning Representation".
- Mancini, M., Akata, Z., Ricci, E., Caputo, B. "Towards Recognizing Unseen Categories in Unseen Domains" (2020).
Let me know if you have any further concerns.
Thank you for your explanation; I will take a look at these papers.
As the title