Yangsenqiao / vida

[ICLR 2024] ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation

Was the paper accepted to NeurIPS? #2

Closed · chenrxi closed this issue 5 months ago

chenrxi commented 10 months ago

Was the paper accepted to NeurIPS?

Yangsenqiao commented 10 months ago

I regret to inform you that, despite receiving two "Weak Accept" and two "Borderline Accept" reviews, our paper was ultimately not accepted to NeurIPS 2023. We deeply appreciate all the valuable feedback provided by the reviewers and the AC, and we are refining our work to further enhance the quality of the paper.

chenrxi commented 10 months ago

I'm a little surprised. May I ask what was the most serious problem the reviewers raised?

liujiaming1996 commented 10 months ago

> I'm a little surprised. May I ask what was the most serious problem the reviewers raised?

The final scores from the four reviewers were two "Weak Accept" (Reviewer DEpm, Reviewer 56Wj) and two "Borderline Accept" (Reviewer vo5W, Reviewer dPMr). Notably, Reviewer vo5W, after reading our rebuttal, wrote, "I would like to update my rating and lean towards acceptance." Reviewer dPMr, on the other hand, had assigned a "Borderline Accept" score prior to the rebuttal and did not participate in any discussion.

Furthermore, the three reviewers who engaged in the discussion all mentioned that most or almost all of their concerns had been addressed, and two of them raised their scores after the discussion. In particular, all three considered their concerns about the theoretical justification and intuition behind the different domain representations of our ViDAs to be resolved, and Reviewer DEpm also acknowledged that the design motivation of the teacher model had been addressed.

Finally, both Reviewer DEpm and Reviewer dPMr acknowledged that "our paper is easy to follow and performs well," while Reviewer DEpm and Reviewer 56Wj also emphasized that "our proposed method seems well-motivated and is a nice idea." Specifically, we first comprehensively explore the different domain representations of adapters with trainable high-rank and low-rank embedding spaces. We then inject Visual Domain Adapters (ViDA) into the pre-trained model; the high-rank and low-rank branches adapt to the current domain distribution and maintain continual domain-shared knowledge, respectively (see the sketch below).
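
To make this concrete, a minimal sketch of the high-rank / low-rank adapter idea could look like the following (illustrative PyTorch only; the layer names, chosen ranks, and the simple additive fusion are placeholders rather than our exact implementation):

```python
import torch
import torch.nn as nn


class ViDASketch(nn.Module):
    """Illustrative sketch of a ViDA-style adapter (not the repository's code).

    A frozen pre-trained linear layer is augmented with two trainable
    bottleneck branches: a low-rank branch intended to maintain
    domain-shared knowledge and a high-rank branch intended to fit the
    current domain distribution. Their outputs are added to the frozen
    layer's output.
    """

    def __init__(self, dim: int, low_rank: int = 4, high_rank: int = 128):
        super().__init__()
        # Frozen pre-trained projection (stands in for a backbone layer).
        self.pretrained = nn.Linear(dim, dim)
        self.pretrained.requires_grad_(False)

        # Trainable low-rank branch: squeeze to a small bottleneck, then back up.
        self.low_down = nn.Linear(dim, low_rank)
        self.low_up = nn.Linear(low_rank, dim)

        # Trainable high-rank branch: expand to a wider embedding, then project back.
        self.high_down = nn.Linear(dim, high_rank)
        self.high_up = nn.Linear(high_rank, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Original (frozen) path plus the two adapter paths.
        return (
            self.pretrained(x)
            + self.low_up(self.low_down(x))
            + self.high_up(self.high_down(x))
        )


if __name__ == "__main__":
    layer = ViDASketch(dim=768)
    features = torch.randn(2, 196, 768)  # e.g. ViT patch tokens
    print(layer(features).shape)  # torch.Size([2, 196, 768])
```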

However, we were also surprised that our paper was not accepted. We will incorporate the additional content provided in the rebuttal into the main paper and continue to improve it.