Thanks for your interest!
There was a conflict in our config caused by different experiment setups: I had uploaded the config from another version by mistake. We have now updated the config to the correct version and uploaded the known-class pretrained model. The key difference is that our paper uses the DINO ViT-Base/16 model; "dino_deitsmall16_pretrain.pth" is only used for an ablation. Feel free to ask if you run into other problems reimplementing our results.
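For reference, the public DINO ViT-Base/16 weights can be pulled via torch.hub like this. This is a generic sketch, not necessarily how our repo wires the checkpoint in:

```python
import torch

# Generic sketch: load the public DINO ViT-Base/16 weights via torch.hub
# (the backbone used in the paper); dino_deitsmall16 is the smaller
# ablation-only variant. Not necessarily how this repo loads the checkpoint.
backbone = torch.hub.load('facebookresearch/dino:main', 'dino_vitb16')
backbone.eval()
```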
Looking forward to meeting you at ICCV 2023.
Hello, and thank you for your prompt response. I genuinely appreciate the swift updates and clarifications to the repository.
With the provided pretrained model and the new script, I was able to reproduce the results for the CUB dataset. However, when I trained with a pretrained model I created myself, the performance was slightly lower. It may have to do with what the configuration specifies, which I'm not sure about: num_labeled_classes: 98 and num_unlabeled_classes: 98. Since CUB has 200 classes, I was expecting num_labeled_classes: 100 and num_unlabeled_classes: 100. Is there a specific reason behind this choice?
Thank you once again for your assistance, and I'm eager to dive deeper into your work.
You are right; there were still some errors in the configuration. The correct values are num_labeled_classes: 100 and num_unlabeled_classes: 100.
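A quick sanity check on the corrected split, using the two keys discussed above (the configs/ path is my assumption about the repo layout):

```python
import yaml  # pip install pyyaml

# Sanity check on the corrected CUB split: 200 classes = 100 known + 100 novel.
# The 'configs/' path is an assumption about the repo layout.
with open('configs/discover_vit_cub.yaml') as f:
    cfg = yaml.safe_load(f)

assert cfg['num_labeled_classes'] == 100
assert cfg['num_unlabeled_classes'] == 100
assert cfg['num_labeled_classes'] + cfg['num_unlabeled_classes'] == 200
```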
Thanks for your attention and for helping me fix the bug in the repo.
Hello, and thank you for sharing the code associated with your paper, "Class-relation Knowledge Distillation for Novel Class Discovery."
I've been attempting to reproduce the results from your work over the past two weeks, and I've run into some challenges that I'd appreciate your assistance with.
Below are the configurations I've been using, based on the repository, the paper, and common sense:
pretrain_vit_cub.yaml:
discover_vit_cub.yaml:
Several of the parameters in the scripts are marked with "not sure if correct". Could you please confirm whether these are accurate or whether adjustments are needed? Moreover, if any additional steps or configurations are necessary to reproduce the results, I would greatly appreciate guidance on that.
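In case it helps to compare directly, here is the small helper I use to dump both configs verbatim; the configs/ paths reflect my local layout and may differ from the repo's:

```python
import yaml  # pip install pyyaml

# Dump both configs verbatim so the exact values I'm running with are visible;
# the 'configs/' paths reflect my local layout and may differ from the repo's.
for path in ('configs/pretrain_vit_cub.yaml', 'configs/discover_vit_cub.yaml'):
    with open(path) as f:
        print(f'--- {path} ---')
        print(yaml.safe_dump(yaml.safe_load(f), sort_keys=False))
```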
I'm genuinely interested in your work and am even considering attending ICCV 2023. It would be wonderful to have the chance to discuss this in person, but resolving these issues here would benefit both me and other researchers who might face similar challenges.
Thank you for your time and consideration. I'm looking forward to your response.