Closed Jackieam closed 1 year ago
Thanks for your interest.
Yes, the method for generating the support set for the detection task is the same as for the segmentation task, especially the Unsup part.
For the SupPR part, you need to train another feature extractor for the detection task.
Thank you for your reply.
Please allow me to ask about some specific details.
Are you using the same validation set as MAE-VQGAN (i.e. "2012_val_flattened_set.pth") for the detection task?
For example, in Unsup, is the training set taken from "pascal-5i/VOC2012/ImageSets/Main/train.txt"?
Is the support dataset selected by similarity comparison and saved in a format like "2012_support_set.pth", so that the results can be generated by "evaluate_detection/voc.py"?
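For what it's worth, the Unsup selection described above boils down to a nearest-neighbor lookup in feature space: extract features for every training and validation image, then for each validation image keep the most similar training image as its support example. Below is a minimal sketch of that selection step. The function name, the toy data, and the idea of saving the resulting index mapping with torch.save are my own illustration, not the repository's actual code; in the real pipeline the features would come from a pretrained extractor rather than random arrays.

```python
import numpy as np

def select_support(val_feats, train_feats, top_k=1):
    """For each validation image, pick the top_k most similar
    training images by cosine similarity of pre-extracted features."""
    # L2-normalize so the dot product equals cosine similarity
    v = val_feats / np.linalg.norm(val_feats, axis=1, keepdims=True)
    t = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    sim = v @ t.T  # shape: (num_val, num_train)
    # indices of the top_k most similar training images per query
    return np.argsort(-sim, axis=1)[:, :top_k]

# Toy demo: 4 "training" feature vectors; each "validation" vector
# is a slightly perturbed copy of training row 2, 0, and 3 respectively
rng = np.random.default_rng(0)
train = rng.normal(size=(4, 8))
val = train[[2, 0, 3]] + 0.01 * rng.normal(size=(3, 8))
idx = select_support(val, train)
print(idx[:, 0])  # each query retrieves its near-copy: [2 0 3]
```

The resulting mapping (validation image → chosen support image) could then be saved in a .pth-style file, e.g. `torch.save(mapping, "2012_support_set.pth")`, though the exact on-disk format the authors used is not shown in the thread.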
The setting I used is the one from this project: https://github.com/amirbar/visual_prompting
Hello!
I am quite fascinated by your work and have learned a lot from it. However, I am having some difficulty with the detection task. It appears that the support pairs have been preloaded, but after reviewing the published code I was unable to locate them.
Could you kindly help me generate the support set for the detection task, similar to the one in the segmentation part?