Closed — jetyingjia closed this issue 2 months ago
Hi, @jetyingjia
1. BTW, computing 1B EVA-CLIP-E embeddings would take about 60 days on 8 NVIDIA A100 GPUs 😅.
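A quick back-of-envelope check of that estimate (the 60-day and 8-GPU figures come from the comment above; the implied per-GPU throughput is derived, not stated):

```python
# Sanity-check the claim: 1B EVA-CLIP-E embeddings in ~60 days on 8 A100s.
NUM_EMBEDDINGS = 1_000_000_000
NUM_GPUS = 8
DAYS = 60

gpu_seconds = NUM_GPUS * DAYS * 24 * 3600          # total GPU-seconds available
throughput_per_gpu = NUM_EMBEDDINGS / gpu_seconds  # images/sec each GPU must sustain
print(f"{throughput_per_gpu:.1f} images/sec per GPU")  # ~24.1
```

Roughly 24 images per second per GPU, which is a plausible encoding rate for a model as large as EVA-CLIP-E.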
Hi, @PhyscalX 1. Does this mean that the classify branch of Model D targets the concept distribution (the image embedding projected to 2560-dimensional distribution logits), rather than region pseudo labels (many papers use pseudo labels, e.g., OWL)? 2. Is the idea of learning a concept distribution recommended by any other papers? Thank you!
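The distinction above can be sketched as follows. This is not the authors' code, and the shapes, temperature, and names (`concept_bank`, `tau`) are assumptions: a frozen CLIP image embedding is projected onto a bank of 2560 concept text embeddings, and the softmax over those logits serves as a soft target for the classify branch, in place of a hard region pseudo label:

```python
import numpy as np

# Minimal sketch (assumed shapes/names, not the authors' implementation):
# build a concept-distribution target from a frozen CLIP image embedding.
rng = np.random.default_rng(0)

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

D, C = 1024, 2560                                   # embed dim, concepts (assumed D)
image_embed = l2norm(rng.standard_normal(D))        # frozen CLIP "teacher" embedding
concept_bank = l2norm(rng.standard_normal((C, D)))  # 2560 concept text embeddings

tau = 0.04                                          # temperature (assumed)
teacher = softmax(image_embed @ concept_bank.T / tau)  # (C,) soft target distribution

# The classify branch's logits would then be trained with a KL divergence
# against this distribution, rather than a one-hot pseudo label:
student_logits = rng.standard_normal(C)
log_p = np.log(softmax(student_logits / tau))
kl = np.sum(teacher * (np.log(teacher + 1e-12) - log_p))
```

The key design difference: a pseudo-label approach (e.g., OWL-style) would commit each region to a single class, while the soft distribution preserves the teacher's full similarity structure over all 2560 concepts.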
@PhyscalX Good idea. Do you have any plan to release the full project (including training)? I would like to fine-tune this model on my own datasets.
Please refer to issue #5; currently, we have no plan to release the full code. Instead, we have released the visual prompter and the losses for pre-training and fine-tuning.
Awesome work, congratulations! I have some questions about the Model D training. 1. In this model, you pre-train with [Mask, Concept] pairs. Does "concept" mean the text embeddings (2560 categories)? If so, how are these concepts assigned to the 1B masks? 2. The paper mentions 2.25TB of image embeddings. How is this data used?
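The 2.25TB figure is roughly consistent with storing 1B embeddings in half precision. The embedding dimension and dtype below are assumptions for illustration, not values stated in the thread:

```python
# Rough storage estimate (assumptions: 1024-dim embeddings in fp16;
# the actual dimension/dtype of the EVA-CLIP-E embeddings may differ).
num_embeddings = 1_000_000_000
dim = 1024          # assumed embedding dimension
bytes_per_value = 2  # fp16

total_tb = num_embeddings * dim * bytes_per_value / 1e12
print(f"{total_tb:.2f} TB")  # ~2.05 TB, same order as the reported 2.25TB
```

Any per-record metadata (image IDs, offsets) would push the total somewhat above the raw 2.05 TB.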