IDEA-Research / T-Rex

API for T-Rex2: Towards Generic Object Detection via Text-Visual Prompt Synergy
https://deepdataspace.com/home

Question about CA44 results #34

Closed wjn922 closed 3 months ago

wjn922 commented 3 months ago

Hi, congratulations on the wonderful T-Rex and T-Rex2 projects!

I'd like to know whether the CA44 results in the T-Rex paper were reported on the validation set or the test set. Besides, do you have the specific results for Figure 7 (the MAE and NMAE values on CA44)? Many thanks!
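For reference, MAE and NMAE here are the standard per-image counting metrics used by benchmarks such as FSC-147. The sketch below shows the commonly used definitions; the exact formulas in the T-Rex paper may differ, and the example counts are hypothetical.

```python
def mae(preds, gts):
    """Mean Absolute Error over per-image object counts."""
    return sum(abs(p - g) for p, g in zip(preds, gts)) / len(gts)

def nmae(preds, gts):
    """Normalized MAE: each absolute error is divided by the
    ground-truth count before averaging, so images with many
    objects do not dominate the score."""
    return sum(abs(p - g) / g for p, g in zip(preds, gts)) / len(gts)

# Hypothetical predicted vs. ground-truth counts for three images
preds = [12, 48, 7]
gts = [10, 50, 7]
print(mae(preds, gts))   # (2 + 2 + 0) / 3 ≈ 1.333
print(nmae(preds, gts))  # (0.2 + 0.04 + 0) / 3 = 0.08
```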

Mountchicken commented 3 months ago

Hi @wjn922. We evaluate on the train, val, and test sets; all datasets in this config are used for evaluation. The specific results can be found here.

wjn922 commented 3 months ago

Thanks for your quick response!

In my understanding, you adopt the CA44 train set for training and then directly evaluate the model on the CA44 val/test sets, the FSC-147 test set, and the FSCD-LVIS test set. Am I correct? I am curious why you also use the CA44 train set for evaluation.

Also, after checking the results, I found that the performance of GroundingDINO and BMNet+ is much better than that reported in the T-Rex paper. What is the reason?

Many thanks in advance.

Mountchicken commented 3 months ago

Sorry for the confusion. I found that the Grounding DINO and FamNet entries were actually results from a certain version of T-Rex. I've updated the results and they should be correct now. As for the CA-44 benchmark, we use it only for evaluation (all of the train, val, and test sets are evaluated on) and we do not train on it.

wjn922 commented 3 months ago

Thanks for your reply. Great work again!