Open zhaohongyin opened 4 months ago
Hi @zhaohongyin, missing code has been pushed.
@JierunChen I want to evaluate using ref-l4, but I don't see the following modules in the code. Are they planned for an update? https://github.com/JierunChen/Ref-L4/blob/10a3344fe2c6edbdc6a8aac30120c0fb107f77f9/evaluate_pred.py#L3
Hi @jay-vinin , thanks for the notice. Missing code has been pushed.
Your work has made a significant contribution to the field, and I greatly appreciate the insights and advancements you have provided.
I am particularly interested in understanding how Grounding-DINO or Grounding-DINO 1.5 performs on the Ref-L4 benchmarks. Could you please provide some information or results regarding this? And what are the advantages and disadvantages of using Grounding-DINO compared to using Large Multimodal Models?
Thank you very much for your time and assistance.
Hi @huangb23 , thanks for your interest in our work. Grounding-DINO focuses on open-vocabulary detection and phrase grounding, while our benchmark is tailored for referring expression comprehension (REC). Phrase grounding locates multiple targets for short phrases, whereas REC locates a unique object given a generally longer description. Thus, we do not include Grounding-DINO in our benchmark.
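For context on how an REC result is typically scored: since REC returns one box per expression, evaluation usually reduces to checking whether the predicted box overlaps the single ground-truth box above an IoU threshold (commonly 0.5). The exact metric in `evaluate_pred.py` may differ; the sketch below only illustrates this standard Acc@0.5 computation, and the function names `iou` and `rec_accuracy` are illustrative, not from the repo.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def rec_accuracy(preds, gts, thresh=0.5):
    """Fraction of predicted boxes whose IoU with the ground truth meets thresh."""
    hits = sum(1 for p, g in zip(preds, gts) if iou(p, g) >= thresh)
    return hits / len(gts)
```

A perfect prediction gives IoU 1.0; two unit boxes shifted by half their width give IoU well below 0.5 and count as a miss.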
It seems some evaluation code for the RefL4Dataset class is missing. Could you please update it?