JierunChen / Ref-L4

Evaluation code for Ref-L4, a new REC benchmark in the LMM era

Lack of complete evaluation code #1

Open zhaohongyin opened 5 months ago

zhaohongyin commented 5 months ago

It seems some evaluation code for the RefL4Dataset class is missing. Could you please update it?

JierunChen commented 5 months ago

Hi @zhaohongyin, the missing code has been pushed.

jay-vinin commented 4 months ago

@JierunChen I want to evaluate using Ref-L4, but I don't see the following modules in the code. Are they planned for an update? https://github.com/JierunChen/Ref-L4/blob/10a3344fe2c6edbdc6a8aac30120c0fb107f77f9/evaluate_pred.py#L3

JierunChen commented 4 months ago

Hi @jay-vinin, thanks for the notice. The missing code has been pushed.

huangb23 commented 4 months ago

Your work has made a significant contribution to the field, and I greatly appreciate the insights and advancements you have provided.

I am particularly interested in understanding how Grounding-DINO or Grounding-DINO 1.5 performs on the Ref-L4 benchmarks. Could you please provide some information or results regarding this? And what are the advantages and disadvantages of using Grounding-DINO compared to using Large Multimodal Models?

Thank you very much for your time and assistance.

JierunChen commented 4 months ago

Hi @huangb23, thanks for your interest in our work. Grounding-DINO focuses on open-vocabulary detection and phrase grounding, while our benchmark is tailored for referring expression comprehension (REC). Phrase grounding locates multiple targets for short phrases, whereas REC locates a unique object given a generally longer description. Thus, we do not include Grounding-DINO in our benchmark.
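
To illustrate the REC protocol described above (one ground-truth box and one predicted box per expression), here is a minimal scoring sketch. It assumes boxes in (x1, y1, x2, y2) format and an Acc@0.5-style IoU threshold; the function names and threshold are illustrative only, not the repository's actual evaluate_pred.py code.

```python
# Illustrative REC-style scoring sketch: one predicted box per expression,
# matched against a single ground-truth box via an IoU threshold.
# Not the repository's evaluate_pred.py; names and threshold are examples.

def box_iou(box_a, box_b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def rec_accuracy(predictions, ground_truths, iou_threshold=0.5):
    """Fraction of expressions whose predicted box matches the
    ground-truth box at or above the IoU threshold."""
    hits = sum(
        box_iou(pred, gt) >= iou_threshold
        for pred, gt in zip(predictions, ground_truths)
    )
    return hits / len(ground_truths)

# Example: two expressions, one correct localization and one miss.
preds = [[10, 10, 50, 50], [0, 0, 20, 20]]
gts   = [[12, 12, 48, 52], [100, 100, 140, 140]]
print(rec_accuracy(preds, gts))  # 0.5
```

Phrase grounding, by contrast, would require matching a variable number of predicted boxes against multiple ground-truth boxes per phrase, which is why the two tasks are evaluated differently.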