Stanford-ILIAD / explore-eqa

Public release for "Explore until Confident: Efficient Exploration for Embodied Question Answering"
https://explore-eqa.github.io/

Baseline Low Performance #2

Open yyuncong opened 1 month ago

yyuncong commented 1 month ago

Hi,

Thank you for the great work and the tremendous effort of open-sourcing the baselines!

I tried the VLM baseline and noticed that the model performance is much lower than the reported results (closer to random selection). This is quite confusing because I did not modify the codebase (other than the data path).

Could you help me double check if the baselines are functioning properly? I am also actively investigating if this is caused by my local environment. Thank you for your help!

allenzren commented 4 weeks ago

Hi Yuncong, thanks for checking out our work!

This is my bad --- when we ran the experiments, we were using a different version of the Prismatic VLM, since the official repo had not been released yet. It was a 13B model, and I am not sure whether that specific checkpoint was released in the end. If you would like to improve the performance, I would suggest trying a different checkpoint from their repo and checking whether the weighting parameters for the semantic values make sense (i.e., whether the semantic values look reasonable). You can also look into other newer VLMs.
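On the second point, a quick way to check is to log the per-step semantic values and see how peaked the resulting weighting is; if the values are nearly flat, the semantic term barely affects frontier selection. A minimal sketch (the function, array, and temperature below are hypothetical stand-ins for whatever the pipeline actually computes, not names from this repo):

```python
import numpy as np

def summarize_semantic_values(semantic_values, temperature=1.0):
    """Print simple statistics of per-frontier semantic values and the softmax
    weights they induce, to check whether the semantic term is informative."""
    v = np.asarray(semantic_values, dtype=np.float64)
    w = np.exp(v / temperature)
    w /= w.sum()
    entropy = -(w * np.log(w + 1e-12)).sum()
    print(f"values : min={v.min():.3f} max={v.max():.3f} std={v.std():.3f}")
    print(f"weights: max={w.max():.3f} entropy={entropy:.3f} (uniform={np.log(len(w)):.3f})")
    # If the entropy is close to the uniform value, the semantic values are
    # barely influencing which frontier gets selected.

# Example: values this flat give a near-uniform weighting.
summarize_semantic_values([0.12, 0.10, 0.11, 0.13], temperature=0.5)
```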

allenzren commented 4 weeks ago

Do you notice whether the question answering is particularly bad, or whether the semantic exploration does not work?

yusirhhh commented 2 weeks ago

The evaluation results are summarized below. Notably, without any adjustments to the codebase, the achieved metrics are substantially lower than those reported in the paper:

Total cases: 500
Successful cases (weighted): 118
Successful cases (max): 120
Success rate (weighted): 23.60%
Success rate (max): 24.00%

@allenzren @yyuncong

yyuncong commented 2 weeks ago

> The evaluation results are summarized below. Notably, without any adjustments to the codebase, the achieved metrics are substantially lower than those reported in the paper:
>
> Total cases: 500
> Successful cases (weighted): 118
> Successful cases (max): 120
> Success rate (weighted): 23.60%
> Success rate (max): 24.00%
>
> @allenzren @yyuncong

Thank you for summarizing the evaluation results! Given that the questions are all multiple-choice with at most 4 options, these results suggest that the current pipeline barely helps question answering?
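For reference, with at most 4 options the random-guessing baseline is 1/4 = 25%, and 118 out of 500 (23.6%) is statistically indistinguishable from that. A quick check in plain Python (nothing repo-specific):

```python
import math

n, k = 500, 118          # total cases, successful cases (weighted)
p_chance = 1 / 4         # 4-way multiple choice

p_hat = k / n
# Standard error of the chance-level proportion over n trials
se = math.sqrt(p_chance * (1 - p_chance) / n)
z = (p_hat - p_chance) / se

print(f"observed = {p_hat:.3f}, chance = {p_chance:.3f}, z = {z:.2f}")
# z is about -0.72: well within +/-1.96, i.e. not significantly different from guessing.
```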

allenzren commented 2 weeks ago

Hi @yyuncong @yusirhhh, thanks for looking into this! I think something is off right now, given that the success rate is not even above 25% --- even if the exploration is not working as well as in the original experiments, question answering should not be that bad if the VLM is functioning. I can look into this over the weekend if that helps.

yusirhhh commented 1 week ago

@allenzren @yyuncong I am troubleshooting this issue and would appreciate it if you could provide the images corresponding to the questions from your experiments. This would help me investigate the VQA performance and determine whether the low accuracy is due to the VLM's VQA capabilities.
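In the meantime, one way to separate VQA quality from exploration is to feed a handful of saved views and their questions directly to the VLM and score the multiple-choice answers; if that is already near 25%, the checkpoint itself is the problem. A rough sketch, assuming the load()/get_prompt_builder()/generate() interface from the prismatic-vlms README (the checkpoint ID is a placeholder; adapt to however the model is loaded in this repo):

```python
import torch
from PIL import Image
from prismatic import load  # assumed interface from the prismatic-vlms README

# Placeholder checkpoint ID -- substitute whichever Prismatic checkpoint you are testing.
vlm = load("prism-dinosiglip+7b")
vlm.to("cuda", dtype=torch.bfloat16)

def answer_mcq(image_path, question, choices):
    """Ask a 4-way multiple-choice question about one saved view and return the
    raw generated text (parse the predicted letter out of it afterwards)."""
    image = Image.open(image_path).convert("RGB")
    lettered = " ".join(f"{letter}. {text}" for letter, text in zip("ABCD", choices))
    prompt_builder = vlm.get_prompt_builder()
    prompt_builder.add_turn(
        role="human",
        message=f"{question} {lettered} Answer with the letter only.",
    )
    return vlm.generate(
        image,
        prompt_builder.get_prompt(),
        do_sample=False,
        max_new_tokens=8,
    )
```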

yusirhhh commented 1 week ago

@allenzren When I sample views from the scene, I find that the "../hm3dsem/topdown" folder is missing. Could you please tell me how to generate the top-down files?

allenzren commented 5 days ago

@yusirhhh I added the script that I used for getting the top-down views. I literally went to the HM3D website and downloaded the top-down views from there (example) --- they are not that high resolution and I did not use them to generate questions.
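If downloading them is not an option, a rough alternative is to render a top-down navigability map with habitat-sim's pathfinder. This sketch assumes an already-configured habitat_sim.Simulator (`sim`) with the HM3D scene loaded and uses the pathfinder's get_topdown_view API; note that it gives a binary navigable-area map, not the textured render from the website:

```python
import numpy as np
from PIL import Image

# `sim` is assumed to be an existing habitat_sim.Simulator with the HM3D scene loaded.
meters_per_pixel = 0.05
floor_height = sim.pathfinder.get_bounds()[0][1]  # y of the scene's lower bound

# Boolean grid of navigable cells at that height.
topdown = sim.pathfinder.get_topdown_view(meters_per_pixel, floor_height)
Image.fromarray(topdown.astype(np.uint8) * 255).save("topdown.png")
```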

yyuncong commented 2 days ago

> Hi @yyuncong @yusirhhh, thanks for looking into this! I think something is off right now, given that the success rate is not even above 25% --- even if the exploration is not working as well as in the original experiments, question answering should not be that bad if the VLM is functioning. I can look into this over the weekend if that helps.

Hi! I would like to follow up on the performance issue. I was wondering if there have been any updates or progress on this matter? Thank you for your help!