yyuncong opened this issue 1 month ago
Hi Yuncong, thanks for checking out our work!
This is my bad --- when we ran experiments, we were using a different version of the Prismatic VLM --- the official repo had not been released yet. It was a 13B model, and I am not sure whether that specific checkpoint was ever released in the end. If you would like to improve the performance, I would suggest trying a different checkpoint from their repo and also checking whether the weighting parameters for the semantic values make sense (i.e., the semantic values look reasonable). You can also look into other, newer VLMs.
Do you notice whether the question answering is particularly bad, or whether the semantic exploration is not working?
The outcomes of the evaluation are summarized below. Notably, without any adjustments to the codebase, the achieved metrics were marginally lower than those reported in the paper:
Total cases: 500
Successful cases (weighted): 118
Successful cases (max): 120
Success rate (weighted): 23.60%
Success rate (max): 24.00%
@allenzren @yyuncong
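For readers unfamiliar with the two metrics, here is a sketch of one plausible way the "weighted" and "max" successes could be computed; the repo's actual definitions may differ, and `success_counts` and the sample scores below are hypothetical:

```python
# Hypothetical illustration of two ways to score multiple-choice VQA answers;
# the repo's actual definitions of "weighted" vs. "max" success may differ.
# Each question has softmax-style scores over its 4 answer options.

def success_counts(option_scores, correct_idx):
    """Return (weighted successes, max successes) over all questions."""
    weighted = 0.0
    max_hits = 0
    for scores, correct in zip(option_scores, correct_idx):
        # "max": count a hit when the highest-scoring option is the answer.
        if scores.index(max(scores)) == correct:
            max_hits += 1
        # "weighted": credit the score mass placed on the correct option.
        weighted += scores[correct]
    return weighted, max_hits

scores = [
    [0.7, 0.1, 0.1, 0.1],       # confident and correct
    [0.3, 0.4, 0.2, 0.1],       # wrong argmax, partial credit
    [0.25, 0.25, 0.25, 0.25],   # uniform guess
]
weighted, max_hits = success_counts(scores, [0, 0, 2])
print(f"{weighted:.2f} {max_hits}")  # 1.25 1
```

Under this reading, the weighted count is always at most the total number of questions, and a uniformly guessing model contributes 0.25 per question, which is why a weighted success rate near 25% is a red flag.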
Thank you for summarizing the evaluation results! Given that the questions are all multiple choice with at most 4 options, the evaluation results suggest that the current pipeline barely helps question answering.
Hi @yyuncong @yusirhhh, thanks for looking into this! I think something is off right now, since the success rate is not even above 25% --- even if the exploration is not working as well as in the original experiments, question answering should not be that bad if the VLM functions. I can look into this this weekend if that helps.
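As a quick sanity check on the 25% point, a back-of-envelope normal-approximation z-test (using the weighted counts reported above) shows that 118/500 is statistically indistinguishable from random guessing on 4-choice questions:

```python
import math

# Back-of-envelope check: is the observed success rate consistent with
# random guessing on 4-choice questions (p0 = 0.25)? Numbers are the
# weighted counts reported earlier in this thread.
n, successes = 500, 118
p_hat = successes / n                      # 0.236
p0 = 0.25
se = math.sqrt(p0 * (1 - p0) / n)          # std. error under random guessing
z = (p_hat - p0) / se                      # |z| < 1.96: within chance level
print(f"observed rate {p_hat:.3f}, z = {z:.2f}")
```

So the reported numbers are consistent with the VLM answering at chance level, which supports the suspicion that question answering itself, not just exploration, is broken.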
@allenzren @yyuncong I am troubleshooting this issue and would appreciate it if you could provide the images corresponding to the questions from your experiments. This would help me investigate the VQA performance and determine whether the low accuracy is due to the VLM's VQA capabilities.
@allenzren When I sample views from the scene, I find that the "../hm3dsem/topdown" folder is missing. Could you please tell me how to generate the top-down files?
Hi! I would like to follow up on the performance issue. I was wondering if there have been any updates or progress on this matter? Thank you for your help!
Hi,
Thank you for the great work and the tremendous efforts of open-sourcing the baselines!
I tried the VLM baseline and noticed that the model's performance is much lower than the reported results (closer to random selection). This is quite confusing because I did not modify the codebase (other than the data path).
Could you help me double-check that the baselines are functioning properly? I am also actively investigating whether this is caused by my local environment. Thank you for your help!