jialuli-luka / PanoGen

Code and Data for Paper: PanoGen: Text-Conditioned Panoramic Environment Generation for Vision-and-Language Navigation

How could I get the eval_metrics in the test split? #4

Closed · tpxbps closed this issue 1 year ago

tpxbps commented 1 year ago

Hi! I found this code in VLN-DUET/map_nav_src/r2r/main_nav.py:

            if 'test' not in env_name:
                score_summary, _ = env.eval_metrics(preds)
                loss_str = "Env name: %s" % env_name
                for metric, val in score_summary.items():
                    loss_str += ', %s: %.2f' % (metric, val)
                write_to_record_file(loss_str+'\n', record_file)

But how can I get the eval_metrics when env_name='test'? Thanks!

jialuli-luka commented 1 year ago

Hi,

You need to submit the result file to the test leaderboard (e.g., https://eval.ai/web/challenges/challenge-page/97/leaderboard/270) to get the test performance.

Best,
Jialu
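For context: the ground-truth paths for the R2R test split are not publicly released, so eval_metrics cannot be computed locally and scores come only from the leaderboard. A minimal sketch of writing predictions in the R2R leaderboard format (a JSON list of instr_id / trajectory entries) is shown below; the key names on each prediction item are assumptions and should be adjusted to whatever the VLN-DUET agent actually returns.

    import json

    def save_test_submission(preds, out_file="submit_test.json"):
        """Sketch: dump predictions in the R2R leaderboard JSON format.

        Assumes each item in `preds` carries an instruction id and a
        trajectory of (viewpoint_id, heading, elevation) steps; the field
        names "instr_id" and "trajectory" are hypothetical here.
        """
        submission = []
        for item in preds:
            submission.append({
                "instr_id": item["instr_id"],
                "trajectory": item["trajectory"],
            })
        with open(out_file, "w") as f:
            json.dump(submission, f, indent=2)
        print("Wrote %d predictions to %s" % (len(submission), out_file))

The resulting JSON file can then be uploaded on the EvalAI challenge page linked above to obtain the test-split metrics.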

tpxbps commented 1 year ago

I see, thank you so much!