allenai / allenact

An open source framework for research in Embodied-AI from AI2.
https://www.allenact.org

export ALLENACT_VAL_METRICS = /path/to/metrics__val_*.json #265

Open 98ming opened 3 years ago

98ming commented 3 years ago

Problem / Question

How can I generate metrics_val.json files? Why does the code only generate metrics_test.json files?

jordis-ai2 commented 3 years ago

The currently supported way to get metrics for validation tasks is to run in test mode with those tasks, but we'll consider adding direct support in a future release.
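
For concreteness, a rough sketch of what such a test-mode run could look like from the command line (the experiment config, output directory, and checkpoint path are placeholders, and the exact flag names should be checked against your AllenAct version):

    # Sketch only: experiment config, output dir, and checkpoint are placeholders.
    allenact -b projects/my_project/experiments my_experiment_config \
        -o my_output_dir \
        -c path/to/checkpoint.pt \
        --eval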

Lucaweihs commented 3 years ago

@98ming Does the above make sense and work for you? I agree that this is a bit confusing; really it should say "_inference{}" or something similar, since you can change the tasks you're "testing" to be anything you'd like by modifying the

     def test_task_sampler_args(...) -> Dict[str, Any]

function in your experiment config. Here's an example of this being done for the AI2-THOR rearrangement task; notice that changing which lines are commented out changes which tasks are evaluated. You can better organize these evaluation runs by adding an --extra_tag when running the tests from the command line.
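
A minimal sketch of that idea, assuming your existing experiment config class is named MyExperimentConfig and already implements valid_task_sampler_args (the class names here are placeholders, not code from the linked example):

    from typing import Any, Dict, List, Optional

    # Sketch only: MyExperimentConfig stands in for your own experiment config
    # class, which is assumed to already implement valid_task_sampler_args.
    class MyEvalOnValConfig(MyExperimentConfig):
        def test_task_sampler_args(
            self,
            process_ind: int,
            total_processes: int,
            devices: Optional[List[int]] = None,
            seeds: Optional[List[int]] = None,
            deterministic_cudnn: bool = False,
        ) -> Dict[str, Any]:
            # Delegate to the validation task sampler args so that running in
            # test mode (--eval) reports metrics on the validation tasks.
            return self.valid_task_sampler_args(
                process_ind=process_ind,
                total_processes=total_processes,
                devices=devices,
                seeds=seeds,
                deterministic_cudnn=deterministic_cudnn,
            )

With a config like this, the metrics files written during a test-mode run should reflect the validation tasks rather than the test tasks.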