98ming opened this issue 3 years ago
The currently supported way to get metrics for validation tasks is to run test mode with those tasks; we'll consider adding direct support for this in a future release.
@98ming Does the above make sense and work for you? I agree that this is a bit confusing; really it should say "_inference{}" or something similar, since you can change the tasks you're "testing" to be anything you'd like by modifying the `def test_task_sampler_args(...) -> Dict[str, Any]` function in your experiment config. Here's an example of this being done for the AI2-THOR rearrangement task; notice that changing which lines are commented out will change which tasks are evaluated. You can better organize these runs by adding an `--extra_tag` when running the tests from the command line.
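To make this concrete, here is a minimal sketch (not from the thread) of an experiment config whose `test_task_sampler_args` simply forwards to `valid_task_sampler_args`, so that running AllenAct in test mode evaluates the validation tasks. `MyBaseExperimentConfig` and `EvalOnValidationConfig` are placeholder names for your own config classes, and the exact keyword arguments may differ slightly between AllenAct versions:

```python
from typing import Any, Dict, List, Optional

# Hypothetical import: replace with your own existing experiment config.
from my_project.experiments.my_base_config import MyBaseExperimentConfig


class EvalOnValidationConfig(MyBaseExperimentConfig):
    """Experiment config whose "test" tasks are actually the validation tasks."""

    def test_task_sampler_args(
        self,
        process_ind: int,
        total_processes: int,
        devices: Optional[List[int]] = None,
        seeds: Optional[List[int]] = None,
        deterministic_cudnn: bool = False,
    ) -> Dict[str, Any]:
        # Forward to the validation sampler args so that running in test mode
        # evaluates the validation tasks. The output file will still be named
        # metrics_test*.json, but its contents come from the validation set.
        return self.valid_task_sampler_args(
            process_ind=process_ind,
            total_processes=total_processes,
            devices=devices,
            seeds=seeds,
            deterministic_cudnn=deterministic_cudnn,
        )
```

You would then point AllenAct's test/evaluation run at this config from the command line (optionally adding an `--extra_tag`, as noted above, to keep the resulting runs organized); the metrics are still written as `metrics_test*.json`, but they are computed over the validation tasks.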
Problem / Question
How do I generate metrics_val.json files? Why does the code only generate metrics_test.json files?