jarumihooi opened 10 months ago
How can we make the process more automatic than downloading the evaluation repo, placing the prediction MMIFs in the correct location, and running the code by hand?
It seems that an evaluation could be triggered automatically whenever a new commit of prediction MMIFs is pushed to a designated location, for instance a subdirectory/task directory of the aapb-evaluations repository. Such a push could trigger an action that runs the evaluation code automatically.
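As a rough illustration of the idea, a GitHub Actions workflow could key off pushes that touch MMIF files under a designated directory. This is only a sketch: the `predictions/` path, the `evaluate.py` entry point, and its CLI flags are all hypothetical placeholders, not the repository's actual layout.

```yaml
# Hypothetical workflow: run evaluation when new prediction MMIFs are pushed.
name: run-evaluation
on:
  push:
    paths:
      - 'predictions/**/*.mmif'   # assumed drop location for prediction MMIFs
jobs:
  evaluate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.11'
      - name: Install dependencies
        run: pip install -r requirements.txt
      - name: Run evaluation            # entry point and flags are illustrative
        run: python evaluate.py --preds predictions/ --out results/
```

A workflow like this would remove the manual download-place-run loop for any task directory that follows an agreed-upon layout.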
Because
the goal is to improve the automation of future evaluation tasks (rather than retrofitting current evaluations to be automatic), we should brainstorm which components could be used to make running evaluations more automatic.
This issue will focus on which formats, templates, and common practices should be adhered to in order to allow better automation of this process.
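One concrete convention worth discussing is a uniform per-task directory layout, so that a single trigger/runner can serve every evaluation. The layout below is a hypothetical sketch, not the repository's current structure; all directory and file names are placeholders:

```
aapb-evaluations/
  <task-name>/        # one directory per evaluation task
    preds/            # prediction MMIFs are dropped here (the trigger point)
    golds/            # gold-standard annotations
    evaluate.py       # entry point exposing a uniform CLI across tasks
    results/          # written by the evaluation run
```

If every task followed the same shape and exposed the same entry-point interface, the automation would not need per-task special cases.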
Done when
Additional context
Questions to consider for moving towards automation:
What other to-dos and concerns should be addressed to improve this process flow?