https://github.com/darpa-sail-on/Sail-On-API/pull/19 changes evaluation to only occur after a session is terminated. However, the client always calls `evaluate` within an active session and within rounds, since this lets us evaluate algorithm performance on a test as samples are introduced. Furthermore, metrics are computed per test, so completing a session shouldn't be a requirement for running evaluation. @rwgdrummer, could you revert the changes made in https://github.com/darpa-sail-on/Sail-On-API/pull/19, since they cause test failures downstream in the client? Also, if you are making changes that add constraints on a request the client uses, please ask for a review from a member of the Kitware team so that we are in the loop on these changes and don't discover them only after CI fails in other repositories. For now, we will skip the `evaluate` test in `PARInterface` until this is fixed.