Closed: cbizon closed this issue 1 month ago
All: you have been assigned to review the Sprint 4 PR. Please take a look at the file and complete your review so Chris can merge this PR. Please complete it no later than EOB Monday, July 1.
Per our conversation yesterday on the DM call and today at TAQA, we should consider "removing redundant Chembl 'in clinical trials' ingests as a priority next sprint".
Completely agree. There was widespread support for improving how we handle clinical-trials info at the relay; then somehow that morphed into "it's not a priority, and we're not doing it". Please, let's reconsider this!
Edited to add: On first read, I assumed Sierra's "next sprint" referred to the one we're starting. To disambiguate: I don't think this should wait until Sprint 5. I have added an outline of tasks for Sprints 4 through 7 to the Sprint Planning table.
I think I'm okay with everything, with the possible exception of this:
> Fix Failing Automated Tests
> Fugu (CI):
> - Each ARA is expected to pass 40 tests from the Sprint 4 test suite, which focuses on TopAnswer results.
As I understand it, the idea of TopAnswer results is to ensure we can generate ground-truth results. Unsecret is focusing on creative results rather than ground truth, and boosts the scores of creative answers. As a result, we will return ground truth answers less often than we might. This is by design, since we are responding to "creative mode" queries.
Would different ARAs be allowed to develop their own test suites and passing criteria that they think are representative of their philosophy?
Good question.
I'll approve the changes, with the single reservation above, which I think would be good to discuss.
Updated with discussion from TACT call on 9/27