Closed anniebtran closed 9 months ago
Will see how tomorrow goes with testing, and make a call from there.
Risk - this may not happen this sprint due to the increased scope of the ADG work
Derek to shadow.
@anniebtran Morning - Do you have the capacity to prioritize picking this ticket up today? Hoping to set expectations with Zach and team ASAP if we see something that extends the scope of testing.
@saderagsdale Annie said she does have capacity to pick this up today
@saderagsdale Just stepped through the code with Derek and the only tests that we saw that were lacking were mostly around logging and a couple of nitpicky metadata related details, but nothing glaring — major pieces of functionality seem to be covered by our specs. I'll double check the simplecov generated code coverage report but I think we're pretty good to go as far as test coverage on the backend for the various steps to submit an NOD form + evidence 👍
@anniebtran What was missing from logging and metadata? Mostly asking to ensure we're covered if any submissions fail due to metadata validation issues.
Slight segue: are we set up to start calling the status endpoint from Lighthouse for the new NOD submission endpoint, so we can see any upstream failures? If not, we can discuss how to monitor this tomorrow.
@saderagsdale
To clarify, we're just missing a couple of lines to test that the metadata is what's expected, and we could add more granular unit testing around logging. But I just (finally) found the file where we test logs more thoroughly in an integrated way (i.e. as part of a whole process vs. individually, as a small unit or step in the process), so the test coverage for logging actually feels pretty robust 🙌
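For illustration, the "couple of lines" of metadata assertions described here boil down to comparing the built metadata hash against expected values. This is a hedged sketch: the `build_metadata` helper and field names are hypothetical stand-ins, not vets-api's actual code (though 10182 is the NOD form number).

```ruby
# Hypothetical helper standing in for wherever submission metadata is built.
def build_metadata(form)
  { "form_type" => "10182", "veteran_file_number" => form[:file_number] }
end

metadata = build_metadata(file_number: "123456789")

# The missing spec lines would be assertions along these lines:
expected = { "form_type" => "10182", "veteran_file_number" => "123456789" }
raise "metadata mismatch" unless metadata == expected
puts "metadata matches expected values"
```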
> Mostly asking to ensure we're covered if any submissions fail due to metadata validation issues.
Good call — looking at the code, it appears we only use this metadata for our own records of the submission and there's no validation (as far as I can tell) that is happening with that field.
> Are we set up to start calling the status endpoint from Lighthouse for the new NOD submission endpoint, so we can see any upstream failures? If not we can discuss how to monitor this tomorrow.
Is the status endpoint you're referring to related to the polling stream of work to catch upstream failures? If so, I don't think we have anything related to that set up yet.
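If we do pick up that polling work, the idea could be sketched roughly as below. Everything here is an assumption for illustration (the `poll_submission` helper, the `fetch_status` stub, and the terminal status names), not the real Lighthouse API.

```ruby
# Hedged sketch of polling a Lighthouse status endpoint to surface upstream
# failures. Status names are assumed terminal states, not confirmed LH values.
TERMINAL = %w[vbms error].freeze

def poll_submission(guid, fetch_status:, max_attempts: 3)
  max_attempts.times do
    status = fetch_status.call(guid) # real code would GET the LH status endpoint
    return status if TERMINAL.include?(status)
  end
  "pending" # still in flight after our polling budget
end

# Simulated responses standing in for successive HTTP calls:
responses = %w[received processing error]
fetch = ->(_guid) { responses.shift }
puts poll_submission("abc-123", fetch_status: fetch) # => error
```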
@saderagsdale what are our next steps here? Is this something you'd like to discuss in the parking lot?
Chatted w/ Eugene, Sade, and Derek in the parking lot — the overall findings were good, the major pieces of the code needed to submit an NOD in V2 were covered by both unit and integration tests. But we wanted to have some numbers around test coverage handy in case anyone asked, so I've pulled some from the simplecov generated report.
- 76.47% for the WHOLE file (the `create` action)
- 96.45% for the WHOLE file (the `#submit_nod` method that performs the form and evidence submission to Lighthouse)
- 100% coverage for the whole file 😄 (calls `AppealSubmission.submit_nod` and performs the evidence submission steps)
- 86.27% for the WHOLE file
- 100% for the whole file 😄
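For context on where "% for the whole file" numbers like these come from: SimpleCov builds its per-file percentage from Ruby's built-in line-coverage data, where each executable line has a hit count and non-executable lines are `nil`. A minimal sketch of that arithmetic, with hypothetical hit data:

```ruby
# One entry per line of a file: nil = not executable (comments, `end`, etc.),
# otherwise the number of times the line ran during the spec suite.
line_hits = [1, nil, 5, 0, nil, 2, 0, 3] # hypothetical data

relevant = line_hits.compact                 # lines that count toward coverage
covered  = relevant.count { |hits| hits > 0 } # lines executed at least once
percent  = (covered.to_f / relevant.size * 100).round(2)

puts percent # => 66.67
```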
@anniebtran to update the tasks of this ticket & check off all completed boxes
@anniebtran There's one more thing. Since we're submitting to the Benefits Intake API I think we'd benefit from a review of their documentation to ensure we're setup for success in terms of schema validation. Additionally, when we do E2E testing, we'll want to use PDFs that test their requirements. What do you think?
Updated and checked things off of the task list ✅
Following up on a question from @saderagsdale about making sure our logs are distinct between the different forms: it looks like NOD's logging is indeed separate. The only place it was initially bundled together was when the veteran would upload files to their form (pre-submission), and @data-doge took care of separating those NOD vs SC logs out here. So I think we're good on logging.
> Since we're submitting to the Benefits Intake API I think we'd benefit from a review of their documentation to ensure we're setup for success in terms of schema validation. Additionally, when we do E2E testing, we'll want to use PDFs that test their requirements. What do you think?
@saderagsdale Hmm, I'm not entirely sure I have the system architecture understood correctly, but we don't directly submit to the Benefits Intake API. From what I can tell, we go through an extra Lighthouse API "layer", their Decision Reviews API, and from there Lighthouse calls the Benefits Intake API. We're limited in what we're allowed to send to the Decision Reviews API (defined by that API's own schema), and if the data we send to the DR API isn't compatible with their Benefits Intake API, then Lighthouse would probably need to update the requirements around what we're allowed to send to the DR API. So I don't believe reviewing the Benefits Intake API docs would be very fruitful, unless we're doing a deep-dive audit to fix something upstream. Does that all make sense?
@anniebtran @data-doge Ultimately, we do need to take into account the requirements for the Benefits Intake API, at least for the purpose of test criteria in the near term. API changes by LH tend to be planned a quarter in advance.
@saderagsdale Yeah, that definitely makes sense for our broader testing goals and building E2E test cases, which feels like it's beyond the scope this ticket covers. Seems like the E2E-related work is covered by the ticket we discussed in refinement, though, so should we call this one good?
Value Statement
As a developer, I want to have comprehensive test coverage on backend logic for the NOD form, so that I can have more confidence that we have appropriately handled error cases and can catch future regressions if the code is changed or updated.
Background Context
We are doing a deep dive into how we QA and ensure the integrity of our transactions within the NOD form before we re-release v2 to veterans. There's been a lot of work to gather information about what issues our system may run into as a veteran submits an NOD request, so this work is to make sure our written specs have all the known cases covered.
Relevant Links / Information related to test cases [TODO]
@saderagsdale I might need your help filling this in with all the right links
Tasks
- `vets-api` endpoint (timebox to 3 hours)
- `master` branch

Acceptance Criteria
Definition of Ready
Definition of Done