department-of-veterans-affairs / va.gov-team

Public resources for building on and in support of VA.gov. Visit complete Knowledge Hub:
https://depo-platform-documentation.scrollhelp.site/index.html

[spike] NOD | Audit backend tests #71569

Closed (anniebtran closed this issue 9 months ago)

anniebtran commented 10 months ago

Value Statement

As a developer
I want to have comprehensive test coverage on backend logic for the NOD form
So that I can have more confidence that we have appropriately handled error cases and can catch future regressions if the code is changed or updated


Background Context

We are doing a deep dive into how we QA and ensure the integrity of our transactions within the NOD form before we re-release v2 to veterans. There's been a lot of work to gather information about what issues our system may run into as a veteran submits an NOD request, so this work is to make sure our written specs have all the known cases covered.

Relevant Links / Information related to test cases [TODO]

@saderagsdale I might need your help filling this in with all the right links

Tasks

Acceptance Criteria

Definition of Ready

Definition of Done

saderagsdale commented 10 months ago

Will see how tomorrow goes with testing, and make a call from there.

HeatherWidmont commented 10 months ago

Risk - this may not happen this sprint due to the increased scope of the ADG work

saderagsdale commented 9 months ago

Derek to shadow.

saderagsdale commented 9 months ago

@anniebtran Morning - Do you have the capacity to prioritize picking this ticket up today? Hoping to set expectations with Zach and team ASAP if we see something that extends the scope of testing.

HeatherWidmont commented 9 months ago

@saderagsdale Annie said she does have capacity to pick this up today

anniebtran commented 9 months ago

@saderagsdale Just stepped through the code with Derek, and the only tests we saw that were lacking were mostly around logging and a couple of nitpicky metadata-related details; nothing glaring, and the major pieces of functionality seem to be covered by our specs. I'll double-check the SimpleCov-generated code coverage report, but I think we're pretty good to go as far as backend test coverage for the various steps to submit an NOD form + evidence 👍

saderagsdale commented 9 months ago

@anniebtran What was missing from logging and metadata? Mostly asking to ensure we're covered if any submissions fail due to metadata validation issues.

Slight segue: Are we set up to start calling the status endpoint from Lighthouse for the new NOD submission endpoint, so we can see any upstream failures? If not, we can discuss how to monitor this tomorrow.

anniebtran commented 9 months ago

@saderagsdale

To clarify, we're just missing a couple of lines to test that the metadata is what we expect, and we could add more granular unit tests around logging. But I just (finally) found the file where we test logs more thoroughly in an integrated way (i.e. as part of a whole process vs. individually, as a small unit or step in the process), so the test coverage for logging actually feels pretty robust 🙌

> Mostly asking to ensure we're covered if any submissions fail due to metadata validation issues.

Good call — looking at the code, it appears we only use this metadata for our own records of the submission and there's no validation (as far as I can tell) that is happening with that field.
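
In case it's useful later, this is roughly what I mean by "a couple of missing lines": a minimal RSpec sketch, where the factory, method name, and metadata keys are placeholders rather than the actual vets-api code:

```ruby
# spec/models/appeal_submission_spec.rb -- illustrative only; the factory,
# method name, and metadata keys are placeholders, not the actual vets-api code.
require 'rails_helper'

RSpec.describe AppealSubmission, type: :model do
  describe '#upload_metadata' do
    it 'records the veteran identifiers we expect in the submission metadata' do
      submission = build(:appeal_submission)

      metadata = JSON.parse(submission.upload_metadata)

      # We only use this metadata for our own records of the submission,
      # so the assertion is about shape, not upstream validation.
      expect(metadata).to include('veteranFirstName', 'veteranLastName', 'zipCode')
    end
  end
end
```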

> Are we setup to start calling the status endpoint from Lighthouse for the new NOD submission endpoint, so we can see any upstream failures? If not we can discuss how to monitor this tomorrow.

Is the status endpoint you're referring to related to the polling stream of work to catch upstream failures? If so, I don't think we have anything related to that set up yet.
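
If/when we pick that up, I'd expect it to look roughly like a recurring job that asks Lighthouse for each submission's status by GUID and flags anything that comes back as a failure. A rough sketch only, since the service method, status values, and model fields here are assumptions rather than the real Lighthouse/vets-api contract:

```ruby
# Hypothetical polling job -- the service method, status values, and model
# fields are assumptions for illustration, not the actual vets-api/Lighthouse code.
class NodStatusPollingJob
  include Sidekiq::Worker

  FAILURE_STATUSES = %w[error].freeze

  def perform
    AppealSubmission.where(type_of_appeal: 'NOD', final_status: nil).find_each do |submission|
      response = DecisionReviewV1::Service.new
                                          .get_notice_of_disagreement(submission.submitted_appeal_uuid)
      status = response.body.dig('data', 'attributes', 'status')
      next if status.blank?

      if FAILURE_STATUSES.include?(status)
        Rails.logger.error("NOD submission failed upstream: guid=#{submission.submitted_appeal_uuid} status=#{status}")
      end

      submission.update!(final_status: status)
    end
  end
end
```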

HeatherWidmont commented 9 months ago

@saderagsdale what are our next steps here? Is this something you'd like to discuss in the parking lot?

anniebtran commented 9 months ago

Chatted w/ Eugene, Sade, and Derek in the parking lot. The overall findings were good: the major pieces of the code needed to submit an NOD in V2 are covered by both unit and integration tests. But we wanted to have some numbers around test coverage handy in case anyone asked, so I've pulled some from the SimpleCov-generated report (notes on reproducing the report are below the numbers).

- **Notice of Disagreements V1 controller** (the API endpoint that vets-website hits when it submits the form): 76.47% for the whole file
- **Decision Review V1 Service** (the interactions between our API and Lighthouse): 96.45% for the whole file
- **AppealSubmission model** (defines the `#submit_nod` method that performs the form and evidence submission to Lighthouse): 100% coverage for the whole file 😄
- **SubmitUpload job** (gets queued up in `AppealSubmission.submit_nod` and performs the evidence submission steps): 86.27% for the whole file
- **Decision Review Evidences controller** (the endpoint vets-website uses to upload evidence to S3 before the form is submitted): 100% for the whole file 😄
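
For anyone who wants to reproduce these numbers later: they come straight from SimpleCov's per-file report after a spec run. A minimal sketch of the setup, assuming the standard SimpleCov API (vets-api already has its own SimpleCov configuration, so the group/filter names here are just illustrative):

```ruby
# spec/spec_helper.rb -- minimal SimpleCov shape for a per-file coverage report.
# vets-api already configures SimpleCov; this sketch just shows how the numbers
# above can be reproduced.
require 'simplecov'

SimpleCov.start 'rails' do
  add_filter 'spec/'
  # Group the decision-review-related files so their per-file percentages
  # are easy to find in coverage/index.html
  add_group 'Decision Reviews' do |source_file|
    source_file.filename.include?('decision_review') ||
      source_file.filename.include?('appeal_submission')
  end
end
```

Run `bundle exec rspec` and open `coverage/index.html` to see the per-file breakdown.
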
HeatherWidmont commented 9 months ago

@anniebtran to update the tasks of this ticket & check off all completed boxes

saderagsdale commented 9 months ago

@anniebtran There's one more thing. Since we're submitting to the Benefits Intake API, I think we'd benefit from a review of their documentation to ensure we're set up for success in terms of schema validation. Additionally, when we do E2E testing, we'll want to use PDFs that test their requirements. What do you think?
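
One way to cover the E2E piece could be pre-generating fixture PDFs at the boundary values their docs call out, rather than hunting for real files. A rough sketch with placeholder dimensions and paths, since the real limits should come from the Benefits Intake documentation:

```ruby
# Illustrative fixture generator -- page dimensions, file names, and paths are
# placeholders; the real boundary values should come from the Benefits Intake
# API documentation.
require 'prawn'

FIXTURES = {
  'standard_letter.pdf' => [612, 792],   # 8.5in x 11in in PDF points
  'oversized_page.pdf'  => [1600, 1600]  # intentionally large page to probe upstream limits
}.freeze

FIXTURES.each do |name, (width, height)|
  Prawn::Document.generate("spec/fixtures/decision_review/#{name}", page_size: [width, height]) do |pdf|
    pdf.text "E2E evidence fixture: #{name} (#{width}x#{height} pts)"
  end
end
```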

anniebtran commented 9 months ago

Updated and checked things off of the task list ✅

Following up on a question from @saderagsdale about making sure our logs are distinct between the different forms: it looks like NOD's logging is indeed separate. The only place where it was initially bundled together was when the veteran uploads files to their form (pre-submission), and @data-doge took care of separating those NOD vs. SC logs out here. So I think we're good on logging.
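
For posterity, the thing that keeps them distinct is just carrying the form type in the evidence-upload log line, roughly like this (the helper and message text are illustrative, not the exact vets-api code):

```ruby
# Illustrative only -- the real vets-api log messages differ; the point is that
# evidence-upload logs carry the form type so NOD and SC entries can be
# filtered separately in monitoring.
require 'securerandom'

def log_evidence_upload(form_type:, attachment_guid:)
  Rails.logger.info(
    "Evidence upload to S3 succeeded: form_type=#{form_type} attachment_guid=#{attachment_guid}"
  )
end

log_evidence_upload(form_type: 'NOD', attachment_guid: SecureRandom.uuid)
```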

> Since we're submitting to the Benefits Intake API I think we'd benefit from a review of their documentation to ensure we're setup for success in terms of schema validation. Additionally, when we do E2E testing, we'll want to use PDFs that test their requirements. What do you think?

@saderagsdale Hmm, I'm not entirely sure I have the system architecture understood correctly, but we don't submit directly to the Benefits Intake API. From what I can tell, we go through an extra Lighthouse "layer": their Decision Reviews API, and from there Lighthouse calls the Benefits Intake API. We have limitations on what we're allowed to send to their Decision Reviews API (defined by a separate schema for that API), and if the data we send to the DR API is not compatible with their Benefits Intake API, then Lighthouse would probably need to update the requirements around what we're allowed to send to the DR API. So I don't believe reviewing the Benefits Intake API docs would be very fruitful, unless we're doing a deep-dive audit to fix something upstream. Does that all make sense?
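
To make the distinction concrete, the guardrail on our side is validating the payload against the Decision Reviews schema before we ever call Lighthouse. A rough sketch, where the gem choice and schema path are assumptions rather than the actual vets-api setup:

```ruby
# Illustrative only -- gem choice and schema path are assumptions, not the
# actual vets-api implementation. The point is that validation happens against
# the Decision Reviews API schema, not the Benefits Intake schema.
require 'json'
require 'json_schemer'

schema = JSONSchemer.schema(
  JSON.parse(File.read('lib/decision_review_v1/schemas/10182.json')) # NOD is VA Form 10182
)

form_payload = JSON.parse(File.read('tmp/example_nod_submission.json'))

unless schema.valid?(form_payload)
  errors = schema.validate(form_payload).map { |e| "#{e['data_pointer']}: #{e['type']}" }
  raise "NOD payload failed Decision Reviews schema validation: #{errors.join('; ')}"
end
```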

saderagsdale commented 9 months ago

@anniebtran @data-doge Ultimately, we do need to take into account the requirements for the Benefits Intake API, at least for the purpose of test criteria in the near term. API changes by LH tend to be planned a quarter in advance.

anniebtran commented 9 months ago

@saderagsdale Yeah, that definitely makes sense for our broader testing goals and building E2E test cases, which feels like it's beyond the scope of this ticket. It seems like the E2E-related work is covered by the ticket we discussed in refinement, though, so should we call this one good?