Description
I met with Michael Robil [michael.robil@va.gov](mailto:michael.robil@va.gov) on 2024-05-07 to discuss some ongoing work on their end and how VAOS may or may not be impacting that.
Summary
Here's their issue: VAOSR-8064
SPL has been working on performance improvements and wants to cache appointment data using Redis. Because of how we currently make requests on the FE, and how we handle navigation between different appointment views, their cache gets prematurely invalidated while we still have the cached IDs rendered in our interface. This leads to errors, for example when cancelling an appointment using a now-stale cached ID.
We discussed a few ways to change how the VAOS FE makes requests so that it works better with their proposed caching solution:
Combine our multiple requests into one request
We currently make multiple requests when a user hits the appointment application's landing page. For example:
```
GET /vaos/v2/appointments?_include=facilities,clinics&start=2024-04-09&end=2025-06-08&statuses[]=booked&statuses[]=arrived&statuses[]=fulfilled&statuses[]=cancelled HTTP/1.1
GET /vaos/v2/appointments?_include=facilities,clinics&start=2024-01-10&end=2024-05-10&statuses[]=proposed&statuses[]=cancelled HTTP/1.1
```
These two requests could possibly be combined into a single request. I am not aware of why we currently make two requests, so we would need to research that.
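To make the idea concrete, here is a minimal sketch of what merging the two queries could look like. The `AppointmentQuery` shape and the `mergeQueries`/`buildUrl` helpers are hypothetical names, and whether the backend would accept one merged query like this is exactly the open question above:

```typescript
// Hypothetical sketch of merging the two landing-page queries into one request.
// ASSUMPTION: the API would accept the union of statuses over the widest date
// range; any per-status date narrowing (e.g. the shorter window used for
// proposed requests) would then have to happen client-side.

interface AppointmentQuery {
  start: string; // ISO date, YYYY-MM-DD
  end: string;   // ISO date, YYYY-MM-DD
  statuses: string[];
}

function mergeQueries(a: AppointmentQuery, b: AppointmentQuery): AppointmentQuery {
  return {
    start: a.start < b.start ? a.start : b.start, // earliest start wins
    end: a.end > b.end ? a.end : b.end,           // latest end wins
    statuses: Array.from(new Set(a.statuses.concat(b.statuses))), // de-duplicated union
  };
}

function buildUrl(q: AppointmentQuery): string {
  const parts = [
    '_include=facilities,clinics',
    `start=${q.start}`,
    `end=${q.end}`,
    ...q.statuses.map((s) => `statuses[]=${s}`),
  ];
  return `/vaos/v2/appointments?${parts.join('&')}`;
}

// The two example requests above, expressed as query specs:
const booked: AppointmentQuery = {
  start: '2024-04-09',
  end: '2025-06-08',
  statuses: ['booked', 'arrived', 'fulfilled', 'cancelled'],
};
const proposed: AppointmentQuery = {
  start: '2024-01-10',
  end: '2024-05-10',
  statuses: ['proposed', 'cancelled'],
};
const merged = mergeQueries(booked, proposed);
// merged covers 2024-01-10 through 2025-06-08 with five distinct statuses
```

One caveat with a naive merge is over-fetching: the union of the date ranges and statuses returns a superset of what the two original requests return, so the FE would need to filter after the fact, which is part of what the research above should weigh.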
Make requests on-demand instead of pre-fetching appointments
Instead of pre-fetching proposed appointments (requests) before a user navigates to the Pending appointments view, fetch pending appointments only once the user has navigated there. The same would apply to past appointments.
In general, we may want to revisit how and when we are hitting endpoints to ensure that we are doing so in the most efficient way.
Their team could change how they use Redis on their end to accommodate the current way we fetch appointments, but it would result in many Redis hits, which could defeat the purpose of some of the performance improvements.
Next Steps
Determine if we can help
Coordinate with Michael to determine best technical approach