Closed: Andrew-Dickinson closed this 3 months ago
Hmm, I may abandon this effort. Even with best-case backend performance (a cache hit and zero network overhead due to local hosting), we perform significantly worse than the existing map. Looking into some kind of batch pre-computation solution, similar to the existing map, is probably the way to go...
https://github.com/user-attachments/assets/0ec75925-b5e8-48e7-bcf3-f6b67539883b
Attention: Patch coverage is 99.20000% with 1 line in your changes missing coverage. Please review. Project coverage is 94.79%. Comparing base (0e1b434) to head (85b31dd).
| Files | Patch % | Lines |
|---|---|---|
| src/meshapi/util/map_data.py | 98.70% | 1 Missing :warning: |
Confirmed with Willard on a call tonight: we're going to kill this one and do the caching by pre-baking the map data into the npm build inside the Docker container that serves the map, similar to the behavior of the original map.
This PR adds django-cacheback, which allows the content of the map endpoints to be loaded asynchronously into Redis for caching purposes. At request time, Django just pulls the JSON blob from Redis and serves it, avoiding expensive database calls.