Closed samhotep closed 2 months ago
Attention: Patch coverage is 85.71429% with 2 lines in your changes missing coverage. Please review.
Project coverage is 85.84%. Comparing base (0fd7100) to head (d1eb6b7). Report is 9 commits behind head on main.
:exclamation: Current head d1eb6b7 differs from pull request most recent head c9f091d. Please upload reports for the commit c9f091d to get more accurate results.
| Files | Patch % | Lines |
|---|---|---|
| migrations/versions/c27fec1bfee2_.py | 83.33% | 2 Missing :warning: |
:umbrella: View full report in Codecov by Sentry.
@mtruj013 Sorry, I didn't explain well enough in the PR description; I'll update it as well.
Basically, what we had before was a descending sort, which took about 15-20s to complete for nearly any GET request to /security/notices. So we took out the sorting, which freed up resources for requests to other endpoints, but now the endpoint returns records in an unspecified order, meaning we can't take advantage of the limit and offset parameters, and the /security/notices page is returning data in the wrong order.
As a solution, I added the two indexes and changed to an ascending sort, which speeds up lookups using offset and limit (newer records tend to be further along in the db, even though the data isn't explicitly sorted). To keep this compatible with our /security/notices page, this PR also adds a new endpoint that returns the total number of records, so that we can index the "ascending-sorted" data in reverse order and show the most recent records on the first page of /security/notices.
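The effect of an index on this kind of ascending ORDER BY + LIMIT/OFFSET query can be illustrated with a minimal stdlib sqlite3 stand-in (the production site uses a different database, and the table, column, and index names here are assumptions, not the actual migration):

```python
# Sketch only: with an index on the sort column, the query planner can walk
# the index in order instead of sorting the whole table on every request.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notice (id TEXT PRIMARY KEY, published TEXT)")
conn.executemany(
    "INSERT INTO notice VALUES (?, ?)",
    [(f"USN-{i}", f"2024-01-{i % 28 + 1:02d}") for i in range(100)],
)
# Hypothetical index mirroring the ones added in the migration
conn.execute("CREATE INDEX ix_notice_published ON notice (published)")

# The query plan confirms the index is used (no temp B-tree sort step)
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM notice ORDER BY published ASC LIMIT 10 OFFSET 20"
).fetchall()
print(plan)
```

Without the index, the same EXPLAIN QUERY PLAN output includes a "USE TEMP B-TREE FOR ORDER BY" step, which is the full sort the old query paid for on every request.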
The reason we need a new endpoint, even though /notices.json already returns the total number of notices, is that with ascending order, fetching the most recent records requires knowing the total beforehand in order to compute the correct offset. The endpoint is fast enough (~100ms) to be called as part of each request to /notices.
This approach speeds up the lookups; most of the remaining request time is taken up by serialization of the db data.
I also tried clustering the db by the published column, but that caused a slowdown for all queries and no noticeable improvement to the sorting speed.
@mtruj013 I found a better way to fix this issue by reworking the query
LGTM!
Woohoooo!
Done
cves and notices are manually added to db objects
QA
./run serve
Issue / Card
https://github.com/canonical/ubuntu.com/issues/13957 https://github.com/canonical/ubuntu.com/issues/13956
Fixes WD-11938