stveit closed this pull request 4 months ago.
Attention: Patch coverage is 95.83333% with 1 line in your changes missing coverage. Please review.

Project coverage is 59.52%. Comparing base (a2be786) to head (93fca87). Report is 87 commits behind head on 5.10.x.
Files | Patch % | Lines |
---|---|---|
python/nav/asyncdns.py | 95.83% | 1 Missing :warning: |
11 files · 11 suites · 11m 8s :stopwatch:
3 325 tests: 3 325 :heavy_check_mark:, 0 :zzz:, 0 :x:
9 443 runs: 9 443 :heavy_check_mark:, 0 :zzz:, 0 :x:
Results for commit 93fca874.
Fixes #2669
Limits concurrent parallel lookups to 100 at a time.
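The limiting uses Twisted's `Cooperator` (see below). Roughly, the pattern looks like this; a minimal sketch assuming a `resolve(name)` function that returns a Deferred, with placeholder names rather than the actual `asyncdns` code:

```python
from twisted.internet import defer, task

def bounded_lookups(names, resolve, parallelism=100):
    """Run resolve(name) for every name, at most `parallelism` at a time."""
    # A single shared generator of lookup Deferreds...
    lookups = (resolve(name) for name in names)
    coop = task.Cooperator()
    # ...consumed by `parallelism` cooperative tasks. Each task pulls the
    # next lookup only after its current Deferred has fired, so no more
    # than `parallelism` lookups (and their sockets/file descriptors)
    # are in flight at once.
    workers = [coop.coiterate(lookups) for _ in range(parallelism)]
    return defer.DeferredList(workers)
```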
Tested manually that machine tracker no longer crashes from hitting the file descriptor limit, with `ulimit` set as low as 200 on a search with 4096 results. This would previously crash instantly.

The number of parallel lookups should perhaps be configurable, but that can be a separate PR if we see the need.
A practical change I made is that saving the results from a lookup to the dict `self.results` now happens during the parallel lookups instead of at the end. I made this change simply because using `Cooperator` changes what the `DeferredList` we use looks like: instead of all the separate lookup tasks being in the list, the list now contains 100 iterators. I don't know how to add a callback to the list that can go through all the results produced, rather than just the iterators themselves, so I moved the saving step into the smaller work units.
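For illustration, the per-lookup saving might look something like this; `Resolver`, `resolve`, and the method names are hypothetical stand-ins, not the actual `asyncdns` code:

```python
class Resolver:
    def __init__(self):
        self.results = {}

    def _save_result(self, result, name):
        # Runs as each lookup fires: the DeferredList now only wraps the
        # coiterate() workers, not the individual lookups, so results
        # must be recorded inside the work units themselves
        self.results[name] = result
        return result

    def _lookups(self, names, resolve):
        # Generator consumed by the Cooperator; each yielded Deferred
        # already carries the callback that records its own result
        for name in names:
            yield resolve(name).addCallback(self._save_result, name)
```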
Errors that were previously raised on their own are now swallowed by the `Cooperator`. I tried to fix this by saving all the errors and then raising the first one that occurred. I expect this keeps the interface of the `asyncdns` module intact, but a side effect is that the exceptions are only raised after the run is done, so it is slower to fail.
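Continuing the hypothetical `Resolver` sketch above, the error handling could look roughly like this (again placeholder names, not the actual implementation):

```python
# Each lookup Deferred would get self._save_error as its errback (e.g. via
# addCallbacks), and the outer DeferredList would get self._raise_first_error
# as a final callback.

def _save_error(self, failure, name):
    # Absorb the failure so the Cooperator keeps running, but remember it
    self._errors.append(failure)

def _raise_first_error(self, _deferredlist_result):
    if self._errors:
        # Only fires after all lookups have finished -- hence slower to fail
        self._errors[0].raiseException()
    return self.results
```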