Thanks for reporting this issue. I really appreciate the time you spent on this report and on the fix. I will have a deeper look at the PR this evening or tomorrow morning.
Thank you!
I am not able to reproduce it now. I will try that tomorrow on another machine.
Interesting, I'm not able to reproduce locally now either. Apologies for the bad reproducer.
Perhaps I should add more details on this issue: I started encountering it about a week ago in a service that loads and instantiates this module in multiple processes, similar to the non-working reproduction script. This code had worked previously, but the number of processes had recently been increased. I could occasionally get the service to start without error, but most starts resulted in the reported error while loading the cache file.
This behavior did not seem to occur when performing an update repeatedly in a single thread. Because of that non-determinism, and because a single thread worked, I assumed it was a race between reading and writing this cache file, but it's strange that it would suddenly stop reproducing. When I opened the PR, I tested the race with and without the locking, and it succeeded and failed as expected.
My only other guess is that this behavior was caused by intermittent bad responses from https://data.iana.org/TLD/tlds-alpha-by-domain.txt that weren't being caught, but unfortunately I don't have response data from when I was encountering this bug to confirm that.
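In case it's useful, a defensive check along these lines (purely a hypothetical sketch, not existing urlextract code) could reject a truncated or malformed response before it ever replaces the cache:

```python
# Hypothetical sanity check, not part of urlextract: reject a downloaded TLD
# list that looks truncated or malformed instead of writing it to the cache.
def looks_like_valid_tld_list(page: str) -> bool:
    lines = [line.strip() for line in page.splitlines() if line.strip()]
    # The IANA file begins with a "# Version ..." comment line.
    if not lines or not lines[0].startswith("#"):
        return False
    tlds = [line for line in lines if not line.startswith("#")]
    # The real list currently has well over a thousand entries; IDN TLDs
    # appear in their "XN--" punycode form, hence the hyphen stripping.
    return len(tlds) > 1000 and all(t.replace("-", "").isalnum() for t in tlds)
```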
Having dug a little more, I see that the race just has a fairly tight window, given the small size of the file and the default buffer size of 8192 bytes. I'm not sure why it suddenly became likely to happen for me last week. This diff on master helps reproduce it; in the absence of a race it should have no effect other than speed:
```diff
diff --git a/urlextract/cachefile.py b/urlextract/cachefile.py
index 5a59d58..6de41b0 100644
--- a/urlextract/cachefile.py
+++ b/urlextract/cachefile.py
@@ -181,7 +181,9 @@ class CacheFile:
         try:
             with urllib.request.urlopen(req) as f:
                 page = f.read().decode('utf-8')
-                ftld.write(page)
+                for i in page:
+                    ftld.write(i)
+                    ftld.flush()
         except HTTPError as e:
             self._logger.error("ERROR: Can not download list of TLDs. "
                                "(HTTPError: {})".format(e.reason))
```
With a single thread, this executes fine, but with multiple threads, it fails with a variety of errors.
In addition to the exceptions, the much more common case is that it returns an invalid TLD list. My guess is that, just by chance, the default buffer size last week caused the file to get cut off in a place that would raise an exception. This silent-failure behavior can be reproduced without the above diff:
```python
from concurrent.futures import ThreadPoolExecutor

from urlextract import URLExtract

t = ThreadPoolExecutor(max_workers=8)

def test():
    a = URLExtract()
    a.update()
    print('Number of TLDs: %d' % len(a._load_cached_tlds()))

for i in [t.submit(test) for _ in range(100)]:
    b = i.result()
```
Appreciate your work on this library! When using `urlextract` in a concurrent environment, I encountered a race condition when loading the cache file; a simple test case reproduces the resulting error in my environment. I'm not sure if `urlextract` is meant to be concurrency-safe, but if so, maybe using something like filelock would be appropriate here.
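For illustration only, here is a minimal sketch of what that might look like with the third-party filelock package; this is not urlextract's actual implementation, and the cache path is a placeholder:

```python
# Minimal sketch only, not urlextract's actual code: guard the TLD cache
# update and read with an inter-process lock from the "filelock" package.
# CACHE_PATH is a placeholder; urlextract computes its own cache location.
import urllib.request

from filelock import FileLock

CACHE_PATH = "/tmp/tlds-alpha-by-domain.txt"
TLD_LIST_URL = "https://data.iana.org/TLD/tlds-alpha-by-domain.txt"

def update_cache():
    # The ".lock" sidecar file serializes writers across processes, so no
    # other process ever sees a partially written cache file.
    with FileLock(CACHE_PATH + ".lock"):
        with urllib.request.urlopen(TLD_LIST_URL) as response:
            page = response.read().decode("utf-8")
        with open(CACHE_PATH, "w") as cache_file:
            cache_file.write(page)

def load_cache():
    # Readers take the same lock, so a concurrent update cannot hand them
    # a truncated TLD list.
    with FileLock(CACHE_PATH + ".lock"):
        with open(CACHE_PATH) as cache_file:
            return [line.strip() for line in cache_file
                    if line.strip() and not line.startswith("#")]
```

With readers and writers sharing the same lock, no process should ever observe a half-written cache file, which is the failure mode described above.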