lipoja / URLExtract

URLExtract is a Python class for collecting (extracting) URLs from a given text, based on locating TLDs.
MIT License

Race condition in cache loading #55

Closed. supernothing closed this issue 4 years ago.

supernothing commented 4 years ago

Appreciate your work on this library! When using urlextract in a concurrent environment, I encountered a race condition when loading the cache file. As a simple test case:

from concurrent.futures import ThreadPoolExecutor
from urlextract import URLExtract

t = ThreadPoolExecutor(max_workers=8)

def test():
    a = URLExtract()
    a.update()

for i in [t.submit(test) for _ in range(100)]:
    print(i.result())

In my environment, this results in:

~/race.py in test()
      8 def test(): 
      9     a = URLExtract()
---> 10     a.update()
     11                                       
     12 for i in [t.submit(test) for _ in range(100)]:

~/venv/lib/python3.6/site-packages/urlextract/urlextract_core.py in update(self)
    146             return False
    147                                          
--> 148         self._reload_tlds_from_file()
    149
    150         return True

~/venv/lib/python3.6/site-packages/urlextract/urlextract_core.py in _reload_tlds_from_file(self)
    114         """
    115
--> 116         tlds = sorted(self._load_cached_tlds(), key=len, reverse=True)
    117         re_escaped = [re.escape(str(tld)) for tld in tlds]
    118         self._tlds_re = re.compile('|'.join(re_escaped))

~/venv/lib/python3.6/site-packages/urlextract/cachefile.py in _load_cached_tlds(self)
    216
    217                 set_of_tlds.add("." + tld)
--> 218                 set_of_tlds.add("." + idna.decode(tld))
    219
    220         return set_of_tlds

~/venv/lib/python3.6/site-packages/idna/core.py in decode(s, strict, uts46, std3_rules)
    390         trailing_dot = True
    391     for label in labels:
--> 392         s = ulabel(label)
    393         if s:
    394             result.append(s)

~/venv/lib/python3.6/site-packages/idna/core.py in ulabel(label)
    309
    310     label = label.decode('punycode')
--> 311     check_label(label)
    312     return label
    313

~/venv/lib/python3.6/site-packages/idna/core.py in check_label(label)
    237         label = label.decode('utf-8')
    238     if len(label) == 0:
--> 239         raise IDNAError('Empty Label')
    240
    241     check_nfc(label)

I'm not sure if urlextract is meant to be concurrency-safe, but if so, maybe using something like filelock would be appropriate here.
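A minimal sketch of the file-lock idea, using only the standard library. The `SimpleFileLock` class below is hypothetical and only stands in for what a library like `filelock` provides; a production implementation would also need stale-lock recovery and platform-specific handling:

```python
import os
import time
import tempfile

class SimpleFileLock:
    """Minimal cross-process lock based on atomic exclusive file
    creation (O_CREAT | O_EXCL). Sketch only: no stale-lock cleanup."""

    def __init__(self, path):
        self.path = path
        self._fd = None

    def acquire(self, timeout=10.0, poll=0.05):
        deadline = time.monotonic() + timeout
        while True:
            try:
                # Creation with O_EXCL is atomic: exactly one
                # process/thread can create the lock file.
                self._fd = os.open(self.path,
                                   os.O_CREAT | os.O_EXCL | os.O_WRONLY)
                return
            except FileExistsError:
                if time.monotonic() >= deadline:
                    raise TimeoutError("could not acquire %s" % self.path)
                time.sleep(poll)

    def release(self):
        if self._fd is not None:
            os.close(self._fd)
            os.unlink(self.path)
            self._fd = None

    def __enter__(self):
        self.acquire()
        return self

    def __exit__(self, *exc):
        self.release()

lock_path = os.path.join(tempfile.gettempdir(), "tlds_cache.lock")
```

Wrapping both the cache download and the cache read in `with SimpleFileLock(lock_path):` would serialize access so no reader ever observes a half-written file.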

lipoja commented 4 years ago

Thanks for reporting this issue. I really appreciate the time you spent on this report and on the fix. I will take a deeper look at the PR this evening or tomorrow morning.

Thank you!

lipoja commented 4 years ago

I am not able to reproduce it now. I will try that tomorrow on another machine.

supernothing commented 4 years ago

Interesting, I'm not able to reproduce locally now either. Apologies for the bad reproducer.

Perhaps I should add more details on this issue: I started encountering it about a week ago in a service that loads and instantiates this module in multiple processes, similar to the non-working reproduction script. This code had worked previously, but the number of processes had recently been increased. I could occasionally get the service to start without error, but most starts resulted in the reported error while loading the cache file.

This behavior did not seem to present when performing an update repeatedly in a single thread. I assumed it was a race between reading and writing this cache file, because of the non-determinism and the fact that it worked in a single thread, but it's strange that this would suddenly stop reproducing. When I opened the PR, I tested the race with and without the locking, and it succeeded and failed as expected.

My only other guess is that this behavior was caused by intermittent bad responses from https://data.iana.org/TLD/tlds-alpha-by-domain.txt that weren't being caught, but I unfortunately don't have response data from when I was encountering this bug to confirm that.

supernothing commented 4 years ago

Having dug a little more, I found that the race just has a fairly tight window, given the small size of the file and the default buffer size of 8192 bytes. I'm not sure why it suddenly became likely to happen for me last week. This diff on master will help reproduce it; in the absence of a race it should have no effect other than speed:

diff --git a/urlextract/cachefile.py b/urlextract/cachefile.py
index 5a59d58..6de41b0 100644
--- a/urlextract/cachefile.py
+++ b/urlextract/cachefile.py
@@ -181,7 +181,9 @@ class CacheFile:
             try:
                 with urllib.request.urlopen(req) as f:
                     page = f.read().decode('utf-8')
-                    ftld.write(page)
+                    for i in page:
+                        ftld.write(i)
+                        ftld.flush()
             except HTTPError as e:
                 self._logger.error("ERROR: Can not download list ot TLDs. "
                                    "(HTTPError: {})".format(e.reason))

With a single thread, this executes fine, but with multiple threads, it fails with a variety of errors.
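The failure mode described above can be simulated deterministically in a single thread. Opening the cache with mode `"w"` truncates it immediately, so any reader that arrives between the truncation and the final flush sees a partial file (the path and contents below are illustrative, not urlextract's actual cache):

```python
import os
import tempfile

cache = os.path.join(tempfile.mkdtemp(), "tlds.txt")

# A previous update stored a complete TLD list.
with open(cache, "w") as f:
    f.write("COM\nNET\nORG\n")

# A second updater begins: mode "w" truncates the file at open(),
# and an early flush can land a partial payload on disk.
writer = open(cache, "w")
writer.write("CO")       # only part of the list written so far...
writer.flush()

# ...so a concurrent reader at this instant sees a truncated file.
seen_mid_write = open(cache).read()

writer.write("M\nNET\nORG\n")   # the updater eventually finishes
writer.close()
seen_after = open(cache).read()
```

With the real 8192-byte buffer, the window between truncation and the final flush is short, which is why the race only surfaces under heavy concurrency.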

supernothing commented 4 years ago

In addition to the exceptions, the much more common case is it returning an invalid TLD list. My guess is that, just by chance, the default buffer size caused last week's file to get cut off at a point that triggered an exception. This silent-failure behavior can be reproduced without the above diff:

from concurrent.futures import ThreadPoolExecutor
from urlextract import URLExtract

t = ThreadPoolExecutor(max_workers=8)

def test():
    a = URLExtract()
    a.update()
    print('Number of TLDs: %d' % len(a._load_cached_tlds()))

for i in [t.submit(test) for _ in range(100)]:
    b = i.result()
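Besides locking, another common way to close this window is to write the new cache to a temporary file and atomically swap it into place, so readers only ever see a complete old or new file. A sketch, with the hypothetical helper `write_cache_atomically` (not part of urlextract's API):

```python
import os
import tempfile

def write_cache_atomically(path, data):
    """Write data to a temp file in the same directory, then swap it
    into place with os.replace(). Readers see either the old complete
    file or the new complete file, never a truncated one."""
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # ensure bytes hit disk before the swap
        os.replace(tmp, path)      # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)             # clean up the temp file on failure
        raise

cache = os.path.join(tempfile.mkdtemp(), "tlds.txt")
write_cache_atomically(cache, "COM\nNET\nORG\n")
```

This does not serialize concurrent downloads the way a file lock would, but it does guarantee that `_load_cached_tlds()` never parses a half-written file.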