captainGeech42 / ransomwatch

Ransomware leak site monitoring

Something broken with REvil #9

Closed · captainGeech42 closed this issue 3 years ago

captainGeech42 commented 3 years ago
app_1    | 2021/04/20 18:36:25 [ERROR] Got an error while scraping REvil, notifying
app_1    | 2021/04/20 18:36:25 [ERROR] Error sending Discord notification (400): {"embeds": ["0"]}
app_1    | 2021/04/20 18:36:25 [ERROR] Failed to send error notification to Discord guild "test-discord"
app_1    | 2021/04/20 18:36:25 [ERROR] Traceback (most recent call last):
app_1    |   File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 699, in urlopen
app_1    |     httplib_response = self._make_request(
app_1    |   File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 445, in _make_request
app_1    |     six.raise_from(e, None)
app_1    |   File "<string>", line 3, in raise_from
app_1    |   File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 440, in _make_request
app_1    |     httplib_response = conn.getresponse()
app_1    |   File "/usr/local/lib/python3.9/http/client.py", line 1347, in getresponse
app_1    |     response.begin()
app_1    |   File "/usr/local/lib/python3.9/http/client.py", line 307, in begin
app_1    |     version, status, reason = self._read_status()
app_1    |   File "/usr/local/lib/python3.9/http/client.py", line 276, in _read_status
app_1    |     raise RemoteDisconnected("Remote end closed connection without"
app_1    | http.client.RemoteDisconnected: Remote end closed connection without response
app_1    |
app_1    | During handling of the above exception, another exception occurred:
app_1    |
app_1    | Traceback (most recent call last):
app_1    |   File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 439, in send
app_1    |     resp = conn.urlopen(
app_1    |   File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 755, in urlopen
app_1    |     retries = retries.increment(
app_1    |   File "/usr/local/lib/python3.9/site-packages/urllib3/util/retry.py", line 532, in increment
app_1    |     raise six.reraise(type(error), error, _stacktrace)
app_1    |   File "/usr/local/lib/python3.9/site-packages/urllib3/packages/six.py", line 734, in reraise
app_1    |     raise value.with_traceback(tb)
app_1    |   File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 699, in urlopen
app_1    |     httplib_response = self._make_request(
app_1    |   File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 445, in _make_request
app_1    |     six.raise_from(e, None)
app_1    |   File "<string>", line 3, in raise_from
app_1    |   File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 440, in _make_request
app_1    |     httplib_response = conn.getresponse()
app_1    |   File "/usr/local/lib/python3.9/http/client.py", line 1347, in getresponse
app_1    |     response.begin()
app_1    |   File "/usr/local/lib/python3.9/http/client.py", line 307, in begin
app_1    |     version, status, reason = self._read_status()
app_1    |   File "/usr/local/lib/python3.9/http/client.py", line 276, in _read_status
app_1    |     raise RemoteDisconnected("Remote end closed connection without"
app_1    | urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
app_1    |
app_1    | During handling of the above exception, another exception occurred:
app_1    |
app_1    | Traceback (most recent call last):
app_1    |   File "/app/ransomwatch.py", line 52, in main
app_1    |     s.scrape_victims()
app_1    |   File "/app/sites/revil.py", line 62, in scrape_victims
app_1    |     r = p.get(f"{self.url}?page={i}", headers=self.headers)
app_1    |   File "/app/net/proxy.py", line 101, in get
app_1    |     return self.session.get(*args, **kwargs)
app_1    |   File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 555, in get
app_1    |     return self.request('GET', url, **kwargs)
app_1    |   File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 542, in request
app_1    |     resp = self.send(prep, **send_kwargs)
app_1    |   File "/usr/local/lib/python3.9/site-packages/requests/sessions.py", line 655, in send
app_1    |     r = adapter.send(request, **kwargs)
app_1    |   File "/usr/local/lib/python3.9/site-packages/requests/adapters.py", line 498, in send
app_1    |     raise ConnectionError(err, request=request)
app_1    | requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
app_1    | 2021/04/20 18:36:25 [INFO] Finished all sites, exiting

Not sure what's going on. Similar error w/ Slack.

captainGeech42 commented 3 years ago

I was talking with @cdong1012; I think this bug is an issue in the webhook notifications, where the stack trace was too long or had an invalid character or something, rather than a bug with the REvil scraping. We'll see.
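
If the length theory is right, truncating the traceback before it goes into the embed should make the 400 go away. A minimal sketch of such a guard (hypothetical helper, not the project's actual NotificationManager code; assumes Discord's documented 4096-character cap on an embed description):

import requests

# Discord rejects an embed whose description exceeds 4096 characters;
# a full Python traceback can easily blow past that, which would explain
# a 400 response pointing at embeds[0] like the one logged above.
DISCORD_EMBED_DESC_LIMIT = 4096

def send_error_embed(webhook_url, title, traceback_text):
    desc = traceback_text
    if len(desc) > DISCORD_EMBED_DESC_LIMIT:
        # keep the tail of the traceback; the actual exception is at the end
        desc = "..." + desc[-(DISCORD_EMBED_DESC_LIMIT - 3):]
    payload = {"embeds": [{"title": title, "description": desc}]}
    resp = requests.post(webhook_url, json=payload, timeout=10)
    resp.raise_for_status()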

cdong1012 commented 3 years ago

I recreated the bug and found that it only happens when an exception is raised while the scrape_victims function is running (it can be any exception). In REvil's case, the site sometimes times out and we get the RemoteDisconnected exception.

Once the exception occurs, the error is logged and we move on to the next site (via the try/except block in ransomwatch.main). Occasionally, when the site's session is not properly closed after an exception and the next site opens its session immediately, this bug occurs.

To recreate, simply add this to the scrape_victims function:

raise Exception("Some exception")

You might have to run it a few times to see the bug.

To fix this, close the site's session immediately when the exception occurs in ransomwatch.main:

        try:
            s.scrape_victims()
        except Exception:
            logging.error(f"Got an error while scraping {site.actor}, notifying")

            tb = traceback.format_exc()

            # send error notifications
            NotificationManager.send_error_notification(f"{site.actor} scraping", tb)

            # log exception
            logging.error(tb.strip()) # there is a trailing newline

            # clean up site
            s.session.close()

            # skip the rest of the site since the data may be messed up
            continue

NOTE: Should we close the session at the end of the for loop, too? It doesn't cause any problems right now, but it's probably a good idea to clean up the site once we're done with it.
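
One way to handle that without duplicating the close call is a try/finally: the finally clause runs on success, on an exception, and even across the continue, so the session is always closed before the next site starts. A minimal sketch, assuming the loop structure visible in the traceback and that nothing after the try still needs the session:

        try:
            s.scrape_victims()
        except Exception:
            logging.error(f"Got an error while scraping {site.actor}, notifying")

            tb = traceback.format_exc()

            # send error notifications
            NotificationManager.send_error_notification(f"{site.actor} scraping", tb)

            # log exception
            logging.error(tb.strip()) # there is a trailing newline

            # skip the rest of the site since the data may be messed up
            continue
        finally:
            # runs whether the scrape succeeded or raised, and before the
            # continue above takes effect, so the session is always cleaned up
            s.session.close()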

captainGeech42 commented 3 years ago

Good catch! I'm a sqlalchemy noob, whoops.

Closing the session makes sense, I'll add that fix in.