Closed captainGeech42 closed 3 years ago
I was talking with @cdong1012; I think this bug is some issue with the webhook notifications, where the stack trace was too long or contained an invalid character or something, rather than a bug in the REvil scraping. We'll see.
I recreated the bug and found that it only happens when an exception is raised while the scrape_victims function is being called (it can be any exception). In REvil's case, they sometimes time out and we get a RemoteDisconnected exception.
Once the exception occurs, the error is logged and we continue to the next site (in the try/except block in ransomwatch.main). Occasionally, when the site's session is not properly closed after an exception and the next site starts its session immediately, this bug occurs.
To recreate, simply add this to the scrape_victims function:
raise Exception("Some exception")
You might have to run it a few times to see the bug.
To fix this, close the site's session immediately when the exception occurs in ransomwatch.main:
try:
    s.scrape_victims()
except Exception:
    logging.error(f"Got an error while scraping {site.actor}, notifying")
    tb = traceback.format_exc()
    # send error notifications
    NotificationManager.send_error_notification(f"{site.actor} scraping", tb)
    # log exception
    logging.error(tb.strip())  # there is a trailing newline
    # clean up the site's session so it can't leak into the next iteration
    s.session.close()
    # skip the rest of this site since the data may be messed up
    continue
NOTE: Should we close the session at the end of the for loop too? It doesn't cause any problem right now, but it's probably a good idea to clean up the site after we are done with it.
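One way to get both behaviors at once (cleanup on error and at the end of each iteration) is a try/finally around each site. Here's a minimal sketch of that pattern; the FakeSite/FakeSession classes and the scrape_all function are illustrative stand-ins, not the actual ransomwatch API:

```python
import logging
import traceback

class FakeSession:
    """Stand-in for a real HTTP/DB session (illustrative only)."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

class FakeSite:
    """Stand-in for a site scraper (illustrative only)."""
    actor = "REvil"
    def __init__(self, should_fail):
        self.session = FakeSession()
        self.should_fail = should_fail
    def scrape_victims(self):
        if self.should_fail:
            raise Exception("Some exception")

def scrape_all(sites):
    for s in sites:
        try:
            s.scrape_victims()
        except Exception:
            # log and move on to the next site
            logging.error(traceback.format_exc().strip())
        finally:
            # runs on success *and* failure, so the session
            # can never leak into the next site's iteration
            s.session.close()

sites = [FakeSite(should_fail=True), FakeSite(should_fail=False)]
scrape_all(sites)
print(all(s.session.closed for s in sites))  # True: every session closed
```

With finally, the explicit close() in the except branch becomes unnecessary, and the end-of-loop cleanup asked about above comes for free.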
Good catch! I am sqlalchemy noob, whoops.
Closing the session makes sense, I'll add that fix in.
not sure what's going on. similar error w/ slack