The goal is to write data during runtime in the same way that it is currently done in the 2 scrape_emails_and_social_media.py files. This is useful because, in the event of an error during runtime, you don't lose all your results. Also, memory sometimes runs out when you are scraping huge websites, so it is important to have your findings written to disk as you go.
The developer who takes this should model it after the current implementation. If issue #1 is completed, then it will just be a matter of importing the needed modules.
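The idea can be sketched as follows: a minimal example of flushing each result to disk as soon as it is scraped, so a crash or out-of-memory error mid-run does not lose earlier rows. The `scrape_site` stub and the field names are hypothetical placeholders, not the project's actual API.

```python
import csv

def scrape_site(url):
    # Hypothetical scraper stub; the real project would extract
    # emails and social media links from the page at `url`.
    return {"url": url, "emails": "info@example.com"}

def scrape_incrementally(urls, out_path="results.csv"):
    """Write each result to disk as soon as it is scraped, instead
    of accumulating everything in memory and writing at the end."""
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["url", "emails"])
        writer.writeheader()
        for url in urls:
            row = scrape_site(url)
            writer.writerow(row)
            f.flush()  # push the row to disk immediately
```

If the process dies partway through the loop, every row scraped before the failure is already on disk.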