Closed ValenteMendez closed 1 year ago
Hi @ValenteMendez
Thanks for spotting the bug. I am looking into it and I will make sure to keep you posted.
Hi @ValenteMendez
I just released a new version of the package (funda_scraper==1.0.7) with some updates. The previous problems should now be resolved. Please let me know if any issues remain.
Hi Chien, thank you very much for updating the code!
I downloaded the latest version and executed it. My parameters were the following:
```python
scraper = FundaScraper(area="amsterdam", want_to="buy", find_past=False, n_pages=275)
```
Although the code extracts 4,125 links, the CSV output contains only 1,615 rows of data. Of those 1,615 data points, around 97% of the postal codes are not from Amsterdam but from a different city.
Do you have any insights on what may be causing this?
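For what it's worth, a quick check like the one below could quantify the mismatch by counting how many scraped postcodes actually look like Amsterdam ones. The `zip` column name and the 1000-1109 four-digit postcode range for Amsterdam are assumptions on my part, not taken from funda_scraper's documentation:

```python
import re

# Hypothetical sanity check: count rows whose postcode falls in an
# assumed Amsterdam range. "zip" is a guessed column/key name and
# 1000-1109 is an assumed range, not values from funda_scraper.
def count_amsterdam(rows, zip_key="zip", lo=1000, hi=1109):
    n = 0
    for row in rows:
        # Extract the leading four digits, e.g. "1012 AB" -> 1012
        m = re.match(r"\s*(\d{4})", str(row.get(zip_key, "")))
        if m and lo <= int(m.group(1)) <= hi:
            n += 1
    return n

# Example with made-up rows:
sample = [{"zip": "1012 AB"}, {"zip": "3011 CD"}, {"zip": "1109 ZZ"}, {"zip": "??"}]
print(count_amsterdam(sample))  # -> 2
```

Running this over the CSV rows (e.g. loaded with `csv.DictReader`) would show whether the non-Amsterdam rows cluster around particular postcode ranges.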
Also, while running the code, I encountered an error:
```
requests.exceptions.SSLError: None: Max retries exceeded with url: /koop/[random listing]
```
I was able to fix this by adding a delay in the `scrape_one_link` method:
```python
def scrape_one_link(self, link: str) -> List[str]:
    """Scrape all the features from one house item given a link."""
```
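The delay itself is elided from the snippet above; a minimal sketch of that kind of workaround, pausing briefly before each request so the server is not hit in a tight loop, might look like this. The 1-2 second range is a guess, not a value from this issue:

```python
import random
import time

# Sketch of a pre-request delay; the default 1-2 second range is an
# assumption, not the value used in the actual fix.
def polite_pause(min_s: float = 1.0, max_s: float = 2.0) -> float:
    delay = random.uniform(min_s, max_s)
    time.sleep(delay)
    return delay

# Inside scrape_one_link you would call polite_pause() before fetching,
# e.g.:
#     def scrape_one_link(self, link):
#         polite_pause()
#         ... fetch and parse the page as before ...
```

A randomized delay (rather than a fixed one) spreads requests out a little more naturally; an alternative would be configuring retries with backoff on a `requests.Session`.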