Closed dmthomson closed 1 year ago
Not at the moment. The whole thing is designed to write to disc incrementally, which is very helpful for performance, since it doesn't have to hold and process huge amounts of data in memory.
I'm curious, why do you think this would be a good thing? What are you trying to achieve?
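For context, a minimal sketch of how a crawl is typically started, assuming a `.jl` (JSON lines) output file; the file grows by one line per crawled URL while the crawl runs:

```python
import advertools as adv

# Start a crawl that writes one JSON line per crawled URL to disk.
# The domain and "output_file.jl" are placeholders.
adv.crawl(
    url_list="https://example.com",   # start URL(s)
    output_file="output_file.jl",     # grows incrementally while crawling
    follow_links=True,                # discover and crawl internal links
)
```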
I am writing a frontend and would like to provide the status of the crawl as well as results. I am using FastAPI to trigger the crawl and was just hoping to avoid writing a bunch of code to read the file.
I was also under the impression that the contents were written to the file all at once. But now I see that's not the case.
Very interesting application. I'd love to know more.
would like to provide the status of the crawl
As the crawl is happening, a new line is added for each URL crawled. You can easily check the number of lines in the output file and display that to the user (X URLs crawled). If crawling in list mode you can also provide a percentage, and/or "X URLs remaining".
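For example, a minimal sketch of that progress check, assuming the crawl writes to `output_file.jl`:

```python
# Count the lines written so far; each line is one crawled URL.
# "output_file.jl" is a placeholder for the file passed to adv.crawl.
with open("output_file.jl") as f:
    crawled = sum(1 for _ in f)
print(f"{crawled} URLs crawled")
```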
wanting to avoid writing a bunch code to read the file
This can be done with a single pandas command, read_json. Or am I missing something?
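Roughly like this, assuming the same `output_file.jl` placeholder; since the crawl output is JSON lines, `lines=True` is needed:

```python
import pandas as pd

# Read the .jl (JSON lines) crawl output into a DataFrame.
crawl_df = pd.read_json("output_file.jl", lines=True)
```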
Yeah, I am familiar with the pandas read_json command. I've used the tool before; I was just looking to see if there was a way to stream things into a different place, like say Redis or a Kafka topic. At the end of the day I need to persist the crawl data and return some of the information back to the client. I can definitely do this using the dataframe method and writing the results to some persistent location.
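One rough sketch of that dataframe-then-persist approach, pushing rows into Redis; the connection details and the "crawl_results" key are assumptions, not anything advertools provides:

```python
import json
import pandas as pd
import redis

# Read the crawl output and push each row onto a Redis list.
# Host, port, and the "crawl_results" key are placeholders.
r = redis.Redis(host="localhost", port=6379)
crawl_df = pd.read_json("output_file.jl", lines=True)
for record in crawl_df.to_dict(orient="records"):
    r.rpush("crawl_results", json.dumps(record, default=str))
```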
This can be done with a single pandas command, read_json. Or am I missing something?
Yes, I will go that route and read the lines of the file to check crawl status. The only other thing might be calculating the number of potential pages/lines.
I am taking a domain URL as an argument, so I have to create the list dynamically. Does follow mode offer a way to determine how many pages will get crawled, or does advertools provide a good way to determine the pages I could put in the list? Perhaps crawling the sitemap would do.
If crawling in list mode you can also provide a percentage, and/or "X URLs remaining".
I am building a suite of SEO tools that focus around automation.
Very interesting application. I'd love to know more.
Cool. I think the XML sitemap can be downloaded quickly, and it can provide a generally good estimate of how big the site is and the number of URLs to expect. Of course there could be discrepancies, but most of the time I think it can provide a good estimate.
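A quick sketch of getting that estimate with advertools, assuming the sitemap URL is known (sitemap index files are expanded into their child sitemaps):

```python
import advertools as adv

# Download the XML sitemap(s) into a DataFrame; one row per URL.
# The sitemap URL is a placeholder.
sitemap_df = adv.sitemap_to_df("https://example.com/sitemap.xml")
print(f"~{len(sitemap_df)} URLs expected")
```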
Looking forward to seeing what you build!
Is it possible to add functionality so we don't have to write to disc before being able to analyze the results?
Directly to a df or some other python object would be great!