Closed reconrad48 closed 1 year ago
For the second question:
how do I output these results into a CSV file that can be viewed in Excel or Google Sheets
Once you have a pandas DataFrame, you can easily run the .to_csv method and convert it to a CSV file. There are many other converters as well. So, once you have a crawl_df, you can run:
crawl_df.to_csv('my_crawl_file.csv', index=False)
For the first question, can you please share the exact code that you are running and the URL you are crawling, so I can check?
Thanks Elias, but I ran the code exactly as it is shown above, only with a real website URL. The only previous command was !pip3 install advertools. But I am beginning to think that it isn't going to matter if I need to create a complex pandas DataFrame. That's mind-boggling really, that by default the SEO crawler pulls data (the default setting) into a JSON lines file that is basically unusable until I build a highly complex DataFrame matching columns to data... Perhaps some programmers might find that easy to do, but I find it mind-numbing torture. Of course it has nothing to do with your tools; it's just spacy and pandas really.
The read_json function reads the jsonlines file and converts it to a DataFrame. It's all handled by default, so there is nothing to worry about.
Once you have a DataFrame, you can easily convert it to another file format if needed.
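For concreteness, here's a minimal sketch of the whole round trip. The two jsonlines rows below are made-up samples standing in for real crawl output, and the file names are placeholders for whatever you passed to the crawler:

```python
import pandas as pd

# Stand-in for a real crawl file: the crawler writes one JSON object per
# line (jsonlines). These two sample rows are made up for illustration.
with open('output_file.jl', 'w') as f:
    f.write('{"url": "https://example.com/", "title": "Home", "status": 200}\n')
    f.write('{"url": "https://example.com/about", "title": "About", "status": 200}\n')

# lines=True tells pandas to parse the file as jsonlines,
# giving one row per crawled URL and one column per extracted field
crawl_df = pd.read_json('output_file.jl', lines=True)

# Export to CSV for Excel or Google Sheets; index=False drops the row numbers
crawl_df.to_csv('my_crawl_file.csv', index=False)
```

The column headers come from the keys in each JSON object, so there is no need to map columns to data by hand.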
Can you please try running the code on one of the online notebook services like Google Colab or Kaggle and let me know what happens?
If you do so and still have issues, please share access to the notebook, and I'll look into it.
@reconrad48 Did you try this? Did it work?
Sorry, haven't gotten back around to it yet... I will work on it in the next few days. I'll run a few questions through ChatGPT to see if it can help expedite my learning lol.
This is what ChatGPT said I should do lol... does it look right?
At least we have some time to do some work before GPT takes over everything hahahaha...
Yes, it's correct, but there's no need for the orient parameter.
Try it on one of the online platforms and feel free to re-open the issue if you still have problems.
My apologies in advance, I am still a Python padawan. I swept through the documentation but couldn't find an answer.
I am assuming that after I run:
and then
I am supposed to see the data from the scrape? (I did put in a real URL, and a 1.6 MB .jl output file was created.) But when I run that cell, nothing happens. No errors, but no data either. I am testing this in a Jupyter notebook... all requirements are installed, etc.
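A common notebook gotcha that could explain the "nothing happens" symptom, assuming the cell only assigns the result to a variable: an assignment by itself produces no output in Jupyter. A sketch with a stand-in DataFrame:

```python
import pandas as pd

# Stand-in for the DataFrame you'd get after reading the crawl file;
# the real one would come from pd.read_json('output_file.jl', lines=True)
crawl_df = pd.DataFrame({'url': ['https://example.com/'], 'title': ['Home']})

# An assignment like the one above displays nothing in a notebook cell.
# End the cell with the bare variable name, or print it, to see the data:
print(crawl_df.head())
```

In a notebook, ending the cell with just `crawl_df` (no print needed) renders the DataFrame as a table.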
Also, if I may, as an SEO practitioner, how do I output these results into a CSV file that can be viewed in Excel or Google Sheets for further analysis? If you're willing, can you provide an example command to convert the .jl file to .csv? I tried to install json-lines but apparently it's no longer supported. I assume what I need is to import csv, but figuring out how to structure it to capture the data with column headers etc. is a bit intimidating.
Thanks