eliasdabbas / advertools

advertools - online marketing productivity and analysis tools
https://advertools.readthedocs.io
MIT License

opening .jl file command doesn't show 'my_output_file.jl' #250

Closed reconrad48 closed 1 year ago

reconrad48 commented 1 year ago

My apologies in advance; I'm still a Python padawan. I swept through the documentation but couldn't find an answer.

I am assuming that after I run:

import advertools as adv
adv.crawl('https://example.com', 'my_output_file.jl', follow_links=True)

and then

import pandas as pd
crawl_df = pd.read_json('my_output_file.jl', lines=True)

I am supposed to see the data from the scrape? (I did put in a real URL, and a 1.6 MB .jl output file was created.) But when I run that cell, nothing happens. No errors, but no data either. I am testing this in a Jupyter notebook; all requirements are installed, etc.

Also, if I may, as an SEO practitioner, how do I output these results into a CSV file that can be viewed in Excel or Google Sheets for further analysis? If you're willing, can you provide an example command for converting the .jl file to .csv? I tried to install json-lines, but apparently it's no longer supported. I assume what I need is to import csv, but how to structure it so the data is captured with column headers etc. is a bit intimidating.

Thanks

eliasdabbas commented 1 year ago

For the second question:

how do I output these results into a CSV file that can be viewed in Excel or Google Sheets

Once you have a pandas DataFrame, you can simply call the .to_csv method to convert it to a CSV file. There are many other converters as well. So, once you have a crawl_df, you can run:

crawl_df.to_csv('my_crawl_file.csv', index=False)

For the first question, can you please share the exact code you are running and the URL you are crawling, so I can check?
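Putting the two steps together, here is a minimal, self-contained sketch of the .jl-to-.csv conversion. Note that the first block only fabricates a tiny stand-in crawl file so the example runs on its own; a real advertools crawl file will already exist and have many more columns, and the filenames are placeholders.

```python
import pandas as pd

# Fabricate a tiny stand-in for a crawl output file (JSON Lines:
# one JSON object per line). Skip this step if you already have
# a real crawl file from adv.crawl.
with open('my_output_file.jl', 'w') as f:
    f.write('{"url": "https://example.com/", "title": "Example", "status": 200}\n')
    f.write('{"url": "https://example.com/about", "title": "About", "status": 200}\n')

# Each line becomes one row; the JSON keys become the column headers.
crawl_df = pd.read_json('my_output_file.jl', lines=True)

# Write a CSV that Excel or Google Sheets can open directly.
crawl_df.to_csv('my_crawl_file.csv', index=False)

print(crawl_df.shape)  # (2, 3)
```

No manual csv-module work or column mapping is needed; pandas infers the columns from the JSON keys.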

reconrad48 commented 1 year ago

Thanks Elias, but I ran the code exactly as shown above, only with a real website URL. The only previous command was !pip3 install advertools. But I am beginning to think it isn't going to matter if I need to create a complex pandas DataFrame. It's mind-boggling, really, that by default the SEO crawler pulls data into a JSON Lines file that is basically unusable until I build a highly complex DataFrame matching columns to data... Perhaps some programmers might find that easy to do, but I find it mind-numbing torture. Of course it has nothing to do with your tools; it's really spaCy and pandas.

eliasdabbas commented 1 year ago

The read_json function reads the JSON Lines file and converts it to a DataFrame. It's all handled by default, so there is nothing to worry about. Once you have a DataFrame, you can easily convert it to another file format if needed.
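To make the "handled by default" point concrete, here is a small sketch using two in-memory JSON Lines records in place of a real crawl file (the URLs and keys are made up for illustration): read_json with lines=True builds the rows and columns by itself.

```python
import io
import pandas as pd

# Two sample JSON Lines records standing in for crawl output.
jl_text = (
    '{"url": "https://example.com/", "title": "Home"}\n'
    '{"url": "https://example.com/blog", "title": "Blog"}\n'
)

# lines=True parses each line into one row and derives the columns
# from the JSON keys -- no manual DataFrame construction required.
crawl_df = pd.read_json(io.StringIO(jl_text), lines=True)

print(list(crawl_df.columns))  # ['url', 'title']
print(len(crawl_df))           # 2
```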

Can you please try running the code on one of the online notebook services like Google Colab or Kaggle and let me know what happens?

If you do so and still have issues, please share access to the notebook, and I'll look into it.

eliasdabbas commented 1 year ago

@reconrad48 Did you try this? Did it work?

reconrad48 commented 1 year ago

Sorry, I haven't gotten back around to it yet... I will work on it in the next few days. I'll run a few questions through GPTChat to see if it can help expedite my learning, lol.

reconrad48 commented 1 year ago

This is what GPTChat said I should do, lol... does it look right?

https://ibb.co/F37dG8b

eliasdabbas commented 1 year ago

At least we have some time to do some work before GPT takes over everything hahahaha...

Yes, it's correct, but there's no need for the orient parameter.

Try it on one of the online platforms and feel free to re-open the issue if you still have problems.