kevinzg / facebook-scraper

Scrape Facebook public pages without an API key
MIT License

Getting Blank CSV file #997

Open urooj471 opened 1 year ago

urooj471 commented 1 year ago

I am trying to scrape comments from the Facebook page "SacheHaiSatyanarayan", but I am getting a blank CSV file. However, when I scrape posts and comments from the "TimesofIndia" page, I get proper output. What could be the issue?

import csv
import time
from facebook_scraper import FacebookScraper

# Set the output file name and number of pages to scrape
output_file = 'output.csv'
num_pages = 15

# Set the waiting time between API calls (in seconds)
wait_time = 5

# Create an instance of the FacebookScraper class
scraper = FacebookScraper()

# Open the output file and write the header row
with open(output_file, 'w', encoding='utf-8', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(['post_id', 'post_text', 'commenter_name', 'commenter_text'])

    # Iterate over the desired number of pages of posts
    for page_num, post in enumerate(scraper.get_posts(account="SacheHaiSatyanarayan", pages=num_pages, options={"comments": True}), start=1):
        # Check if the comments_full key is present and is a list in the post object
        if 'comments_full' in post and isinstance(post['comments_full'], list):
            # Iterate over the comments for the current post and write to the output file
            for comment in post['comments_full']:
                writer.writerow([post['post_id'], post['text'], comment['commenter_name'], comment['comment_text']])

        # Wait before making the next API call to avoid being blocked
        time.sleep(wait_time)

        # Print progress information to the console
        print(f'Processed page {page_num}/{num_pages}')

I have given a snapshot of my code above.
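The CSV-writing logic itself can be checked in isolation against a hand-made post dict, independent of the scraper. The sketch below uses the same `comments_full`, `commenter_name`, and `comment_text` keys the script already assumes; the sample post data is entirely hypothetical:

```python
import csv
import io

def comments_to_rows(post):
    """Flatten one post dict into CSV rows, one per comment."""
    rows = []
    if isinstance(post.get('comments_full'), list):
        for comment in post['comments_full']:
            rows.append([post['post_id'], post['text'],
                         comment['commenter_name'], comment['comment_text']])
    return rows

# Hypothetical sample mimicking the shape the script expects
sample_post = {
    'post_id': '123',
    'text': 'hello',
    'comments_full': [
        {'commenter_name': 'A', 'comment_text': 'first'},
        {'commenter_name': 'B', 'comment_text': 'second'},
    ],
}

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(['post_id', 'post_text', 'commenter_name', 'commenter_text'])
for row in comments_to_rows(sample_post):
    writer.writerow(row)
print(buf.getvalue())
```

If this produces rows but the real run does not, the problem is upstream: the scraper is yielding no posts (or posts without comments) for that page, not the CSV code.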

neon-ninja commented 1 year ago

Try passing cookies as per the README.
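Before wiring a cookies file into the scraper, it is worth confirming the file itself is a valid Netscape-format cookies.txt (the format the README describes); the standard library's `MozillaCookieJar` can load it, and if that load fails, the scraper will not get usable cookies either. The file name and cookie values below are hypothetical:

```python
from http.cookiejar import MozillaCookieJar

def check_cookie_file(path):
    """Load a Netscape-format cookies.txt; raises if missing or malformed."""
    jar = MozillaCookieJar(path)
    jar.load()
    return len(jar)  # number of cookies parsed

# Example: write a minimal (fake) cookies.txt and verify it loads
sample = (
    "# Netscape HTTP Cookie File\n"
    ".facebook.com\tTRUE\t/\tTRUE\t2147483647\tc_user\t1000\n"
    ".facebook.com\tTRUE\t/\tTRUE\t2147483647\txs\tabc\n"
)
with open("cookies.txt", "w") as f:
    f.write(sample)

print(check_cookie_file("cookies.txt"))
```

A common failure mode is an export that is actually JSON or HTML rather than the tab-separated Netscape format; this check catches that before any scraping is attempted.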

urooj471 commented 1 year ago

I have passed cookies as per the README, but I am still getting a blank CSV file. Could you please look into it?

import csv
import time
from facebook_scraper import FacebookScraper

# Set the output file name and number of pages to scrape
output_file = 'output.csv'
num_pages = 15

# Set the waiting time between API calls (in seconds)
wait_time = 5

# Create an instance of the FacebookScraper class
scraper = FacebookScraper()

# Open the output file and write the header row
with open(output_file, 'w', encoding='utf-8', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(['post_id', 'post_text', 'commenter_name', 'commenter_text'])

    # Iterate over the desired number of pages of posts
    for page_num, post in enumerate(scraper.get_posts(account="SacheHaiSatyanarayan", pages=num_pages, cookies="C:/Users/urooj/Downloads/cookies.txt", options={"comments": True}), start=1):
        # Check if the comments_full key is present and is a list in the post object
        if 'comments_full' in post and isinstance(post['comments_full'], list):
            # Iterate over the comments for the current post and write to the output file
            for comment in post['comments_full']:
                writer.writerow([post['post_id'], post['text'], comment['commenter_name'], comment['comment_text']])

        # Wait before making the next API call to avoid being blocked
        time.sleep(wait_time)

        # Print progress information to the console
        print(f'Processed page {page_num}/{num_pages}')
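When the CSV comes out blank, it helps to know whether the page yielded no posts at all, or posts without comments. A small counter around the same iteration makes the distinction visible; it is sketched here against a stand-in list, since the real `scraper.get_posts(...)` call needs network access and valid cookies:

```python
def summarize(posts):
    """Count posts and comments from a get_posts-style iterator."""
    n_posts = n_comments = 0
    for post in posts:
        n_posts += 1
        comments = post.get('comments_full') or []
        n_comments += len(comments)
    return n_posts, n_comments

# Stand-in for scraper.get_posts(...): one post with a comment, one without
fake_posts = [
    {'post_id': '1', 'comments_full': [{'comment_text': 'hi'}]},
    {'post_id': '2', 'comments_full': None},
]

print(summarize(iter(fake_posts)))  # (2, 1)
```

If the real run reports zero posts, the page is not being scraped at all (cookies, rate limiting, or a private/restricted page); if it reports posts but zero comments, the `comments` option is the thing to investigate.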