jeffsnack opened this issue 1 year ago
Also experiencing this, only downloaded very recently and the issue started on my first request, so I don't think it's a frequency thing.
Have you tried this?
Yes, I tried it.
I updated the package and it worked yesterday.
But it can't scrape anything today; I don't know what's going on...
Same thing happening to me. I've been using the scraper for about a year and was all good until a few days ago.
Some details of my scraping job:
I was using v0.2.48 and have now updated it to v0.2.58. Same thing happening in both versions.
I am also getting the UnexpectedResponse: Your request couldn't be processed message. I pull posts from the same group of public pages about every two weeks using cookies. Currently using version 0.2.56. Also, when I try to get the page info for any page, I'm not getting anything back anymore (example):
print(get_page_info("Nintendo"))
{'reviews': <generator object FacebookScraper.get_page_reviews at 0x00000282339A9CF0>}
even for pages I know I was previously getting results for, and which I have verified are still up and not private.
Yep, same issue here. Was grabbing posts from public pages. Worked for almost a year but it's busted now. Guess Facebook caught on.
Try the latest master branch
Thanks @neon-ninja - however, after updating I am now getting the following response:
Facebook says 'Unsupported Browser'
That's just a warning, not an error
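For anyone who finds the warning noisy: since it is an ordinary Python UserWarning, it can be silenced with the standard library's warnings module. A minimal sketch (the message text is taken from the warning quoted above; this does not change what the scraper does, it only hides the message):

```python
import warnings

# Ignore UserWarnings matching the "Unsupported Browser" message.
# warnings.filterwarnings matches the regex against the start of the
# warning message, case-insensitively.
warnings.filterwarnings(
    "ignore",
    message=".*Unsupported Browser.*",
    category=UserWarning,
)

# Demonstration: record what still gets through.
with warnings.catch_warnings(record=True) as caught:
    # catch_warnings(record=True) resets filters to "always",
    # so re-apply the ignore filter inside the context.
    warnings.filterwarnings("ignore", message=".*Unsupported Browser.*")
    warnings.warn("Facebook says 'Unsupported Browser'", UserWarning)
    warnings.warn("some other warning", UserWarning)

print(len(caught))  # 1 -- only the non-matching warning was recorded
```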
I tried v0.2.59 and it works like a charm 🥳 Thank you!
I thought this was fixed for me but it's back. I'm on the latest code btw.
Here's my error:
sys:1: UserWarning: A low page limit (<=2) might return no results, try increasing the limit
Traceback (most recent call last):
File "/mnt/storage/Dropbox/Apps/Instamemes/instameme.py", line 35, in <module>
for post in get_posts(pageID, pages=1, cookies='/mnt/storage/Dropbox/Apps/Instamemes/cookies.txt'):
File "/home/tim/.local/lib/python3.9/site-packages/facebook_scraper/facebook_scraper.py", line 1114, in _generic_get_posts
for i, page in zip(counter, iter_pages_fn()):
File "/home/tim/.local/lib/python3.9/site-packages/facebook_scraper/page_iterators.py", line 87, in generic_iter_pages
response = request_fn(next_url)
File "/home/tim/.local/lib/python3.9/site-packages/facebook_scraper/facebook_scraper.py", line 927, in get
raise exceptions.UnexpectedResponse("Your request couldn't be processed")
facebook_scraper.exceptions.UnexpectedResponse: Your request couldn't be processed
And here is my code:
import json
import time
from facebook_scraper import get_posts, set_user_agent
set_user_agent("Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36")
pages = [
    '205332729555617',
    '195392310760',
]
for pageID in pages:
    for post in get_posts(pageID, pages=1, cookies='/mnt/storage/Dropbox/Apps/Instamemes/cookies.txt'):
        print(json.dumps(post, indent=4, sort_keys=True, default=str))
        # avoid getting banned
        time.sleep(2)
Any ideas?
Thanks!
Maybe you should set pages to more than 2; I think that will work.
@jeffsnack Yeah - that's what fixed it. Never had to request more than one page in the past, but glad it's working now.
Here is my code and error message.
I think maybe I was requesting too frequently, so fb blocked me for a while?
Any suggestions?
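If Facebook really is rate-limiting, retrying with exponential backoff sometimes helps. A minimal stdlib-only sketch (with_backoff and flaky are hypothetical names; a real call would wrap the get_posts loop and catch UnexpectedResponse specifically):

```python
import time

def with_backoff(fn, retries=5, base_delay=1.0):
    """Call fn(); on failure sleep base_delay * 2**attempt, then retry."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: re-raise the last error
            time.sleep(base_delay * 2 ** attempt)

# Demonstration with a stand-in that fails twice, then succeeds:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise Exception("Your request couldn't be processed")
    return "ok"

print(with_backoff(flaky, base_delay=0.01))  # ok
```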