UltimaHoarder / UltimaScraper

Scrape all the media from an OnlyFans account - Updated regularly
GNU General Public License v3.0

Session Lock | Please refresh the page error #1078

Closed · Macmasteri closed 3 years ago

Macmasteri commented 3 years ago

Running the latest version and got this error:

Auth (V1) Attempt 1/10
Please refresh the page
Auth (V1) Attempt 2/10
Please refresh the page
Auth (V1) Attempt 3/10
Please refresh the page
Auth (V1) Attempt 4/10
Please refresh the page
Auth (V1) Attempt 5/10
Please refresh the page
Auth (V1) Attempt 6/10
Please refresh the page
Auth (V1) Attempt 7/10
Please refresh the page
Auth (V1) Attempt 8/10
Please refresh the page
Auth (V1) Attempt 9/10
Please refresh the page
Auth (V1) Attempt 10/10
Please refresh the page

NytroDev commented 3 years ago

Tried porting it to C#, but it's still returning a 400 error. Can anyone point out the mistake? I see nothing wrong with the code (screenshots attached).
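
For anyone else porting this: below is a rough Python sketch of the signing scheme as the community reverse engineered it around this time (the real logic lives in the repo's create_sign). Every constant here is a placeholder, and details like whether `time` is in seconds or milliseconds have changed between rule updates, so treat this as an illustration only and pull current values from the project's dynamic rules:

```python
import hashlib
import time

# All constants below are placeholders, NOT real values: the actual
# static_param, checksum indexes/constant, and sign prefix/suffix rotate
# with every OnlyFans update.
STATIC_PARAM = "replace_me"
CHECKSUM_INDEXES = [0, 3, 7, 12]   # placeholder indexes into the hex digest
CHECKSUM_CONSTANT = 0              # placeholder additive constant
SIGN_FORMAT = "0000:{}:{:x}:0000"  # placeholder prefix/suffix

def create_signed_headers(path: str, user_id: str = "0") -> dict:
    """Build the `sign` and `time` headers for one request.

    `path` must include the query string, and `user_id` must be the
    authenticated user's id ("0" only works for unauthenticated calls).
    """
    unixtime = str(int(time.time()))
    msg = "\n".join([STATIC_PARAM, unixtime, path, user_id])
    sha_hash = hashlib.sha1(msg.encode("utf-8")).hexdigest()
    checksum = sum(ord(sha_hash[i]) for i in CHECKSUM_INDEXES) + CHECKSUM_CONSTANT
    return {"sign": SIGN_FORMAT.format(sha_hash, abs(checksum)), "time": unixtime}
```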

LunarPenguin commented 3 years ago

Congrats to everyone working on this 🎉

valdearg commented 3 years ago

Tried the new version. Is anyone else not getting all their models? I only get 4 out of about 14.

techbot2342 commented 3 years ago

I'm a newbie and just got this set up this morning, only to encounter this error, and I don't know what the fix is.

LunarPenguin commented 3 years ago

Update to the newest version; it just got fixed 5 minutes ago.

ghost commented 3 years ago

> Tried the new version. Is anyone else not getting all their models? I only get 4 out of about 14.

Same for me; I only get 10 of 30.

pzrd commented 3 years ago

> Tried the new version. Is anyone else not getting all their models? I only get 4 out of about 14.

Same here. Only got 16 when I'm subbed to more than 30. Worst of all, one of my subs expires tomorrow, but that chick isn't in the list that's getting scraped. Oof

ghost commented 3 years ago

@pzrd

Maybe check if you can add the ones in the list to the blacklist, to see if it will pull different models. I'm currently still downloading, otherwise I would check myself.

Edit:

Tried it, did not work.

UltimaHoarder commented 3 years ago

The latest commit should work. Thanks to everyone who helped along the way with ideas; now we can get back to scraping 🥰 Congrats on the reward @hippothon 😏 and thanks for the ez PR @kotori2 ❤️

It's kinda cringe that they decided to launch it on May 1st (the start of the month), like the update was going to change anything lmao. It was only a matter of time before the community RE'ed it. We'll always prevail, just like last time: https://github.com/DIGITALCRIMINAL/OnlyFans/issues/341

Honestly, I don't know why the devs bother. They've tried numerous times to block browser extensions using keyword filters; it really was a pitiful attempt to try to win a battle on the front end. They've even sent out DMCA claims to the developers; the majority of them laughed it off and kept coding. I'm guessing their manager was pressuring them to find solutions to their losses.

lmao.

CheeseburgerWithFries commented 3 years ago

I just updated to the new master version and I'm only getting 2 of 22.

RonnieBlaze commented 3 years ago

It's definitely not scraping all of my models. It says I have 35 in my list, and I know I'm closer to 80 models that I scrape.

garrett1415 commented 3 years ago

> Okay. I renamed the naomie folder and downloaded the original from OF again. This time it gave me 313 pics and 98 vids. I didn't make any changes to settings. Anyone else having this issue?

Having the same issue as you. Ran the scraper twice and got fewer than 10 photos/videos; the model has 100+ of each.

ghost commented 3 years ago

Thanks to everyone who has joined and worked together to get this fixed; you are all seriously smart people!

I'm also having the same problem when trying to scrape: it's not downloading all pictures/videos.

NovaResonance commented 3 years ago

For me, it's not finding any models so far; I get this error instead (I updated both the folder and the session details):

Scraping Paid Content
Scraping Subscriptions
There's nothing to scrape.
Archive Completed in 0.03 Minutes
Pausing scraper for 0 seconds.

Searched for this error in the previous issues and none of the earlier fixes worked. Great work so far though; looks like a doozy to fix.

LunarPenguin commented 3 years ago

I don't know if you already tried this, but maybe try disabling paid content scraping in the config ("scrape_paid_content": false). Paid content should still be scraped by the normal scraping behavior anyway; it just doesn't get scraped at the start anymore.
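
For anyone unsure where that lives: it's in config.json. A minimal sketch of the relevant fragment; the "scrape_paid_content" key is the one named above, but the surrounding structure is from memory and may differ between versions:

```json
{
  "settings": {
    "scrape_paid_content": false
  }
}
```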

KernelPanicNT commented 3 years ago

Same behaviour here: partial scraping after the update. Ouch!

RonnieBlaze commented 3 years ago

Here is what I have noticed on my end when running the script:

- It logs in fine.
- It scrapes, but only about 50% of the models it should; it comes back with 35 of about ~85 models.
- I told it to scrape a model that I know just posted a photo; it scrapes but does not download anything.

pzrd commented 3 years ago

> I don't know if you already tried this, but maybe try disabling paid content scraping in the config ("scrape_paid_content": false). Paid content should still be scraped by the normal scraping behavior anyway; it just doesn't get scraped at the start anymore.

This worked for me the first time I ran the script after changing "scrape_paid_content" to false, but every subsequent scrape reverted back to the same limited behavior. It still only did a partial scrape, sadly: 1180/2048 photos and 67/110 videos.

oftrash commented 3 years ago

If this helps.

FYI, on the 27th of last month OnlyFans changed their API to set a max request limit. They also changed the limit param to always be 10 no matter what you pass into it; you used to be able to max it out at 100.

If you make too many requests, you get back status code 429.
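
In practice that means paging in steps of 10 and backing off on 429. A rough sketch of the idea (not the project's actual code; the payload shape and sleep time are assumptions, and every request would still need the signed headers):

```python
import time
import requests

def fetch_all(session: requests.Session, url: str) -> list:
    """Page through an endpoint whose `limit` is now capped at 10,
    sleeping and retrying whenever the server answers 429."""
    items, offset = [], 0
    while True:
        resp = session.get(url, params={"limit": 10, "offset": offset})
        if resp.status_code == 429:
            time.sleep(30)   # arbitrary backoff; tune to taste
            continue
        resp.raise_for_status()
        batch = resp.json().get("list", [])   # assumed payload shape
        if not batch:
            return items
        items.extend(batch)
        offset += 10
```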

LunarPenguin commented 3 years ago

> If you make too many requests, you get back status code 429.

That's a Cloudflare response code; depending on how the rate limit is configured, it might be possible to bypass it with something like cloudscraper.
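
If anyone wants to test that theory, cloudscraper is a drop-in replacement for a plain requests session; whether it actually helps against this particular limit is untested, and the signed headers would still be required on top:

```python
import cloudscraper

# create_scraper() returns a requests.Session subclass that transparently
# solves Cloudflare's anti-bot challenges before handing the response back.
scraper = cloudscraper.create_scraper()
resp = scraper.get("https://onlyfans.com/api2/v2/streams/feed?limit=10")
print(resp.status_code)
```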

DonaldTPP commented 3 years ago

For some reason I'm still getting Access Denied for the first attempt and then Session Locked for the other 9. It logs me out of the browser, with the 15-second wait screen in between.

RonnieBlaze commented 3 years ago

just changed "max_threads": from 4 to 1 in the config and that seems to have scraped all 85 of the models that i follow, when it was set to 4 it got 35.
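
A guess at the why, with a toy sketch (this is not the project's actual code): the scraper presumably fans work out over a thread pool, so max_threads controls how many API calls are in flight at once, and with the new server-side limits only one-at-a-time reliably stays under them:

```python
from concurrent.futures import ThreadPoolExecutor

def scrape_model(model: str) -> str:
    # placeholder for one model's worth of signed API requests
    return model

models = ["model_a", "model_b", "model_c"]

# max_workers plays the role of config.json's "max_threads": with 1 the
# requests run one at a time instead of concurrently.
with ThreadPoolExecutor(max_workers=1) as pool:
    results = list(pool.map(scrape_model, models))
```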

UltimaHoarder commented 3 years ago

I'm scraping with 32 threads with a limit param of 100 and I rarely get those errors. The only time I was rate limited was when I was using a proxy, which is why I added this commit:

https://github.com/DIGITALCRIMINAL/OnlyFans/commit/3dd1b0b3d8e995c5f438b13186e5cca408c9699c

iamnotdev commented 3 years ago

Hey @DIGITALCRIMINAL. I'm failing to create a sign for this path: https://onlyfans.com/api2/v2/streams/feed?limit=10. Can you check if the newly found fix works for this path?

ghost commented 3 years ago

Can confirm that if max_threads is set to -1, I only see 10 models; with 32 it's the same. If I set it to 1, I see all the models. Currently downloading everything to see if it gets all pictures and videos.

ghost commented 3 years ago

Updated to the newest version but I'm still getting this same error. Any thoughts?

ghost commented 3 years ago

@formulabaritone

Did you set "x_bc" in your auth.json?

UltimaHoarder commented 3 years ago

To touch on x_bc: before this update, we didn't need to include it. We only needed to include it if we were using a browser, but now it's required for API requests. You can leave x_bc blank if you want and requests should still work.
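
For those asking how to set it: x_bc lives in auth.json alongside your cookie and user agent, and its value can usually be copied from the x-bc entry in your browser's local storage (dev tools), or left blank as noted above. A sketch; the fields other than "x_bc" are from memory and may differ between versions:

```json
{
  "auth": {
    "cookie": "auth_id=...; sess=...;",
    "x_bc": "",
    "user_agent": "Mozilla/5.0 ..."
  }
}
```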

UltimaHoarder commented 3 years ago

> Hey @DIGITALCRIMINAL. I'm failing to create a sign for this path: https://onlyfans.com/api2/v2/streams/feed?limit=10. Can you check if the newly found fix works for this path?

It should work for all paths that get submitted to your create_sign function

DonaldTPP commented 3 years ago

Maybe I'm having a dumb moment, but I cannot get it to work, remotely or locally. Once I add the data needed to log in, the first auth attempt gives me "Access Denied", the other 9 say "Session Locked", and then it logs me out of everything, including the browser, and takes me to the 15-second refreshing page.

UltimaHoarder commented 3 years ago

> Maybe I'm having a dumb moment, but I cannot get it to work, remotely or locally. Once I add the data needed to log in, the first auth attempt gives me "Access Denied", the other 9 say "Session Locked", and then it logs me out of everything, including the browser, and takes me to the 15-second refreshing page.

Do you have 2FA?

RonnieBlaze commented 3 years ago

> I'm scraping with 32 threads with a limit param of 100 and I rarely get those errors. The only time I was rate limited was when I was using a proxy, which is why I added this commit:
>
> 3dd1b0b

Can confirm that when I change the threads to anything but 1, I do not get a full scrape of the 85 models:

4 threads got 35 of 85 models
2 threads got 69 of 85 models
1 thread gets all 85 models

Any idea why you are able to use 32 threads?

iamnotdev commented 3 years ago

> Hey @DIGITALCRIMINAL. I'm failing to create a sign for this path: https://onlyfans.com/api2/v2/streams/feed?limit=10. Can you check if the newly found fix works for this path?
>
> It should work for all paths that get submitted to your create_sign function

I found my mistake: I was submitting "0" instead of the user id (see this line: https://github.com/DIGITALCRIMINAL/OnlyFans/blob/master/apis/onlyfans/onlyfans.py#L28)

UltimaHoarder commented 3 years ago

> Hey @DIGITALCRIMINAL. I'm failing to create a sign for this path: https://onlyfans.com/api2/v2/streams/feed?limit=10. Can you check if the newly found fix works for this path?
>
> It should work for all paths that get submitted to your create_sign function
>
> I found my mistake: I was submitting "0" instead of the user id (see this line: https://github.com/DIGITALCRIMINAL/OnlyFans/blob/master/apis/onlyfans/onlyfans.py#L28)

Ahh okay, thanks for telling us. I'll change that; it might fix some problems.

DonaldTPP commented 3 years ago

> Maybe I'm having a dumb moment, but I cannot get it to work, remotely or locally. Once I add the data needed to log in, the first auth attempt gives me "Access Denied", the other 9 say "Session Locked", and then it logs me out of everything, including the browser, and takes me to the 15-second refreshing page.
>
> Do you have 2FA?

No. Nor do I use a proxy for my scrapes. I'm going to try something else and I will let you know if it lets me in.

ImpalaPUA commented 3 years ago

line 41, in <module>
    site_name = site_names[x]
IndexError: list index out of range

This may or may not be related to the ongoing issue, but I farmed free trials and am subscribed to ~1000 OF pages.

After it scrapes them all, I type in 12 for the desired OF page, and it prints that error.

UltimaHoarder commented 3 years ago

> line 41, in <module>
>     site_name = site_names[x]
> IndexError: list index out of range
>
> This may or may not be related to the ongoing issue, but I farmed free trials and am subscribed to ~1000 OF pages.
>
> After it scrapes them all, I type in 12 for the desired OF page, and it prints that error.

Fixing something similar atm

DonaldTPP commented 3 years ago

Okay, I changed my browser from Firefox to Chrome, and it let me in. I've tried different thread values but can only get all 140 models when it's set to "1".

Now testing to see if it gets all the content or just bits and pieces like people have reported.

Edit: Can confirm it is downloading everything that is supposed to be downloaded, but it is stalling on some models here and there, not getting past "Type: Posts Scrape Attempt: 1/100".

DonaldTPP commented 3 years ago

> line 41, in <module>
>     site_name = site_names[x]
> IndexError: list index out of range
>
> This may or may not be related to the ongoing issue, but I farmed free trials and am subscribed to ~1000 OF pages.
>
> After it scrapes them all, I type in 12 for the desired OF page, and it prints that error.

Some advice for the future: make one OF account for your paid subscriptions and another (however many you need) for redeeming free OF pages. It makes things a little easier, though it's not as ideal as having them all in one account.

mwald84 commented 3 years ago

I'm not sure if I'm having the same issue. I get a Session lock error. I updated to the current master (I was on the previous release), but this is the output:

Auth (V1) Attempt 1/10
Session lock
Auth (V1) Attempt 2/10
Session lock
Auth (V1) Attempt 3/10
Session lock
Auth (V1) Attempt 4/10
Session lock
Auth (V1) Attempt 5/10
Session lock
Auth (V1) Attempt 6/10
Session lock
Auth (V1) Attempt 7/10
Session lock
Auth (V1) Attempt 8/10
Session lock
Auth (V1) Attempt 9/10
Session lock
Auth (V1) Attempt 10/10
Session lock
Scraping Paid Content
Scraping Subscriptions
There's nothing to scrape.
Archive Completed in 0.03 Minutes
Now exiting.

I also updated the auth.json and set max_threads to 1.

Also, after trying a few times, I get kicked off OF in my browser and have to log in again.

What else can I try?

Thanks

Mark

btkador commented 3 years ago

@DIGITALCRIMINAL curious, what does that "# Users: 300000 | Creators: 301000" mean?

Porting the sign part to Golang now.

UltimaHoarder commented 3 years ago

> I'm not sure if I'm having the same issue. I get a Session lock error. I updated to the current master (I was on the previous release), but this is the output:
>
> Auth (V1) Attempt 1/10
> Session lock
> Auth (V1) Attempt 2/10
> Session lock
> Auth (V1) Attempt 3/10
> Session lock
> Auth (V1) Attempt 4/10
> Session lock
> Auth (V1) Attempt 5/10
> Session lock
> Auth (V1) Attempt 6/10
> Session lock
> Auth (V1) Attempt 7/10
> Session lock
> Auth (V1) Attempt 8/10
> Session lock
> Auth (V1) Attempt 9/10
> Session lock
> Auth (V1) Attempt 10/10
> Session lock
> Scraping Paid Content
> Scraping Subscriptions
> There's nothing to scrape.
> Archive Completed in 0.03 Minutes
> Now exiting.
>
> I also updated the auth.json and set max_threads to 1.
>
> Also, after trying a few times, I get kicked off OF in my browser and have to log in again.
>
> What else can I try?
>
> Thanks
>
> Mark

Try clearing all sessions from your account. You wouldn't happen to be a creator?

> @DIGITALCRIMINAL curious, what does that "# Users: 300000 | Creators: 301000" mean?
>
> Porting the sign part to Golang now.

It's outdated now, but it had something to do with time multiplied by x. There was an issue where the user int wouldn't work for creators... but the creator int worked for both types of users.

mwald84 commented 3 years ago

> Try clearing all sessions from your account. You wouldn't happen to be a creator?

How/where do I clear all sessions?

And yes, I'm a creator.

More info: when I get kicked out, Session lock changes to Access denied and I need to update sess in the config; then it's back to Session lock. And when I refresh after I get the Session lock, I get the "We'll try your destination again in 15 seconds" message and it goes back to the login page.

I'm afraid I'm gonna get banned soon lol

RonnieBlaze commented 3 years ago

> Try clearing all sessions from your account. You wouldn't happen to be a creator?
>
> How/where do I clear all sessions?
>
> And yes, I'm a creator.
>
> More info: when I get kicked out, Session lock changes to Access denied and I need to update sess in the config; then it's back to Session lock. And when I refresh after I get the Session lock, I get the "We'll try your destination again in 15 seconds" message and it goes back to the login page.
>
> I'm afraid I'm gonna get banned soon lol

Make a 2nd account for testing, so you don't run the risk of your main account getting banned?

nappyapp commented 3 years ago

Is getting banned a risk with these scrapers? Like have they detected and banned people for it before?

Optimusprimeums commented 3 years ago

> I don't know if you already tried this, but maybe try disabling paid content scraping in the config ("scrape_paid_content": false). Paid content should still be scraped by the normal scraping behavior anyway; it just doesn't get scraped at the start anymore.
>
> This worked for me the first time I ran the script after changing "scrape_paid_content" to false, but every subsequent scrape reverted back to the same limited behavior. It still only did a partial scrape, sadly: 1180/2048 photos and 67/110 videos.

Also seeing this issue. On the site I can see 352 videos but I only download 75; images are the same. Any reason why this is the case? Can confirm that when I change max_threads to 1, I get all of the content.

ghost commented 3 years ago

> @formulabaritone
>
> Did you set "x_bc" in your auth.json?

Not entirely sure how to do this, since it didn't work when I tried. I'm very clueless about this stuff, so I'd appreciate it if anyone could offer a foolproof explanation.

ghost commented 3 years ago

I changed max_threads to 1 like others have mentioned, and it did work: it downloaded all of the pictures/videos for the model I was scraping. But now, on another account of mine that is subscribed to around 100 people (some paid, some free), when I try to scrape an individual model, I put in the number for that person, hit scrape everything, and then it starts scraping every model that that person is subscribed to. Does anyone know why that is happening? I don't understand it.

ghost commented 3 years ago

@DIGITALCRIMINAL Sir, I wanted to ask: has this issue been fixed?

bobsage123 commented 3 years ago

just changed "max_threads": from 4 to 1 in the config and that seems to have scraped all 85 of the models that i follow, when it was set to 4 it got 35.

I don't understand. I thought the issue was that people can't auth in the first place, like what I'm having? This didn't help, since I can't auth in.