DIGITALCRIMINAL / ArchivedUltimaScraper

Scrape content from OnlyFans and Fansly
GNU General Public License v3.0

Getting traceback when scraping. This behavior started today #516

Closed. mediaburnwayne closed this issue 1 year ago.

mediaburnwayne commented 2 years ago

Scraping Subscriptions
Scrape Processing
Name: ofcreator
Type: Stories

Traceback (most recent call last):
  File "M:\OnlyFans\start_ofd.py", line 66, in <module>
    asyncio.run(main())
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 646, in run_until_complete
    return future.result()
  File "M:\OnlyFans\start_ofd.py", line 52, in main
    api = await main_datascraper.start_datascraper(config, site_name)
  File "M:\OnlyFans\datascraper\main_datascraper.py", line 131, in start_datascraper
    await default(datascraper)
  File "M:\OnlyFans\datascraper\main_datascraper.py", line 101, in default
    await main_helper.process_jobs(datascraper, subscription_list, site_settings)
  File "M:\OnlyFans\helpers\main_helper.py", line 1040, in process_jobs
    await datascraper.start_datascraper(authed, subscription.username)
  File "M:\OnlyFans\modules\module_streamliner.py", line 84, in start_datascraper
    await self.prepare_scraper(subscription, content_type)
  File "M:\OnlyFans\modules\module_streamliner.py", line 232, in prepare_scraper
    master_set.extend(await self.datascraper.get_all_stories(subscription))
  File "M:\OnlyFans\modules\onlyfans.py", line 388, in get_all_stories
    highlights = await subscription.get_highlights()
  File "M:\OnlyFans\apis\onlyfans\classes\user_model.py", line 283, in get_highlights
    results = [create_highlight(x) for x in results]
  File "M:\OnlyFans\apis\onlyfans\classes\user_model.py", line 283, in <listcomp>
    results = [create_highlight(x) for x in results]
  File "M:\OnlyFans\apis\onlyfans\classes\hightlight_model.py", line 3, in __init__
    self.id: int = option.get("id")
AttributeError: 'str' object has no attribute 'get'

Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x000001A639B7DEA0>
Traceback (most recent call last):
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 116, in __del__
    self.close()
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 108, in close
    self._loop.call_soon(self._call_connection_lost, None)
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 750, in call_soon
    self._check_closed()
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 515, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed

Troll180 commented 2 years ago

Not going to be much help, but I can confirm the exact same issue; everything was working perfectly as recently as ~15h ago.

JamiDEV commented 2 years ago

I'm having the same issue.

guynextdoorza commented 2 years ago

Same issue here too.

jp1151419 commented 2 years ago

I ran `python -v start_ofd.py` and it's only the Stories API that's causing issues for me. You can change your config file to `"auto_api_choice": "2,3,4,5,6,7"` instead of `"auto_api_choice": true`.

Edit: I tried a couple of builds because past issues were avoided by using an older version; this fix works using build e199a04. Edit 2: for clarification, this works with the most recent commit (85aec02) as well.
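For anyone editing the file directly, the whole change is this one value in settings\config.json (both lines are verbatim from the comment above; the numbers presumably select every entry in the API scrape menu except Stories):

```
- "auto_api_choice": true,
+ "auto_api_choice": "2,3,4,5,6,7",
```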

JamiDEV commented 2 years ago

> I ran `python -v start_ofd.py` and it's only the Stories API that's causing issues for me. You can change your config file to `"auto_api_choice": "2,3,4,5,6,7"` instead of `"auto_api_choice": true`.
>
> Edit: I tried a couple of builds because past issues were avoided by using an older version; this fix works using build e199a04.

I'ma give this a shot.

JamiDEV commented 2 years ago

@jp1151419 It worked! Good stuff, bro.

naodrej commented 2 years ago

> I ran `python -v start_ofd.py` and it's only the Stories API that's causing issues for me. You can change your config file to `"auto_api_choice": "2,3,4,5,6,7"` instead of `"auto_api_choice": true`.
>
> Edit: I tried a couple of builds because past issues were avoided by using an older version; this fix works using build e199a04.

This didn't work for me. I'm using v7.6.1. Do you know a solution for that one?

JamiDEV commented 2 years ago

> > I ran `python -v start_ofd.py` and it's only the Stories API that's causing issues for me. You can change your config file to `"auto_api_choice": "2,3,4,5,6,7"` instead of `"auto_api_choice": true`. Edit: I tried a couple of builds because past issues were avoided by using an older version; this fix works using build e199a04.
>
> This didn't work for me. I'm using v7.6.1. Do you know a solution for that one?

You might have to roll back a few versions; I'm using v7.4.1.

jp1151419 commented 2 years ago

@naodrej @JamiDEV

I just tried with 85aec02, the most recent commit, and it works with that as well.

guynextdoorza commented 2 years ago

> @naodrej @JamiDEV
>
> I just tried with 85aec02, the most recent commit, and it works with that as well.

Yeah, this worked for me on the latest commit too.

Troll180 commented 2 years ago

Changing auto_api_choice worked for me, as I'm too much of a noob to roll back my version :) Thanks for the help, guys!

JamiDEV commented 2 years ago

> > @naodrej @JamiDEV I just tried with 85aec02, the most recent commit, and it works with that as well.
>
> Yeah, this worked for me on the latest commit too.

Correction: I am not using 7.4, I'm using 7.6. It has worked for both.

aboredpervert commented 2 years ago

It seems they are now returning the data wrapped in a `{data: ..., hasMore: false}` object.
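If that's the change, it also explains the traceback at the top of this issue: iterating over a dict in Python yields its keys, so the highlight list comprehension now passes the strings "data" and "hasMore" to create_highlight instead of the highlight dicts. A minimal sketch of the failure mode (the values are made up; only the envelope shape comes from the comment above):

```python
# Old response shape: a plain list of highlight dicts.
old_results = [{"id": 101}, {"id": 102}]

# New response shape: the same list wrapped in an envelope.
new_results = {"data": [{"id": 101}, {"id": 102}], "hasMore": False}

# Iterating a dict yields its keys (strings), so code that expects
# a list of dicts ends up calling .get() on "data" and "hasMore":
for option in new_results:
    option.get("id")  # AttributeError: 'str' object has no attribute 'get'
```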

JamiDEV commented 2 years ago

> It seems they are now returning the data wrapped in a `{data: ..., hasMore: false}` object.

Is this a new error?

RonnieBlaze commented 2 years ago

Well, I've tried 7.6.1, 7.6, and the most recent commit, and I'm still getting the error; after so many attempts I get logged out and have to redo my auth file.

mediaburnwayne commented 2 years ago

That seems to have done the trick.

halibaxgravlax commented 2 years ago

For what it's worth: I was not able to get this to work by setting `"auto_api_choice": "2,3,4,5,6,7"`; however, I set it to `"auto_api_choice": false`, manually selected the API when prompted by start_ofd.py, and that seemed to do the trick.

For reference, I am using 7.6.1.

RonnieBlaze commented 2 years ago

> For what it's worth: I was not able to get this to work by setting `"auto_api_choice": "2,3,4,5,6,7"`; however, I set it to `"auto_api_choice": false`, manually selected the API when prompted by start_ofd.py, and that seemed to do the trick.
>
> For reference, I am using 7.6.1.

This is working for me, running 7.6 or 7.6.1; it's just a bit tedious. For now I won't do an entire scrape of everyone I follow, just one at a time.

xxx-uploaded commented 2 years ago

> I ran `python -v start_ofd.py` and it's only the Stories API that's causing issues for me. You can change your config file to `"auto_api_choice": "2,3,4,5,6,7"` instead of `"auto_api_choice": true`.
>
> Edit: I tried a couple of builds because past issues were avoided by using an older version; this fix works using build e199a04. Edit 2: for clarification, this works with the most recent commit (85aec02) as well.

`"version": 7.2` https://prnt.sc/t7iJc8FW2JAT

JamiDEV commented 2 years ago

> > For what it's worth: I was not able to get this to work by setting `"auto_api_choice": "2,3,4,5,6,7"`; however, I set it to `"auto_api_choice": false`, manually selected the API when prompted by start_ofd.py, and that seemed to do the trick. For reference, I am using 7.6.1.
>
> This is working for me, running 7.6 or 7.6.1; it's just a bit tedious. For now I won't do an entire scrape of everyone I follow, just one at a time.

I tried this method and it works as well.

drobb0690 commented 2 years ago

I started getting this error as well today and switched auto_api_choice to false, and it works; the downside is having to manually select each API for each model, unless someone knows a way around this. Also, how do I know what version/commit etc. I'm using?

jp1151419 commented 2 years ago

@drobb0690 That's why I suggested people change to `"auto_api_choice": "2,3,4,5,6,7"`; it'll let you scrape all the other content until Stories are fixed by Digital Criminals.

In the config file you can see what version you're running (7.6, 8.0, etc.), but for the build versions I actually take note when I upgrade, so I can try another when an issue arises.

Since there are still ways of making the latest work, I'd recommend you run the update; then you'll know you're on 85aec02.

RonnieBlaze commented 2 years ago

My config says version 7.2, and I know for a fact that I am not using 7.2. I tried running updater.py and it broke mine from even running.

Downloading the latest commit, upgrading Python to 3.10.5, and running `pip install poetry` & `poetry install --no-dev` in a fresh folder gets me this:

F:\Users\xxx\Downloads\OnlyFans - Test>python start_ofd.py
Traceback (most recent call last):
  File "F:\Users\xxx\Downloads\OnlyFans - Test\start_ofd.py", line 9, in <module>
    from helpers.main_helper import OptionsFormat
  File "F:\Users\xxx\Downloads\OnlyFans - Test\helpers\main_helper.py", line 23, in <module>
    import orjson
ModuleNotFoundError: No module named 'orjson'

Not sure if a reboot will help after the Python upgrade; I'll try it later.
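The orjson error means the dependencies aren't installed for the Python that's actually running the script. With the Poetry route from the tutorial, `poetry install` puts everything into a Poetry-managed virtual environment by default, so the script then has to be launched through that environment too; a sketch of the full sequence, assuming default Poetry settings:

```
pip install poetry
poetry install --no-dev
poetry run python start_ofd.py
```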

naodrej commented 2 years ago

In the file apis\onlyfans\classes\create_user.py, line 270, and in apis\onlyfans\classes\user_model.py, line 283, change:

results = [create_highlight(x) for x in results]

to:

results = [create_highlight(x) for x in results.get("data")]

You don't need to change your auto_media_choice or auto_api_choice.
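If you'd rather not care which shape the endpoint returns, a slightly more defensive variant of that line (just a sketch, not an official patch) unwraps the envelope only when it's present:

```python
# Accept both the old (plain list) and the new
# ({"data": [...], "hasMore": ...}) response shapes.
if isinstance(results, dict):
    results = results.get("data") or []
results = [create_highlight(x) for x in results]
```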

JamiDEV commented 2 years ago

> In the file apis\onlyfans\classes\create_user.py, line 270, and in apis\onlyfans\classes\user_model.py, line 283, change `results = [create_highlight(x) for x in results]` to `results = [create_highlight(x) for x in results.get("data")]`.
>
> You don't need to change your auto_media_choice or auto_api_choice.

This is the best fix for the issue.

drobb0690 commented 2 years ago

> results = [create_highlight(x) for x in results.get("data")]

This worked for me as well.

@jp1151419 I'm running 7.2, it seems, and every time I've run the updater it has, as someone else said, ruined the whole install, and I then have to redo everything and figure out what went wrong, so I just stopped; I tend to only grab the full releases when the one I'm using stops working.

JamiDEV commented 2 years ago

> > results = [create_highlight(x) for x in results.get("data")]
>
> This worked for me as well.
>
> @jp1151419 I'm running 7.2, it seems, and every time I've run the updater it has, as someone else said, ruined the whole install, and I then have to redo everything and figure out what went wrong, so I just stopped; I tend to only grab the full releases when the one I'm using stops working.

I stopped using the updater unless scraper functions are broken; it always seems to be trivial Python errors that cause the issue.

RonnieBlaze commented 2 years ago

> I stopped using the updater unless scraper functions are broken; it always seems to be trivial Python errors that cause the issue.

From what I can tell, the newest commit needs Poetry installed (it's listed under the Mandatory Tutorial), so when you run the updater it grabs the newest files, but since you don't have Poetry installed it doesn't run. Also, you need Python 3.10.1 or higher to install Poetry. I have tried installing everything and updating Python to the newest 3.10.5, but I still cannot get the most current commit to work.

SupaStar commented 2 years ago

> I ran `python -v start_ofd.py` and it's only the Stories API that's causing issues for me. You can change your config file to `"auto_api_choice": "2,3,4,5,6,7"` instead of `"auto_api_choice": true`.
>
> Edit: I tried a couple of builds because past issues were avoided by using an older version; this fix works using build e199a04. Edit 2: for clarification, this works with the most recent commit (85aec02) as well.

This worked for me; change it in settings > config.json.

cuenta111 commented 2 years ago

Can anyone message me on Discord at johndoe001#2213 and help me run the scraper? I tried the solutions above; the scraper runs but doesn't download a single pic or video from the profile and says scraping finished.

cuenta111 commented 2 years ago

Getting this error when I try to create the .json and config file:

c:\of>python start_ofd.py
Traceback (most recent call last):
  File "c:\of\start_ofd.py", line 9, in <module>
    from helpers.main_helper import OptionsFormat
  File "c:\of\helpers\main_helper.py", line 21, in <module>
    import classes.make_settings as make_settings
  File "c:\of\classes\make_settings.py", line 6, in <module>
    from yarl import URL
ModuleNotFoundError: No module named 'yarl'

Please help me solve this issue.
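This is the same class of failure as the orjson error further up: the dependencies aren't installed for the Python you're invoking. Installing the missing package directly gets past this particular import, though more ModuleNotFoundErrors may follow until the full dependency set from the setup tutorial is in place:

```
pip install yarl
```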

xxx-uploaded commented 2 years ago

> Can anyone message me on Discord at johndoe001#2213 and help me run the scraper? I tried the solutions above; the scraper runs but doesn't download a single pic or video from the profile and says scraping finished.

The same problem; even expired profiles are added to the parsing.

MonkeyKingViper commented 2 years ago

#518 has the fix. Don't turn your auto_api_choice off.

xxx-uploaded commented 2 years ago

Choose Subscriptions: 0 = All | 1 = sexylexxxyp
0
Scraping Subscriptions
Scrape Processing
Name: sexylexxxyp
Type: Stories
Type: Posts
Scrape Attempt: 1/100
Type: Archived Posts
Scrape Attempt: 1/100
Processing Scraped Posts
100%|████…| 8510/8510 [00:06<00:00, 1331.02it/s]
Processing metadata. Finished processing metadata.
Renaming files.
100%|████…| 8510/8510 [00:27<00:00, 309.57it/s]
Type: Archived
Type: Chats
Type: Messages
Processing Scraped Messages
100%|████…| 6/6 [00:00<00:00, 315.50it/s]
Processing metadata. Finished processing metadata.
Renaming files.
100%|████…| 4/4 [00:00<00:00, 249.77it/s]
Type: Highlights
Type: MassMessages
Scrape Completed

Scraping Paid Content
Scraping - sexylexxxyp | 1 / 1
Processing Scraped Posts
100%|████…| 8510/8510 [00:06<00:00, 1332.83it/s]
Processing metadata. Finished processing metadata.
Renaming files.
100%|████…| 8510/8510 [00:26<00:00, 324.21it/s]
Processing Scraped Posts
100%|████…| 1191/1191 [00:01<00:00, 973.81it/s]
Processing metadata. Finished processing metadata.
Renaming files.
100%|████…| 8510/8510 [00:32<00:00, 261.75it/s]
Processing Scraped Messages
100%|████…| 6/6 [00:00<00:00, 333.02it/s]
Processing metadata. Finished processing metadata.
Renaming files.
100%|████…| 4/4 [00:00<00:00, 285.43it/s]

Scraping Message Content
Scrape Processing
Name: ladiepop
Type: Messages
Processing Scraped Messages
100%|████…| 2/2 [00:00<00:00, 1997.29it/s]
Processing metadata. Finished processing metadata.
Renaming files.
100%|████…| 1/1 [00:00<?, ?it/s]
Scrape Completed

Scrape Processing
Name: cupcakecutie_98
Type: Messages
Processing Scraped Messages
100%|████…| 2/2 [00:00<00:00, 64.46it/s]
Processing metadata. Finished processing metadata.
Renaming files.
100%|████…| 1/1 [00:00<00:00, 62.44it/s]
Scrape Completed

Scrape Processing
Name: justjessicaxxxtrial
Type: Messages
Processing Scraped Messages
100%|████…| 2/2 [00:00<00:00, 999.12it/s]
Processing metadata. Finished processing metadata.
Renaming files.
100%|████…| 1/1 [00:00<00:00, 999.12it/s]
Scrape Completed

Scrape Processing
Name: angeliyarose
Type: Messages
Processing Scraped Messages
100%|████…| 15/15 [00:00<00:00, 7496.08it/s]
Processing metadata. Finished processing metadata.
Renaming files.
100%|████…| 8/8 [00:00<?, ?it/s]
Scrape Completed

Please help me. The same problem: even expired profiles are added to the parsing.

guynextdoorza commented 2 years ago

> Please help me. The same problem: even expired profiles are added to the parsing.

Check your config.json file. If, under "jobs" > "scrape", you have "messages" set to true, it will scrape all your messages, including those from expired profiles. You only need to set this to true if you want to scrape your messaging history with every profile you've subscribed to, currently and historically. Setting it to false but keeping "subscriptions" set to true will scrape all your active profiles (including messages from them).

xxx-uploaded commented 2 years ago

> Check your config.json file. If, under "jobs" > "scrape", you have "messages" set to true, it will scrape all your messages, including those from expired profiles. You only need to set this to true if you want to scrape your messaging history with every profile you've subscribed to, currently and historically. Setting it to false but keeping "subscriptions" set to true will scrape all your active profiles (including messages from them).

"jobs": {
    "scrape": {
        "subscriptions": true,
        "messages": false,
        "paid_content": true
    },

Thanks!