Closed: mediaburnwayne closed this issue 1 year ago
Not going to help, but I can report the exact same issue; everything was working perfectly until about 15 hours ago.
I'm having the same issue.
same issue here too
I ran python -v start_ofd.py and it's only the Stories API which is causing issues for me. You can change your config file to "auto_api_choice": "2,3,4,5,6,7" instead of "auto_api_choice": true.
edit: I tried a couple of builds, because past issues were avoided by using an older version. This fix works using build e199a04.
edit2: For clarification, this works with the most recent commit (85aec02) as well.
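For anyone unsure what that change looks like, the relevant fragment of the settings file would be roughly the following (only the auto_api_choice key and values are from this thread; the surrounding JSON structure is an assumption about how the config is laid out):

```json
{
    "auto_api_choice": "2,3,4,5,6,7"
}
```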
ima give this a shot
@jp1151419 it worked! good stuff bro
This didn't work for me. I'm using v7.6.1. Do you know a solution for that one?
You might have to roll back a few versions; I'm using v7.4.1.
@naodrej @JamiDEV
I just tried with 85aec02, the most recent commit and it works with that as well.
Yeah this worked for me on the latest commit too.
Changing the auto_api_choice worked for me, as I'm too much of a noob to roll back my version :) Thanks for the help, guys!
Correction: I am not using 7.4, I'm using 7.6. It has worked for both.
It seems they are now returning the data wrapped into a {data: ..., hasMore: false} object.
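A minimal sketch of coping with both response shapes: the old bare list and the newer {"data": ..., "hasMore": false} wrapper described above. The function name is hypothetical; only the key names come from this thread.

```python
def unwrap_results(results):
    """Return the list of items whether the API sends a bare list
    or the newer {"data": [...], "hasMore": false} wrapper."""
    if isinstance(results, dict):
        # New shape: items live under the "data" key.
        return results.get("data", [])
    # Old shape: already a list, pass it through unchanged.
    return results

# Old shape: a bare list passes through untouched.
assert unwrap_results([{"id": 1}]) == [{"id": 1}]
# New shape: the wrapper is unwrapped to its "data" list.
assert unwrap_results({"data": [{"id": 1}], "hasMore": False}) == [{"id": 1}]
```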
is this a new error?
Well, I've tried 7.6.1, 7.6 and the most recent commit and I'm still getting the error, and after so many attempts I get logged out and have to redo my auth file.
That seems to have done the trick
For what it's worth, I was not able to get this to work by setting "auto_api_choice": "2,3,4,5,6,7".
However, I set it to "auto_api_choice": false and manually selected the API when prompted by start_ofd.py, and it seemed to do the trick.
For reference: I am using 7.6.1.
This is working for me, running 7.6 or 7.6.1; it's just a bit tedious. For now I won't do an entire scrape of all the people I follow, just one at a time.
"version": 7.2 https://prnt.sc/t7iJc8FW2JAT
I tried this method and this works as well.
I started getting this error as well today and switched auto_api_choice to false and it works; the downside is having to manually select each API for each model, unless someone knows a way around this. Also, how do I know what version/commit etc. I'm using?
@drobb0690 That's why I suggested people change to "auto_api_choice": "2,3,4,5,6,7": it'll allow you to scrape all the other content until Stories are fixed by Digital Criminals.
In the config file you can see what version you're running (7.6, 8.0, etc.), but for the build versions I actually take note when I upgrade, so I can try another when an issue arises.
Since there are still ways of making the latest work, I'd recommend you run the update; then you'll know you're on 85aec02.
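If your install is a git checkout, a general git command (not specific to this project) will tell you exactly which commit you're on, which answers the "how do I know what commit I'm using" question above:

```shell
# Print the abbreviated hash of the currently checked-out commit,
# e.g. 85aec02. Run this inside the project folder.
git rev-parse --short HEAD
```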
My config says version 7.2, and I know for a fact that I am not using 7.2. I tried running updater.py and it broke mine from even running.
Downloading the latest commit, upgrading Python to 3.10.5 and running "pip install poetry" & "poetry install --no-dev" in a fresh folder gets me this:
F:\Users\xxx\Downloads\OnlyFans - Test>python start_ofd.py
Traceback (most recent call last):
File "F:\Users\xxx\Downloads\OnlyFans - Test\start_ofd.py", line 9, in <module>
Not sure if a reboot will help after the Python upgrade; I'll try it later.
In the file apis\onlyfans\classes\create_user.py change line 270, and in apis\onlyfans\classes\user_model.py line 283:
results = [create_highlight(x) for x in results]
to
results = [create_highlight(x) for x in results.get("data")]
No need to change your auto_media_choice or auto_api_choice.
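A slightly more defensive variant of that patched line, in case the endpoint ever sends the old bare-list shape again. The create_highlight below is only a stand-in stub for illustration (the real one is a model class in the repo); the unwrapping logic is the point:

```python
def create_highlight(option):
    # Hypothetical stand-in for the real highlight model; it only shows the
    # option.get("id") access that fails when option is a string, not a dict.
    return {"id": option.get("id")}

def parse_highlights(results):
    # Unwrap {"data": [...]} if present; otherwise assume a bare list.
    items = results.get("data", []) if isinstance(results, dict) else results
    return [create_highlight(x) for x in items]

# Works with the new wrapped shape and the old bare list alike.
assert parse_highlights({"data": [{"id": 7}], "hasMore": False}) == [{"id": 7}]
assert parse_highlights([{"id": 7}]) == [{"id": 7}]
```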
this is the best fix for the issue
results = [create_highlight(x) for x in results.get("data")]
this worked for me as well
@jp1151419 I'm running 7.2 it seems, and every time I've ever run the updater it has, as someone else said, ruined the whole install, and I then have to redo everything and figure out what went wrong. So I just stopped, and I tend to only get the full releases when the one I'm using seems to stop working.
I stopped using the updater unless scraper functions are broken; it always seems like trivial Python errors that cause the issue.
From what I can tell, the newest commit needs Poetry installed (it's listed under the Mandatory Tutorial), so when you run the updater it grabs the newest files, but since you don't have Poetry installed it doesn't run. You also need Python 3.10.1 or higher to install Poetry. I have tried installing everything and updating Python to the newest 3.10.5, but I still can not get the most current commit to work.
This worked for me; change it in settings > config.json.
Can anyone message me on Discord at johndoe001#2213 and help me run the scraper? I tried the solutions above; the scraper runs but doesn't download a single pic or video from the profile and says scraping finished.
Getting this error when I try to create the .json and config file:
c:\of>python start_ofd.py
Traceback (most recent call last):
File "c:\of\start_ofd.py", line 9, in <module>
Please help me solve this issue.
The same problem, even expired profiles are added to the parsing
Choose Subscriptions: 0 = All | 1 = sexylexxxyp 0 Scraping Subscriptions Scrape Processing Name: sexylexxxyp Type: Stories Type: Posts Scrape Attempt: 1/100 Type: Archived Posts Scrape Attempt: 1/100 Processing Scraped Posts 100%|████████████████████████████████████████████████████████████████████████████| 8510/8510 [00:06<00:00, 1331.02it/s] Processing metadata. Finished processing metadata. Renaming files. 100%|█████████████████████████████████████████████████████████████████████████████| 8510/8510 [00:27<00:00, 309.57it/s] Type: Archived Type: Chats Type: Messages Processing Scraped Messages 100%|███████████████████████████████████████████████████████████████████████████████████| 6/6 [00:00<00:00, 315.50it/s] Processing metadata. Finished processing metadata. Renaming files. 100%|███████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 249.77it/s] Type: Highlights Type: MassMessages Scrape Completed
Scraping Paid Content Scraping - sexylexxxyp | 1 / 1 Processing Scraped Posts 100%|████████████████████████████████████████████████████████████████████████████| 8510/8510 [00:06<00:00, 1332.83it/s] Processing metadata. Finished processing metadata. Renaming files. 100%|█████████████████████████████████████████████████████████████████████████████| 8510/8510 [00:26<00:00, 324.21it/s] Processing Scraped Posts 100%|█████████████████████████████████████████████████████████████████████████████| 1191/1191 [00:01<00:00, 973.81it/s] Processing metadata. Finished processing metadata. Renaming files. 100%|█████████████████████████████████████████████████████████████████████████████| 8510/8510 [00:32<00:00, 261.75it/s] Processing Scraped Messages 100%|███████████████████████████████████████████████████████████████████████████████████| 6/6 [00:00<00:00, 333.02it/s] Processing metadata. Finished processing metadata. Renaming files. 100%|███████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 285.43it/s] Scraping Message Content Scrape Processing Name: ladiepop Type: Messages Processing Scraped Messages 100%|██████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1997.29it/s] Processing metadata. Finished processing metadata. Renaming files. 100%|████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s] Scrape Completed
Scrape Processing Name: cupcakecutie_98 Type: Messages Processing Scraped Messages 100%|████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 64.46it/s] Processing metadata. Finished processing metadata. Renaming files. 100%|████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 62.44it/s] Scrape Completed
Scrape Processing Name: justjessicaxxxtrial Type: Messages Processing Scraped Messages 100%|███████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 999.12it/s] Processing metadata. Finished processing metadata. Renaming files. 100%|███████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 999.12it/s] Scrape Completed
Scrape Processing Name: angeliyarose Type: Messages Processing Scraped Messages 100%|████████████████████████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 7496.08it/s] Processing metadata. Finished processing metadata. Renaming files. 100%|████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:00<?, ?it/s] Scrape Completed
PLEASE HELP ME
Check your config.json file. If under "jobs" > "scrape", you have "messages" set to true, then it will scrape all your messages including those from expired profiles. You only need to set this to true if you want to scrape your messaging history with all profiles you've subscribed to currently and historically. Setting this to false, but keeping "subscriptions" set to true will scrape all your active profiles (including messages from them).
"jobs": { "scrape": { "subscriptions": true, "messages": false, "paid_content": true }, Tranks
Scraping Subscriptions
Scrape Processing
Name: ofcreator
Type: Stories
Traceback (most recent call last):
  File "M:\OnlyFans\start_ofd.py", line 66, in <module>
    asyncio.run(main())
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 646, in run_until_complete
    return future.result()
  File "M:\OnlyFans\start_ofd.py", line 52, in main
    api = await main_datascraper.start_datascraper(config, site_name)
  File "M:\OnlyFans\datascraper\main_datascraper.py", line 131, in start_datascraper
    await default(datascraper)
  File "M:\OnlyFans\datascraper\main_datascraper.py", line 101, in default
    await main_helper.process_jobs(datascraper, subscription_list, site_settings)
  File "M:\OnlyFans\helpers\main_helper.py", line 1040, in process_jobs
    await datascraper.start_datascraper(authed, subscription.username)
  File "M:\OnlyFans\modules\module_streamliner.py", line 84, in start_datascraper
    await self.prepare_scraper(subscription, content_type)
  File "M:\OnlyFans\modules\module_streamliner.py", line 232, in prepare_scraper
    master_set.extend(await self.datascraper.get_all_stories(subscription))
  File "M:\OnlyFans\modules\onlyfans.py", line 388, in get_all_stories
    highlights = await subscription.get_highlights()
  File "M:\OnlyFans\apis\onlyfans\classes\user_model.py", line 283, in get_highlights
    results = [create_highlight(x) for x in results]
  File "M:\OnlyFans\apis\onlyfans\classes\user_model.py", line 283, in <listcomp>
    results = [create_highlight(x) for x in results]
  File "M:\OnlyFans\apis\onlyfans\classes\hightlight_model.py", line 3, in __init__
    self.id: int = option.get("id")
AttributeError: 'str' object has no attribute 'get'
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x000001A639B7DEA0>
Traceback (most recent call last):
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 116, in __del__
    self.close()
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 108, in close
    self._loop.call_soon(self._call_connection_lost, None)
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 750, in call_soon
    self._check_closed()
  File "C:\Users\user1\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 515, in _check_closed
    raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
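For anyone wondering why the error is AttributeError: 'str' object has no attribute 'get': iterating over the new wrapper dict yields its string keys ("data", "hasMore"), not the highlight dicts, and the comprehension then passes those strings to the model. A tiny standalone repro (the data is made up, the mechanism is standard Python):

```python
# Shape of the new API response, per the comments above.
results = {"data": [{"id": 1}], "hasMore": False}

# Iterating a dict yields its keys, so x is "data", then "hasMore".
keys = [x for x in results]
assert keys == ["data", "hasMore"]

# Calling .get("id") on one of those string keys is exactly the reported failure.
try:
    parsed = [x.get("id") for x in results]
except AttributeError as e:
    assert "'str' object has no attribute 'get'" in str(e)
```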