datawhores / OF-Scraper

A completely revamped and redesigned fork, reimagined from scratch based on the original onlyfans-scraper

Hanging on downloading messages for a model #378

Closed · Nostang3 closed this 2 months ago

Nostang3 commented 4 months ago

Describe the bug

When trying to download a specific user's data that includes messages, it hangs forever. No error messages appear on screen or in the log; I have to force close to stop it. I believe this just started happening with this update.

To Reproduce

Steps to reproduce the behavior:

  1. Perform Action(s)
  2. Download content from a user
  3. Answer False to "Scrape entire paid page"
  4. Select: Messages
  5. Download from specific model
  6. Profit

Expected behavior

Download requested data

Screenshots/Logs

See attached/linked

This is the part of the GUI where it gets stuck; the timer just ticks up forever.

Subscription Active: True
 [messages.log_after_before:494]                                                                                                                messages.py:494
Setting Message scan range for nikolemitchell from 2024-04-19 03:00:40 to 2024-05-20 00:07:58

Hint: append ' --after 2000' to command to force scan of all messages + download of new files only
Hint: append ' --after 2000 --force-all' to command to force scan of all messages + download/re-download of all files

 [helpers.posts_type_filter:77]  filtering Media to images,audios,videos                                                                          helpers.py:77
 [helpers.previous_download_filter:215]  reading database to retrive previous downloads                                                          helpers.py:215
 [helpers.previous_download_filter:233]  Downloading unique media across all models                                                              helpers.py:233
 [text.textDownloader:21]  Skipping Downloading of Text Files                                                                                        text.py:21
Progress: (0 photos, 0 videos, 0 audios, 0 skipped, 0 failed || 0/150||0 B/2.27 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━   0% 0:00:44
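
A stall like the one above, where the timer keeps climbing but nothing is downloaded and no error is logged, is typically an awaited task that never completes. As a generic illustration (not OF-Scraper's actual code), wrapping each concurrent download in `asyncio.wait_for` turns a silent hang into a visible timeout:

```python
# Generic sketch: a download coroutine that never finishes will hang
# asyncio.gather() forever unless each task is bounded by a timeout.
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for a network read
    return f"got {name}"

async def guarded_fetch(name: str, delay: float, timeout: float) -> str:
    try:
        # wait_for cancels the task and raises if it exceeds the timeout
        return await asyncio.wait_for(fetch(name, delay), timeout=timeout)
    except asyncio.TimeoutError:
        return f"timed out: {name}"

async def main() -> list[str]:
    # "b" simulates the stuck message download
    return await asyncio.gather(
        guarded_fetch("a", 0.01, timeout=0.5),
        guarded_fetch("b", 10.0, timeout=0.1),
    )

results = asyncio.run(main())
print(results)
```

With the guard, the stuck task surfaces as "timed out: b" instead of blocking the whole run indefinitely.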

Config

See attached/linked

System Info

I usually do a full scrape once or twice a day of all my models, and it always hangs on this particular model. I can reproduce the hang by running the script for just that single user, so the full scrape itself is not the issue. It gets through at least a dozen or two other models during a full scrape without problems, so I'm not sure what it is about this model or her messages that causes the hang. She's been sending a lot of audio messages lately. I downloaded from another model that I know sends audio messages, and that finished, although it took longer than expected (maybe 10-20 seconds) to report the download complete. Or maybe it was stuck doing something with the second of the two video files it said it grabbed in the GUI: it was downloading 3 files (2 video, 1 audio), had reported grabbing both videos, and the audio had yet to download, which is why I assume what I assume.

Attachments: all but messages.txt · config.json · just messages.txt

CannotTouch commented 4 months ago

I have the same problem on several models with 3.9.8, but if I use 3.8.6 it works on the models that are stuck on 3.9.8, while 3.8.6 gets stuck on other models (that work on 3.9.8). There are no error messages in the log, even at debug level. It's just stuck, with the timer growing and growing, but nothing is downloaded anymore.

datawhores commented 4 months ago

Please try 3.10dev3 and see if that fixes the issue

Nostang3 commented 4 months ago

Please try 3.10dev3 and see if that fixes the issue

I originally installed via pip, so when I try to install this version it doesn't show up; pip only shows 3.10.dev2. So I downloaded the exe and ran that. I set it to Messages only, because that is what stalled while downloading. While it didn't stall, it also didn't download anything: the last download in the messages folder is from the 13th, and it's skipping all the images and audios in the messages. Error from log attached.

Auto updating config...
 [logs.printStartValues:28]  Log Level: DEBUG                                                                                                        logs.py:28
 [logs.printStartValues:33]  config path: C:\Users\your_username\.config\ofscraper\config.json                                                       logs.py:33
 [logs.printStartValues:34]  profile path: C:\Users\your_username\.config\ofscraper\main_profile                                                     logs.py:34
 [logs.printStartValues:35]  log folder: C:\Users\your_username\.config\ofscraper\logging                                                            logs.py:35

 _______  _______         _______  _______  _______  _______  _______  _______  _______
(  ___  )(  ____ \       (  ____ \(  ____ \(  ____ )(  ___  )(  ____ )(  ____ \(  ____ )
| (   ) || (    \/       | (    \/| (    \/| (    )|| (   ) || (    )|| (    \/| (    )|
| |   | || (__     _____ | (_____ | |      | (____)|| (___) || (____)|| (__    | (____)|
| |   | ||  __)   (_____)(_____  )| |      |     __)|  ___  ||  _____)|  __)   |     __)
| |   | || (                   ) || |      | (\ (   | (   ) || (      | (      | (\ (
| (___) || )             /\____) || (____/\| ) \ \__| )   ( || )      | (____/\| ) \ \__
(_______)|/              \_______)(_______/|/   \__/|/     \||/       (_______/|/   \__/

Version 3.10.dev1
 [paths.temp_cleanup:42]  Cleaning up temp files                                                                                                    paths.py:42

Key Mode: manual

WARNING:Make sure you have all the correct settings for choosen cdm
https://of-scraper.gitbook.io/of-scraper/cdm-options
? Main Menu: What would you like to do? Perform Action(s)
? Action Menu: What action(s) would you like to take? Download content from a user
? Scrape entire paid page

[Warning: initial Scan can be slow]
[Caution: You should not need this unless you are looking to scrape paid content from a deleted/banned model] False
 [scrape_context.scrape_context_manager:28]                                                                                                scrape_context.py:28
==============================
 starting script
==============================

 [tools.print_current_profile:37]  Using profile: main_profile                                                                                      tools.py:37
Status - UP
OF-Scraper
╭───────────────────────────────────────────────────────────────────── Activity Progress ─────────────────────────────────────────────────────────────────────╮
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
? Which area(s) would you do you want to download ['Messages']
Welcome, REDACTED | REDACTED
Warning: Numbering on OF site can be iffy
Example Including deactived accounts in expired
See: https://of-scraper.gitbook.io/of-scraper/faq#number-of-users-doesnt-match-account-number
╭───────────────────────────────────────────────────────────────────── Activity Progress ─────────────────────────────────────────────────────────────────────╮
│ Getting subscriptions                                                                                                                                       │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
? Which models do you want to scrape
: nikolemitchell
╭───────────────────────────────────────────────────────────────────── Activity Progress ─────────────────────────────────────────────────────────────────────╮
│ Getting subscriptions                                                                                                                                       │
│ Users with Actions Completed ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/1                       │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────── API Progress ────────────────────────────────────────────────────────────────────────╮
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────── API Messages ────────────────────────────────────────────────────────────────────────╮
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ [messages.log_after_before:467]                                                                                                                messages.py:467
Setting Message scan range for nikolemitchell from 2024-04-24 03:00:56 to 2024-05-29 20:43:45

Hint: append ' --after 2000' to command to force scan of all messages + download of new files only
Hint: append ' --after 2000 --force-all' to command to force scan of all messages + download/re-download of all files

 [helpers.posts_type_filter:79]  filtering Media to images,audios,videos                                                                          helpers.py:79
 [helpers.previous_download_filter:241]  reading database to retrive previous downloads                                                          helpers.py:241
 [helpers.previous_download_filter:259]  Downloading unique media across all models                                                              helpers.py:259
 [text.textDownloader:18]  Skipping Downloading of Text Files                                                                                        text.py:18
╭───────────────────────────────────────────────────────────────────── Activity Progress ─────────────────────────────────────────────────────────────────────╮
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────── API Progress ────────────────────────────────────────────────────────────────────────╮
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────── API Messages ────────────────────────────────────────────────────────────────────────╮
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
 [scrape_context.scrape_context_manager:37]                                                                                                scrape_context.py:37
===========================
 Script Finished
Run Time:  0:01:04
===========================

3.10.dev3.log

CannotTouch commented 4 months ago

For me it's still stuck with commit bc49446

https://github.com/datawhores/OF-Scraper/issues/388#issuecomment-2141441018

datawhores commented 4 months ago

can you try the latest version

Nostang3 commented 4 months ago

can you try the latest version

That worked on both models I was having issues with. I'm going to do a full scan with it to test, but I don't expect problems since the two models work now. Thanks.

Nostang3 commented 4 months ago

I did get this error when downloading a model when it tried to download feed pictures. Is this just a dev thing or should I make a new issue?

 2024-05-31 20:10:38:[downloadnormal.process_dicts:96]  Media:3341359328 Post:1095662590 Download Failed because
module 'ofscraper.download.shared.globals.globals' has no attribute 'maxfile_sem'
 2024-05-31 20:10:38:[helpers.inner:11]  Traceback (most recent call last):
  File "ofscraper\download\downloadnormal.py", line 92, in process_dicts
  File "asyncio\tasks.py", line 605, in _wait_for_one
  File "ofscraper\download\downloadnormal.py", line 165, in download
AttributeError: module 'ofscraper.download.shared.globals.globals' has no attribute 'maxfile_sem'
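
This kind of AttributeError usually means a module-level global that the download coroutine expects is only created by an init function, and some code path reached the download before init ran. A minimal sketch of the pattern, with illustrative names standing in for the project's real module (this is not OF-Scraper's actual code):

```python
# Sketch of a module-level semaphore that must be initialized before use.
import asyncio
import types

# Stand-in for the module ofscraper.download.shared.globals.globals
g = types.SimpleNamespace()

def init_globals(max_files: int = 10) -> None:
    # The semaphore must exist before any download coroutine runs; the
    # reported crash happens when a code path skips this initialization.
    g.maxfile_sem = asyncio.Semaphore(max_files)

async def download_one(name: str) -> str:
    # Raises AttributeError if init_globals() was never called, mirroring
    # "has no attribute 'maxfile_sem'" in the log above
    async with g.maxfile_sem:
        return f"downloaded {name}"

async def main() -> list[str]:
    init_globals(max_files=2)  # limit concurrently open files to 2
    return await asyncio.gather(*(download_one(f"file{i}") for i in range(3)))

results = asyncio.run(main())
print(results)
```

The fix in such cases is simply to guarantee the init function runs on every entry path before downloads start.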
datawhores commented 4 months ago

It should be fixed in the next version

CannotTouch commented 4 months ago

I confirm that with https://github.com/datawhores/OF-Scraper/commit/6f3eb4edcf3b861a98d0bae320a7f403b4ca77b8 I'm no longer stuck. Thanks for the support.

datawhores commented 2 months ago

Closing