taux1c / onlyfans-scraper

A tool that allows you to print to file all content you are subscribed to on OnlyFans, including content you have unlocked or that has been sent to you in messages.
MIT License
289 stars · 34 forks

How to solve this? #74

Open XingGW opened 9 months ago

XingGW commented 9 months ago

Microsoft Windows [Version 10.0.19045.3448] (c) Microsoft Corporation. All rights reserved.

C:\Users\Wilson\.config\onlyfans-scraper>onlyfans-scraper
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\Wilson\AppData\Local\Programs\Python\Python311\Scripts\onlyfans-scraper.exe\__main__.py", line 4, in <module>
  File "C:\Users\Wilson\AppData\Local\Programs\Python\Python311\Lib\site-packages\onlyfans_scraper\scraper.py", line 21, in <module>
    from .api import init, highlights, me, messages, posts, profile, subscriptions, paid
  File "C:\Users\Wilson\AppData\Local\Programs\Python\Python311\Lib\site-packages\onlyfans_scraper\api\paid.py", line 18, in <module>
    config = read_config()['config']
             ^^^^^^^^^^^^^
  File "C:\Users\Wilson\AppData\Local\Programs\Python\Python311\Lib\site-packages\onlyfans_scraper\utils\config.py", line 26, in read_config
    config = json.load(f)
             ^^^^^^^^^^^^
  File "C:\Users\Wilson\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 293, in load
    return loads(fp.read(),
           ^^^^^^^^^^^^^^^^
  File "C:\Users\Wilson\AppData\Local\Programs\Python\Python311\Lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Wilson\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\Wilson\AppData\Local\Programs\Python\Python311\Lib\json\decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
               ^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Invalid \escape: line 4 column 29 (char 86)

C:\Users\Wilson\.config\onlyfans-scraper>
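The last line of the traceback points at the immediate problem: the JSON config file that read_config loads contains an invalid \escape at line 4, column 29, which is almost certainly a Windows path written with single, unescaped backslashes. A minimal sketch of the failure and the fix follows; the save_location key is a hypothetical field name used for illustration, but the mechanics are the same for any path value in the config:

```python
import json

# Reproducing the error: \U and \W are not valid JSON escape sequences,
# so a Windows path written with single backslashes fails to parse.
# ("save_location" is a hypothetical field name for illustration.)
bad = r'{"save_location": "C:\Users\Wilson\Downloads"}'
try:
    json.loads(bad)
except json.JSONDecodeError as e:
    print(e)  # Invalid \escape: line 1 column ... (char ...)

# The fix: double every backslash (or use forward slashes, which
# Windows accepts in most contexts).
good = r'{"save_location": "C:\\Users\\Wilson\\Downloads"}'
print(json.loads(good))  # {'save_location': 'C:\\Users\\Wilson\\Downloads'}
```

Editing line 4 of the config this way would get past the JSONDecodeError, although, as the reply below notes, the run would still fail further on.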

taux1c commented 8 months ago

This script isn't maintained anymore and will not work. Even if you fixed this error, you would keep getting errors: fixing it properly would require a complete rewrite of the API calls plus added functionality. I left the code up so that anyone who wants to can base a new scraper on it.