Closed tamodolo closed 3 weeks ago
I somehow still can't reproduce this (I don't have an account):
yt-dlp https://www.crunchyroll.com/watch/G64P1GE8R/fortune-is-unpredictable-and-mutable --simulate --verbose
[debug] Command-line config: ['https://www.crunchyroll.com/watch/G64P1GE8R/fortune-is-unpredictable-and-mutable', '--simulate', '--verbose']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2023.07.06.194006 [92315c037] (linux_exe)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-6.4.2-surface-x86_64-with-glibc2.37 (OpenSSL 3.1.1 30 May 2023, glibc 2.37)
[debug] exe versions: ffmpeg 5.1.2 (setts), ffprobe 5.1.2, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.05.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3
[debug] Proxy map: {}
[debug] Loaded 1855 extractors
[crunchyroll] Extracting URL: https://www.crunchyroll.com/watch/G64P1GE8R/fortune-is-unpredictable-and-mutable
[crunchyroll] Retrieving main page (lang=None)
[debug] [crunchyroll] Using cxApiParam=cr_web
[crunchyroll] Authenticating with grant_type=client_id
[crunchyroll] G64P1GE8R: Downloading object info JSON (cms)
[crunchyroll] Retrieving signed policy
[crunchyroll] G64P1GE8R: Downloading stream info JSON (signed cms)
[crunchyroll] To get all formats of a hardsub language, use "--extractor-args crunchyrollbeta:hardsub=<language_code or all>". See https://github.com/yt-dlp/yt-dlp#crunchyrollbeta-crunchyroll for more info
[crunchyroll] G64P1GE8R: Downloading adaptive_hls HLS manifest
[crunchyroll] G64P1GE8R: Downloading chapter info
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] G64P1GE8R: Downloading 1 format(s): adaptive_hls-6560
As for a no-input workaround, I just added a step to my script that opens Firefox on the CR main site. It's working, but it's not ideal. Just a dirty patch.
For those who love using scripts, I believe curl is capable of generating valid cookies if you pass an invalid* user agent to it. For example, this should work, but I can't test it:
curl https://www.crunchyroll.com/ -c crunchyrollcookies.txt -H "User-Agent: ujhgtfjhgfbv"
yt-dlp URL --cookies crunchyrollcookies.txt --add-header "User-Agent: ujhgtfjhgfbv"
(don't bother editing the URL sent to Curl, any Crunchyroll URL should generate working cookies)
*I'm not joking: Crunchyroll blocks curl if it's using a valid Chrome user agent. I guess it can identify that curl isn't Chrome, but it can't identify that curl isn't ujhgtfjhgfbv (or any other random string)
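If you script this in Python instead of shell, the Netscape-format cookie file that curl -c writes can be inspected with the standard library before handing it to yt-dlp. A minimal sketch — the filename matches the curl example above, but the cookie name/value written here is a made-up placeholder, not real Crunchyroll data:

```python
from http.cookiejar import MozillaCookieJar

# Write a minimal Netscape-format cookie file like the one `curl -c`
# produces (the cookie name/value is an illustrative placeholder).
with open("crunchyrollcookies.txt", "w") as f:
    f.write("# Netscape HTTP Cookie File\n")
    f.write(".crunchyroll.com\tTRUE\t/\tTRUE\t2147483647\t__cf_bm\tdummy\n")

# Load it back and list the cookie names, e.g. to sanity-check the
# file before passing it to yt-dlp via --cookies.
jar = MozillaCookieJar("crunchyrollcookies.txt")
jar.load(ignore_discard=True, ignore_expires=True)
names = [c.name for c in jar]
print(names)  # ['__cf_bm']
```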
On that note, I highly doubt this will work, but can someone who's experiencing this issue please test if --add-header "User-Agent: ujhgtfjhgfbv"
(with no cookies) fixes it
from curl_cffi import requests

requests.get(
    url,
    impersonate="chrome110",
)
Setting impersonation like this when making requests resolves the issue for me.
We have discussed the possibility of adding support for curl-impersonate sometime in the future. It requires #2861 and some other things to be done first, though.
This is not a yt-dlp setting; this is my own program for accessing the Crunchyroll API. It can be accessed normally if I do it like this; otherwise there is a 403 error. I'm sure this is the correct solution
I'm sure this is the correct solution
Yes, adding support for Curl Impersonate is the ultimate solution to this problem. It can't be added until https://github.com/yt-dlp/yt-dlp/pull/2861 is merged, but that PR is actually extremely close to being ready to be merged!
On that note, I highly doubt this will work, but can someone who's experiencing this issue please test if
--add-header "User-Agent: ujhgtfjhgfbv"
(with no cookies) fixes it
Also, can someone please test if cookies from the Brave browser work, since Brave is based on Chrome but seems to send fewer headers
It can't be added until #2861 is merged, but that PR is actually extremely close to being ready to be merged!
It only adds the framework that would be needed to support this. Actually adding support would be quite a ways away.
On that note, I highly doubt this will work, but can someone who's experiencing this issue please test if
--add-header "User-Agent: ujhgtfjhgfbv"
(with no cookies) fixes it
Doesn't work for me :/
On that note, I highly doubt this will work, but can someone who's experiencing this issue please test if
--add-header "User-Agent: ujhgtfjhgfbv"
(with no cookies) fixes it
Also, can someone please test if cookies from the Brave browser work, since Brave is based on Chrome but seems to send fewer headers
Also doesn't work for me :/
Did you pass Brave's --user-agent, after visiting Crunchyroll at least once?
whoops, visiting crunchyroll once won't work, you have to have visited it within the last few minutes
Do you get a "checking your browser" page every time you visit crunchyroll in brave?
For me: yes - I can't even pass this "checking your browser" page from Brave browser.
@rdamas do you have javascript disabled or something? Also, can you access kick.com (an unrelated site that also uses cloudflare ddos protection)
No, kick.com has the same issues. No, javascript is enabled, and joyn.de (e.g.) works perfectly well in brave with the same settings.
It's somewhat a waste of time disabling JavaScript nowadays; you can't really escape it anymore. The internet is JavaScript now, so disabling it will give you more headaches than it solves. Also, the extra "privacy" you think it gives is an illusion. You plug a cable into your PC, end of privacy, period.
@rdamas does kick.com work in guest mode (click the 3 lines in the top right and then click open guest window, which is on top of the zoom controls)?
It sounds like you have some setting that's making Cloudflare DDOS protection hate you, and I'm really curious what that setting is. Are you using a VPN? Do you get that same screen in a different browser (in incognito mode)?
Works in private mode of the browser; it seems they can detect the installed "VideoDownloadHelper" extension. When deactivated, kick.com works.
Crunchyroll is usable in private mode as well, and I could download a video with yt-dlp using --cookies-from-browser brave and Brave's user agent string. Platform: macOS.
I could download a video with yt-dlp using --cookies-from-browser brave and braves' user agent string
So, just to clarify, the download works with --cookies-from-browser brave (with Brave's user agent, right after visiting the site and going through that screen), but not --cookies-from-browser chrome (with Chrome's user agent, right after visiting the site and going through that screen)?
If that's the case, can you please compare https://www.whatismybrowser.com/detect/what-http-headers-is-my-browser-sending in both browsers, and try passing all the headers that only Chrome has to yt-dlp, using --add-header 'HEADER: VALUE' (e.g. --add-header 'SEC-CH-UA: "Not.A/Brand";v="8", "Chromium";v="114"')
Need to reinstall chrome first - haven't used chrome since ages. I'll check.
Update: Unfortunately (?) downloading a video using --cookies-from-browser chrome works for me as well.
If anyone can reproduce this and can test if https://github.com/yt-dlp/yt-dlp/pull/7595 can help to fix this that would be greatly appreciated.
In this dev branch you can hackily force curl-impersonate to be used for all urls with a env var for testing:
YT_DLP_CCI_IMPERSONATE=chrome110 yt-dlp -v URL
Same cookie file, same machine,
NOT WORKING using Windows, command running in Powershell (yt-dlp --version, 2023.07.06)
WORKING using Ubuntu, command running in WSL2 ubuntu shell (yt-dlp --version, 2023.07.06)
@fe80Grau cookies expire after 30 minutes. Have you run that multiple times to make sure it's not just random? Which OS and browser created the cookies? EDIT: Is it possible Linux is running from the latest commit and windows is running from the latest stable?
I have tried it. It's always 30 min after refreshing the page in your browser / making a new request, then starting the yt-dlp session.
@JajajTec tried what?
Oh sorry, I was in a hurry. I have tried it with multiple browsers: Chrome, Brave, Firefox. All are the same. The cookies work for 30 min after the last request the browser made (refreshing/loading a link, getting a video fragment) and importing them into yt-dlp. That means if you keep a video playing in the background and make yt-dlp reload the cookies every 30 min, it would work no problem.
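That 30-minute window can be tracked with a tiny helper before deciding whether to re-export cookies. A sketch — the 30-minute limit is taken from the observations in this thread, not from any documented API:

```python
import time

def cookies_are_fresh(exported_at, now=None, max_age=30 * 60):
    """True if cookies exported at `exported_at` (a Unix timestamp)
    are still inside the ~30-minute window observed in this thread."""
    now = time.time() if now is None else now
    return (now - exported_at) < max_age

# Cookies exported 10 minutes ago are still usable; 40 minutes ago, not.
print(cookies_are_fresh(0, now=10 * 60))  # True
print(cookies_are_fresh(0, now=40 * 60))  # False
```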
Hmm. Crunchy went through a maintenance outage about an hour ago. Now I get this error message, no matter what I try. Looks like support for Crunchy is gone for the moment...
ERROR: [crunchyroll] G14U4WXJ9: Unable to extract initial state; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
Anyone else having this problem or is it just me?
EDIT: I've just opened Crunchy in my Edge Browser and the Crunchy Website displayed a message "Your device has been trusted." or something like that (already closed the message and can't reproduce it).
@Killua-Zolldyck having the same problem
It's the same here now.
I tested MrKrabat's KODI plugin and it still works. This plugin is amazing haha!
This is the verbose:
yt-dlp.exe -v --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0" --cookies-from-browser firefox --sub-langs pt-BR --remux-video mkv --embed-subs https://www.crunchyroll.com/pt-br/watch/GK9U3MVX8 -o "%(season)s - %(episode_number)s.%(ext)s" --paths "C:\animetemp"
[debug] Command-line config: ['-v', '--user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0', '--cookies-from-browser', 'firefox', '--sub-langs', 'pt-BR', '--remux-video', 'mkv', '--embed-subs', 'https://www.crunchyroll.com/pt-br/watch/GK9U3MVX8', '-o', '%(season)s - %(episode_number)s.%(ext)s', '--paths', 'C:\\animetemp']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2023.06.22 [812cdfa06] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22621-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 2022-12-04-git-6c814093d8-full_build-www.gyan.dev (setts), ffprobe 2022-12-04-git-6c814093d8-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.05.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3
[Cookies] Extracting cookies from firefox
[debug] Extracting cookies from: "C:\Users\tamod\AppData\Roaming\Mozilla\Firefox\Profiles\dxxfcfla.default-release-1688405545434\cookies.sqlite"
[Cookies] Extracted 228 cookies from firefox
[debug] Proxy map: {}
[debug] Loaded 1851 extractors
[crunchyroll] Extracting URL: https://www.crunchyroll.com/pt-br/watch/GK9U3MVX8
[crunchyroll] Retrieving main page (lang=pt-br/)
ERROR: [crunchyroll] GK9U3MVX8: Unable to extract initial state; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 708, in extract
File "yt_dlp\extractor\crunchyroll.py", line 320, in _real_extract
File "yt_dlp\extractor\crunchyroll.py", line 114, in _call_api
File "yt_dlp\extractor\crunchyroll.py", line 99, in _call_base_api
File "yt_dlp\extractor\crunchyroll.py", line 72, in _update_query
File "yt_dlp\extractor\common.py", line 1270, in _search_json
File "yt_dlp\extractor\common.py", line 1256, in _search_regex
The initial state extractor bug is being tracked in its own issue, and there is a patch available here:
https://github.com/yt-dlp/yt-dlp/issues/7624#issuecomment-1638912586
Can confirm: the patch works for me.
Compiled master with the patch. Confirmed that it works.
Same here. Compiled with the patch from bashonly. Can confirm the patch works fine.
Good news: Cloudflare 403s are gone(?)
They're still there if you use --user-agent curl/7. Perhaps Crunchyroll just reduced their settings :eyes:
I wonder what I am doing wrong. I got a fresh copy from the main page and changed the lines as per bashonly's suggestion, and I am still getting a 404 (even with Firefox cookies, which worked fine yesterday)
@Burve Try passing the --user-agent of the browser you are loading cookies from
Hmm, it looks like the user agent helps, and it works with Edge
@bashonly I'm not used to compiling Python scripts. Is there any chance there will be a nightly release including the patch?
Not sure why, but I got this error for the first time today (maybe because I was already using both --cookies 'pathto/cookies.txt' and --user-agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36 Edg/109.0.1518.78'). Today I got the same error as everybody else, even with bashonly's fix (I was able to compile my exe, but the error is the same). I think there is still a problem with the script:
[debug] Command-line config: ['https://www.crunchyroll.com/it/watch/G50UZ10D5', '--username', 'PRIVATE', '--password', 'PRIVATE', '--write-subs', '--sub-langs', 'it-IT', '--prefer-ffmpeg', '--no-check-certificate', '--cookies', 'PathToCookieFile\\cookies.txt', '--user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/109.0.0.0 Safari/537.36 Edg/109.0.1518.78', '--verbose']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2023.07.06 [b532a3481] (py2exe)
[debug] Python 3.11.0 (CPython AMD64 64bit) - Windows-10-10.0.19045 (OpenSSL 1.1.1q 5 Jul 2022)
[debug] exe versions: ffmpeg git-2020-08-02-b48397e, ffprobe git-2020-08-02-b48397e, rtmpdump 2.3
[debug] Optional libraries: brotli-1.0.9, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3
[debug] Proxy map: {}
[debug] Loaded 1855 extractors
[crunchyroll] Extracting URL: https://www.crunchyroll.com/it/watch/G50UZ10D5
[crunchyroll] Retrieving main page (lang=it/)
ERROR: [crunchyroll] G50UZ10D5: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: Forbidden>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.pyc", line 715, in extract
File "yt_dlp\extractor\crunchyroll.pyc", line 320, in _real_extract
File "yt_dlp\extractor\crunchyroll.pyc", line 114, in _call_api
File "yt_dlp\extractor\crunchyroll.pyc", line 99, in _call_base_api
File "yt_dlp\extractor\crunchyroll.pyc", line 69, in _update_query
File "yt_dlp\extractor\common.pyc", line 1118, in _download_webpage
File "yt_dlp\extractor\common.pyc", line 1069, in download_content
File "yt_dlp\extractor\common.pyc", line 903, in _download_webpage_handle
File "yt_dlp\extractor\common.pyc", line 860, in _request_webpage
File "yt_dlp\networking\_urllib.pyc", line 432, in _send
File "urllib\request.pyc", line 525, in open
File "urllib\request.pyc", line 634, in http_response
File "urllib\request.pyc", line 563, in error
File "urllib\request.pyc", line 496, in _call_chain
File "urllib\request.pyc", line 643, in http_error_default
urllib.error.HTTPError: HTTP Error 403: Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.pyc", line 4059, in urlopen
File "yt_dlp\networking\common.pyc", line 89, in send
File "yt_dlp\networking\_helper.pyc", line 203, in wrapper
File "yt_dlp\networking\common.pyc", line 297, in send
File "yt_dlp\networking\_urllib.pyc", line 437, in _send
yt_dlp.networking.exceptions.HTTPError: HTTP Error 403: Forbidden
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "yt_dlp\extractor\common.pyc", line 847, in _request_webpage
File "yt_dlp\YoutubeDL.pyc", line 4078, in urlopen
yt_dlp.networking.exceptions._CompatHTTPError: HTTP Error 403: Forbidden
bashonly's fix only addresses the initial state error; for the 403, the workaround is to use cookies from the browser and refresh them every 30 min.
This is the prior 403 error this topic is about. To work around it, use cookies from a real browser (option --cookies-from-browser). Firefox and Edge are known to work (I haven't tested Edge, but Firefox works here). You need to use that together with the browser's real --user-agent. You can find your browser's user agent by searching "What is my user agent" in the DuckDuckGo search engine. Then log in on CR using Firefox and try to use yt-dlp (cookies last for 30 minutes).
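Putting that workaround together programmatically might look like this. A sketch that only assembles the command line (the URL and user-agent string below are placeholders; yt-dlp must be on PATH when you actually run it):

```python
def build_ytdlp_args(url, browser="firefox", user_agent=None):
    """Assemble a yt-dlp invocation using browser cookies and, if
    given, the matching --user-agent, per the workaround above."""
    args = ["yt-dlp", "--cookies-from-browser", browser]
    if user_agent:
        args += ["--user-agent", user_agent]
    args.append(url)
    return args

# Pass the result to subprocess.run() to actually download.
print(build_ytdlp_args("https://www.crunchyroll.com/watch/XXXX",
                       user_agent="Mozilla/5.0 (example)"))
```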
Ah, so the cookies generated by yt-dlp will not work? Also, I notice now that my generated cookie file is edited by yt-dlp when I use --cookies 'pathtocookie'... not sure why. I used to just create that file from the webview in my own interface, so I'm not sure if I can still use my own or if I need to use the yt-dlp-generated file.
yt-dlp isn't a real browser, so it won't ever update the cookies. Use Firefox instead, with the commands I wrote in my previous comment.
OK, I got the initial state error now (so I should be able to fix it with bashonly's fix). But I still have a question. Since I can still generate a cookies.txt file from a real browser (a webview that uses Edge)... how can I use that file instead of the one generated by yt-dlp or from a supported browser? In previous versions, yt-dlp did not generate a cookies.txt file when I used --cookies, it just read my file... now it will override it. I think my own cookie file should work fine. Do you know if there is any way to use it?
Because CR limits cookies to 30 minutes, you'll have to regenerate cookies.txt whenever you want to use yt-dlp. It's an extra step doing it that way. Just leave Firefox logged in to your account and open the CR site when you want to use yt-dlp. If you have a script that calls yt-dlp, add a line to open Firefox on the CR site and that's it.
I don't need to... My interface just extracts a cookies.txt file at the press of a button from my webview, and it also extracts the user agent at the same time... I made it just for me to use with yt-dl (before yt-dlp). If I can still use that cookies.txt file I will; if I can't use it anymore because there is no way to do it, I will change the interface. I just want to be sure I have to change it. Is there no way to use my own cookies.txt file?
PS: I can confirm that it works with the Opera browser too.
If you want to keep changes to a minimum, then add a line to open the CR site in the desired browser and a wait time before getting the cookies.txt. This is what I did to solve mine (only that I use --cookies-from-browser instead)
Like I said, that's OK if there is no way to pass my own cookies.txt file. Since I use a real browser (a webview that uses Edge) to make it, I think it should work... but only if there is an alternative to --cookies that will not override my own file. I don't really want to open another browser if I can avoid it.
@federicorosso1993 you can pass your own cookie file with the --cookies option; the option's argument needs to be the path to the cookies file, but the cookies need to be exported within the last 30 minutes from a browser matching your passed --user-agent
Ah, sorry. I don't know a way to prevent changes to cookies.txt. Afaik yt-dlp isn't supposed to change it, but I don't know if it actually does. To be sure yt-dlp won't change it, you can mark the file as read-only before calling yt-dlp.
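The read-only trick can be done from a script as well. A minimal sketch — "cookies.txt" is a placeholder path, and note that Windows honors these permission bits differently from POSIX:

```python
import os
import stat

path = "cookies.txt"  # placeholder; use your real cookie file path
with open(path, "w") as f:
    f.write("# Netscape HTTP Cookie File\n")

# Clear the write bits so yt-dlp (or anything else) can't rewrite
# the file in place.
os.chmod(path, stat.S_IREAD)
writable = bool(os.stat(path).st_mode & stat.S_IWUSR)
print(writable)  # False: owner write bit is cleared
```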
@Killua-Zolldyck a proper fix is being worked on currently
you can pass your own cookie file with the --cookies option, the option's argument needs to be the path to the cookies file
I did it, but my own file, which just starts with
# Netscape HTTP Cookie File
gets overwritten by the yt-dlp cookies file, which starts with
# Netscape HTTP Cookie File
# This file is generated by yt-dlp. Do not edit.
I pass it like this: --cookies "C:\Users\myusername\cookies.txt"
Region
Brazil
Provide a description that is worded well enough to be understood
CR probably broke CR plugin... again.
I tried to download lists to grab links for new episodes, just to get a 403 error. Downloading episodes also doesn't work anymore.
KODI plugin is working fine. Website is working fine. Tested the PS4 app and it's also working fine.
Provide verbose output that clearly demonstrates the problem
Run your yt-dlp command with -vU added (yt-dlp -vU <your command line>)
If using API, add 'verbose': True to YoutubeDL params instead
Copy the WHOLE output (starting with [debug] Command-line config) and insert it below
Complete Verbose Output