Closed: tamodolo closed this issue 1 week ago
The 403 errors are coming from Cloudflare; their fingerprinting is determining that yt-dlp is not a web browser and blocking its requests to Crunchyroll.
There is a workaround for this. It requires cookies from a browser with the same IP address that you will be using with yt-dlp. Here are the steps:
1. Refresh your cookies in the browser by loading a Crunchyroll video page until the video starts playing. If you have not done this within the past 30 minutes, you will need to do it again.
2. Find your browser's User-Agent string; you need the entire string (starting with `Mozilla/5.0`) and it needs to be up-to-date (i.e. if your browser updates, you'll need to get its new UA string with the current version). In Firefox it's on `about:support`; in Chromium browsers it's on `chrome://version` (or your vendor's prefix, like `brave://version`, etc).
3. Pass your browser's user-agent to yt-dlp with `--user-agent "USERAGENT"` along with `--cookies-from-browser firefox`, replacing `USERAGENT` with the actual full user-agent string you got in step 2 and `firefox` with the browser that you are using to browse Crunchyroll.
4. Alternatively, you can pass a cookies file with the `--cookies` option, e.g. `--cookies cookies.txt`. Note that the cookies need to be exported from a fresh browser session (see step 1) within the past 30 minutes. See the wiki for browser extension recommendations.
PS: If you get an `Unable to extract initial state` error, you need to run `yt-dlp --update-to nightly` or install the master branch with pip.
PPS: This appears to not work with MS Edge. Use Chrome/Firefox.
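Putting steps 1-3 together, a minimal invocation looks like the sketch below. The UA value and the URL are placeholders, not from this thread; substitute your own. The command is echoed so it can be inspected first:

```shell
#!/bin/sh
# Workaround sketch: pass the browser's exact UA together with its cookies.
# UA must be YOUR browser's current user-agent string (placeholder below),
# and URL is a hypothetical Crunchyroll episode URL.
UA='Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/114.0'
URL='https://www.crunchyroll.com/watch/EXAMPLE/example-episode'

# Drop the leading `echo` to actually run it.
echo yt-dlp --user-agent "\"$UA\"" --cookies-from-browser firefox "$URL"
```

Remember that `firefox` here must match the browser whose cookies were refreshed within the last 30 minutes.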
This looks like a cloudflare issue, and I can't reproduce it. This has happened before, and has been temporary. Hopefully that's the case again.
Something you could try:
- refresh your CR cookies in browser
- find your browser's user-agent string; you can type "what is my user-agent" into duckduckgo, google, etc
- pass your browser's user-agent with `--user-agent "USERAGENT"` along with `--cookies-from-browser`
Ok, cookies refreshed. Also tested with another browser, and added the user-agent to the command.
Still forbidden...
yt-dlp.exe -vU --cookies-from-browser vivaldi --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Vivaldi/6.1.3035.111" --sub-langs pt-BR --remux-video mkv --embed-subs -a "C:\animetemp\queue2.txt" -o "%(season)s - %(episode_number)s.%(ext)s" --paths "C:\animetemp"
[debug] Command-line config: ['-vU', '--cookies-from-browser', 'vivaldi', '--user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36 Vivaldi/6.1.3035.111', '--sub-langs', 'pt-BR', '--remux-video', 'mkv', '--embed-subs', '-a', 'C:\\animetemp\\queue2.txt', '-o', '%(season)s - %(episode_number)s.%(ext)s', '--paths', 'C:\\animetemp']
[debug] Batch file urls: ['https://www.crunchyroll.com/pt-br/watch/G0DUNJKE4/scrambling-and-dripping']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2023.06.27.202841 [5b4b92769] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22621-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 2022-12-04-git-6c814093d8-full_build-www.gyan.dev (setts), ffprobe 2022-12-04-git-6c814093d8-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.05.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3
[Cookies] Extracting cookies from vivaldi
[debug] Extracting cookies from: "C:\Users\tamod\AppData\Local\Vivaldi\User Data\Default\Network\Cookies"
[debug] Found local state file at "C:\Users\tamod\AppData\Local\Vivaldi\User Data\Local State"
[Cookies] Extracted 2819 cookies from vivaldi
[debug] cookie version breakdown: {'v10': 2819, 'other': 0, 'unencrypted': 0}
[debug] Proxy map: {}
[debug] Loaded 1852 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Available version: nightly@2023.06.27.202841, Current version: nightly@2023.06.27.202841
Current Build Hash: 9eb7dae08abf2a5c725c0635519bb9568f0892e6b2145850e16ecff62186b637
yt-dlp is up to date (nightly@2023.06.27.202841)
[crunchyroll] Extracting URL: https://www.crunchyroll.com/pt-br/watch/G0DUNJKE4/scrambling-and-dripping
[crunchyroll] Retrieving main page (lang=pt-br/)
ERROR: [crunchyroll] G0DUNJKE4: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 708, in extract
File "yt_dlp\extractor\crunchyroll.py", line 320, in _real_extract
File "yt_dlp\extractor\crunchyroll.py", line 114, in _call_api
File "yt_dlp\extractor\crunchyroll.py", line 99, in _call_base_api
File "yt_dlp\extractor\crunchyroll.py", line 69, in _update_query
File "yt_dlp\extractor\common.py", line 1111, in _download_webpage
File "yt_dlp\extractor\common.py", line 1062, in download_content
File "yt_dlp\extractor\common.py", line 896, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 853, in _request_webpage
File "yt_dlp\extractor\common.py", line 835, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 3786, in urlopen
File "urllib\request.py", line 531, in open
File "urllib\request.py", line 640, in http_response
File "urllib\request.py", line 569, in error
File "urllib\request.py", line 502, in _call_chain
File "urllib\request.py", line 649, in http_error_default
urllib.error.HTTPError: HTTP Error 403: Forbidden
I can confirm that this is an issue at the moment. All links load and play fine in Chrome, Edge, and Firefox. In US Central timezone region. When using yt-dlp, all links produce the same error.
Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>);
This looks like a cloudflare issue, and I can't reproduce it. This has happened before, and has been temporary. Hopefully that's the case again.
Something you could try:
- refresh your CR cookies in browser
- find your browser's user-agent string; you can type "what is my user-agent" into duckduckgo, google, etc
- pass your browser's user-agent with `--user-agent "USERAGENT"` along with `--cookies-from-browser`
FYI: The --user-agent option hasn't worked with Crunchyroll for the last several months.
The --user-agent option hasn't worked with Crunchyroll for the last several months.
It does though? Use it and add `--print-traffic` to your command. You'll see the UA you passed being sent with requests.
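A sketch of such a check (the UA and URL are placeholders): `--print-traffic` makes yt-dlp dump the HTTP traffic it sends, so the `User-Agent` request header can be verified by eye. Echoed here for inspection:

```shell
#!/bin/sh
# Verify the UA is actually sent: --print-traffic dumps yt-dlp's HTTP
# traffic, including request headers. UA and URL are placeholders.
UA='Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/114.0'
URL='https://www.crunchyroll.com/watch/EXAMPLE/example-episode'

# Drop the leading `echo` to run it and watch the sent headers scroll by.
echo yt-dlp --print-traffic --user-agent "\"$UA\"" --cookies-from-browser firefox "$URL"
```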
still can't repro here
$ yt-dlp -F --cookies-from-browser firefox::dlp2 'https://www.crunchyroll.com/pt-br/watch/G0DUNJKE4/scrambling-and-dripping'
[Cookies] Extracting cookies from firefox
[Cookies] Extracted 64 cookies from firefox
[crunchyroll] Extracting URL: https://www.crunchyroll.com/pt-br/watch/G0DUNJKE4/scrambling-and-dripping
[crunchyroll] Retrieving main page (lang=pt-br/)
[crunchyroll] Authenticating with grant_type=etp_rt_cookie
[crunchyroll] G0DUNJKE4: Downloading object info JSON (cms)
[crunchyroll] Retrieving signed policy
[crunchyroll] G0DUNJKE4: Downloading stream info JSON (signed cms)
[crunchyroll] To get all formats of a hardsub language, use "--extractor-args crunchyrollbeta:hardsub=<language_code or all>". See https://github.com/yt-dlp/yt-dlp#crunchyrollbeta-crunchyroll for more info
[crunchyroll] G0DUNJKE4: Downloading adaptive_hls HLS manifest
[crunchyroll] G0DUNJKE4: Downloading chapter info
[info] Available formats for G0DUNJKE4:
ID EXT RESOLUTION FPS │ FILESIZE TBR PROTO │ VCODEC ACODEC MORE INFO
───────────────────────────────────────────────────────────────────────────────────────────────
adaptive_hls-562 mp4 428x240 24 │ ~ 97.46MiB 562k m3u8 │ avc1.42c015 mp4a.40.2 [pt-BR]
adaptive_hls-1092 mp4 640x360 24 │ ~189.39MiB 1093k m3u8 │ avc1.4d401e mp4a.40.2 [pt-BR]
adaptive_hls-2091 mp4 848x480 24 │ ~362.62MiB 2092k m3u8 │ avc1.4d401f mp4a.40.2 [pt-BR]
adaptive_hls-4121 mp4 1280x720 24 │ ~714.46MiB 4122k m3u8 │ avc1.640028 mp4a.40.2 [pt-BR]
adaptive_hls-8108 mp4 1920x1080 24 │ ~ 1.37GiB 8108k m3u8 │ avc1.640028 mp4a.40.2 [pt-BR]
Yeah, I used to use the --user-agent option with Crunchyroll out of habit, since that was the best way to use the old platform, but several months back it started causing a similar 403 Forbidden error. I took it out and it has worked fine since then.
But yeah, I still get the same error/output as OP. It doesn't look like you're using the Windows version? I'm not sure, but if not, maybe you can try it from a Windows cmd shell and see how it goes.
yt-dlp -vU -F --extractor-args "crunchyrollbeta:hardsub=en-US,none" --cookies "cookies-crunchyroll-com.txt" -a "LinksCR.txt"
[debug] Command-line config: ['-vU', '-F', '--extractor-args', 'crunchyrollbeta:hardsub=en-US,none', '--cookies', 'cookies-crunchyroll-com.txt', '-a', 'LinksCR.txt']
[debug] Batch file urls: ['https://www.crunchyroll.com/pt-br/watch/G0DUNJKE4/scrambling-and-dripping']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2023.06.22 [812cdfa06] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg N-106957-g27cffd16aa-20220521 (setts), ffprobe N-106957-g27cffd16aa-20220521
[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.05.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3
[debug] Proxy map: {}
[debug] Loaded 1851 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Available version: stable@2023.06.22, Current version: stable@2023.06.22
Current Build Hash: 37e7ffe204309357cfd1388b0e2c782a30e293ddd0f2761a9a8f6afa185b3566
yt-dlp is up to date (stable@2023.06.22)
[crunchyroll] Extracting URL: https://www.crunchyroll.com/pt-br/watch/G0DUNJKE4/scrambling-and-dripping
[crunchyroll] Retrieving main page (lang=pt-br/)
ERROR: [crunchyroll] G0DUNJKE4: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 708, in extract
File "yt_dlp\extractor\crunchyroll.py", line 320, in _real_extract
File "yt_dlp\extractor\crunchyroll.py", line 114, in _call_api
File "yt_dlp\extractor\crunchyroll.py", line 99, in _call_base_api
File "yt_dlp\extractor\crunchyroll.py", line 69, in _update_query
File "yt_dlp\extractor\common.py", line 1111, in _download_webpage
File "yt_dlp\extractor\common.py", line 1062, in download_content
File "yt_dlp\extractor\common.py", line 896, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 853, in _request_webpage
File "yt_dlp\extractor\common.py", line 835, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 3786, in urlopen
File "urllib\request.py", line 531, in open
File "urllib\request.py", line 640, in http_response
File "urllib\request.py", line 569, in error
File "urllib\request.py", line 502, in _call_chain
File "urllib\request.py", line 649, in http_error_default
urllib.error.HTTPError: HTTP Error 403: Forbidden
Switching from Edge to a clean install of Firefox did solve the problem for me. Just to let you know.
Edit: The usage of --user-agent was necessary to make it work on Firefox! Using --user-agent with Edge did not solve the problem.
I had the same issue yesterday evening; later in the night it fixed itself, only to come back now.
It's hard to tell whether it's Crunchyroll's fault or Cloudflare's, but given the random on/off behavior that doesn't affect everyone, I lean toward Cloudflare.
This looks like a cloudflare issue, and I can't reproduce it. This has happened before, and has been temporary. Hopefully that's the case again.
Something you could try:
1. refresh your CR cookies in browser
2. find your browser's user-agent string; you can type "what is my user-agent" into duckduckgo, google, etc
3. pass your browser's user-agent with `--user-agent "USERAGENT"` along with `--cookies-from-browser`
Thank you very much, this worked for me on Firefox on Linux.
This looks like a cloudflare issue, and I can't reproduce it. This has happened before, and has been temporary. Hopefully that's the case again.
Something you could try:
- refresh your CR cookies in browser
- find your browser's user-agent string; you can type "what is my user-agent" into duckduckgo, google, etc
- pass your browser's user-agent with `--user-agent "USERAGENT"` along with `--cookies-from-browser`
How can I do this in the Linux terminal? Is there any way to copy cookies from another PC to Linux?
How can I do this in the Linux terminal? Is there any way to copy cookies from another PC to Linux?
See the wiki for browser extension recommendations, or you can run `yt-dlp --cookies-from-browser firefox --cookies cookies.txt` on the other PC and it will dump all of its cookies to `cookies.txt` (so be very careful with this file!).
However, if the Linux PC and the other PC do not have the same IP address, the chances of this `--user-agent` + `--cookies` method working drop significantly.
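The two-machine flow described above can be sketched as follows (the UA is a placeholder, and the commands are echoed so they can be inspected; `cookies.txt` holds live session secrets, so guard it like a password):

```shell
#!/bin/sh
# On the source PC with the logged-in browser: dump Firefox's cookies to
# cookies.txt (drop the leading `echo` to actually run it).
echo yt-dlp --cookies-from-browser firefox --cookies cookies.txt

# On the Linux PC, after copying cookies.txt over. UA is a placeholder --
# it must be the SOURCE browser's exact, current user-agent string.
UA='Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/114.0'
echo yt-dlp --cookies cookies.txt --user-agent "\"$UA\"" URL
```

As noted above, if the two machines are not behind the same IP address, this is unlikely to pass the fingerprint check.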
Switching from Edge to a clean install of Firefox did solve the problem for me. Just to let you know.
Edit: The usage of --user-agent was necessary to make it work on Firefox! Using --user-agent with Edge did not solve the problem.
I confirm that using Firefox as the source of cookies and user-agent works. This seems to be a change in how CR is handling incoming requests, probably related to how it decides what is "genuine" and what is not, which may be tripped up by how yt-dlp makes its requests. I have no knowledge of the internals, but at least it's a clue to what the problem could be, now that CR seems to be exploiting this...
I tried to flood it with playlist requests by telling my script to check for new episodes in all titles, to see whether CR bans after some low number of requests (it was 50 the last time I got banned), and that's not the case.
How can I do this on the linux terminal? Is there anyway to copy cookies from other PC to linux?
See the wiki for browser extension recommendations, or you can run `yt-dlp --cookies-from-browser firefox --cookies cookies.txt` on the other PC and it will dump all of its cookies to `cookies.txt` (so be very careful with this file!). However, if the Linux PC and the other PC do not have the same IP address, the chances of this `--user-agent` + `--cookies` method working drop significantly.
It seems it now only works with `--cookies-from-browser firefox`, i.e. using Firefox's cookies directly. When using only username and password (`-u`/`-p`), I always get a 403 error, so the login function seems broken, which means it can't work without a desktop browser.
It also gets a 403 when using cookies from a Firefox browser with a different IP address.
For some context: this 403 issue originally showed up last year, right after the site fully migrated from the old API to the "beta" API. I wrote about it here: https://github.com/yt-dlp/yt-dlp/pull/5294#issuecomment-1289735453 It has popped up a few times since then, but those times it was only a transient/temporary issue. It seems to be more widespread and long-lasting this time.
It's cloudflare fingerprinting. But if you have cookies from a browser that has already "passed" the cloudflare test, you can circumvent the fingerprint check by passing those cookies and the same user-agent as the browser. It's possible that only Firefox has been reported to work because there are additional http header checks done for chromium-based browsers (`Sec-CH-UA` etc maybe?)
For some context: this 403 issue originally showed up last year, right after the site fully migrated from the old API to the "beta" API. I wrote about it here: #5294 (comment) It has popped up a few times since then, but those times it was only a transient/temporary issue. It seems to be more widespread and long-lasting this time.
It's cloudflare fingerprinting. But if you have cookies from a browser that has already "passed" the cloudflare test, you can circumvent the fingerprint check by passing those cookies and the same user-agent as the browser. It's possible that only Firefox has been reported to work because there are additional http header checks done for chromium-based browsers (`Sec-CH-UA` etc maybe?)
I got a warning on the CR site today about merging lists from Funimation. Maybe that's why things are strange. But I also can't rule out a problem in yt-dlp, since yt-dlp can't log in directly anymore while no browser fails to access CR.
yt-dlp can't login directly anymore
This is because of the cloudflare fingerprinting. It's detecting that yt-dlp isn't a browser and blocking it. Hence the need to pass the user-agent and cookies from a browser.
yt-dlp can't login directly anymore
This is because of the cloudflare fingerprinting. It's detecting that yt-dlp isn't a browser and blocking it. Hence the need to pass the user-agent and cookies from a browser.
I tested MrKrabat's KODI plugin (which hasn't been updated for some years now) and it's working fine, and it isn't a browser either. So maybe something else is the problem.
@tamodolo are you still able to download from CR? I've tried downloading Firefox to test with it, but I'm still getting 403.
cmd line: yt-dlp -vU "https://www.crunchyroll.com/pt-br/watch/GJWU2EK7Z/sparkling-fresh" --user-agent {myUserAgent} --cookies-from-browser firefox
It doesn't matter whether it's firefox / chrome / edge; they all give the same issue.
I confirm that using firefox as source of cookies and user-agent works.
Seems after a while it also gets a 403; then you need to open the browser and refresh the page, and it works again.
I suspect that the Crunchyroll cookies have been shortened to be valid for only 30 minutes. I was able to get downloads using `--user-agent "USERAGENT"` along with `--cookies-from-browser firefox`; however, it started to fail about 30 minutes into the downloads.
[download] 100% of 1.38GiB in 00:03:32 at 6.65MiB/s
[Metadata] Adding metadata to "downloads\Wandering_Witch_-_The_Journey_of_Elaina\01\002 - The_Land_of_Mages(8).mp4"
[crunchyroll] Extracting URL: https://www.crunchyroll.com/watch/G14U4E27X/the-girl-as-pretty-as-a-flower--bottled-happiness
[crunchyroll] Authenticating with grant_type=etp_rt_cookie
ERROR: [crunchyroll] G14U4E27X: Unable to download JSON metadata: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
I then refreshed Firefox and played a short video segment, after which it allowed me to download again for 30 minutes before failing once more (approx. 8 episodes). Rinse and repeat.
If you are playing through a recognized browser, it automatically updates the authentication cookie when you try to play a video; yt-dlp does not refresh the site's cookies itself.
I am now limiting myself to 7-8 downloads at a time (batch list, not concurrent).
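That batching routine can be sketched as below, assuming the ~30-minute validity reported above. `queue.txt` (one URL per line) and the UA are placeholders; the yt-dlp command is echoed so the sketch can be inspected before use:

```shell
#!/bin/sh
# Process a batch file in chunks of 8 URLs, pausing between chunks so the
# cookies can be refreshed in the browser (hypothetical workflow based on
# the ~30-minute cookie window observed in this thread).
UA='Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/114.0'
if [ -f queue.txt ]; then
  split -l 8 queue.txt chunk_        # chunk_aa, chunk_ab, ... of 8 lines each
  for f in chunk_*; do
    # Drop the leading `echo` to actually run yt-dlp on this chunk.
    echo yt-dlp --cookies-from-browser firefox --user-agent "\"$UA\"" -a "$f"
    echo "Now refresh a Crunchyroll page in the browser before the next chunk." >&2
  done
  rm -f chunk_*
else
  echo "queue.txt not found; create it with one URL per line." >&2
fi
```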
@tamodolo are you still able to download from CR? I've tried downloading Firefox to test with it, but I'm still getting 403.
cmd line: yt-dlp -vU "https://www.crunchyroll.com/pt-br/watch/GJWU2EK7Z/sparkling-fresh" --user-agent {myUserAgent} --cookies-from-browser firefox
It doesn't matter whether it's firefox / chrome / edge; they all give the same issue.
@BraveSail got a working solution for that. It was the same here until I opened Firefox.
@livelife68 I run 2 downloads at once, but don't dare to try more than that.
edit: the nightly release broke the season alias....
I've got the same problem from Germany with both credential and cookie authentication. On Monday it still worked, so I think, as suggested, that Crunchyroll/Cloudflare changed something.
credentials:
[debug] Command-line config: ['-vU', '-f', 'best', '--user-agent', 'Firefox', '--username', 'PRIVATE', '--password', 'PRIVATE', '--ffmpeg-location', '.\\ffmpeg\\bin\\ffmpeg.exe', '--batch-file', '.\\links.txt']
[debug] Batch file urls: ['https://www.crunchyroll.com/de/watch/GG1U2ZNEV/ray-of-despair-ray-of-hope', 'https://www.crunchyroll.com/de/watch/G7PU4J108/miranjo-and-the-demon--the-prince-and-his-treasures']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2023.06.22 [812cdfa06] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 2022-03-24-git-28d011516b-full_build-www.gyan.dev (setts), ffprobe 2022-03-24-git-28d011516b-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.05.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3
[debug] Proxy map: {}
WARNING: "-f best" selects the best pre-merged format which is often not the best option.
To let yt-dlp download and merge the best available formats, simply do not pass any format selection.
If you know what you are doing and want only the best pre-merged format, use "-f b" instead to suppress this warning
[debug] Loaded 1851 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Available version: stable@2023.06.22, Current version: stable@2023.06.22
Current Build Hash: 37e7ffe204309357cfd1388b0e2c782a30e293ddd0f2761a9a8f6afa185b3566
yt-dlp is up to date (stable@2023.06.22)
[crunchyroll] Getting session id
[crunchyroll] Logging in
[crunchyroll] Extracting URL: https://www.crunchyroll.com/de/watch/GG1U2ZNEV/ray-of-despair-ray-of-hope
[crunchyroll] Retrieving main page (lang=de/)
ERROR: [crunchyroll] GG1U2ZNEV: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 708, in extract
File "yt_dlp\extractor\crunchyroll.py", line 320, in _real_extract
File "yt_dlp\extractor\crunchyroll.py", line 114, in _call_api
File "yt_dlp\extractor\crunchyroll.py", line 99, in _call_base_api
File "yt_dlp\extractor\crunchyroll.py", line 69, in _update_query
File "yt_dlp\extractor\common.py", line 1111, in _download_webpage
File "yt_dlp\extractor\common.py", line 1062, in download_content
File "yt_dlp\extractor\common.py", line 896, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 853, in _request_webpage
File "yt_dlp\extractor\common.py", line 835, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 3786, in urlopen
File "urllib\request.py", line 531, in open
File "urllib\request.py", line 640, in http_response
File "urllib\request.py", line 569, in error
File "urllib\request.py", line 502, in _call_chain
File "urllib\request.py", line 649, in http_error_default
urllib.error.HTTPError: HTTP Error 403: Forbidden
cookies (also tried the `--cookies-from-browser firefox` option):
[debug] Command-line config: ['-vU', '-f', 'best', '--user-agent', 'firefox', '--cookies', '.\\cookies.txt', '--ffmpeg-location', '.\\ffmpeg\\bin\\ffmpeg.exe', '--batch-file', '.\\links.txt']
[debug] Batch file urls: ['https://www.crunchyroll.com/de/watch/GG1U2ZNEV/ray-of-despair-ray-of-hope', 'https://www.crunchyroll.com/de/watch/G7PU4J108/miranjo-and-the-demon--the-prince-and-his-treasures']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2023.06.22 [812cdfa06] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 2022-03-24-git-28d011516b-full_build-www.gyan.dev (setts), ffprobe 2022-03-24-git-28d011516b-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.05.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3
[debug] Proxy map: {}
WARNING: "-f best" selects the best pre-merged format which is often not the best option.
To let yt-dlp download and merge the best available formats, simply do not pass any format selection.
If you know what you are doing and want only the best pre-merged format, use "-f b" instead to suppress this warning
[debug] Loaded 1851 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Available version: stable@2023.06.22, Current version: stable@2023.06.22
Current Build Hash: 37e7ffe204309357cfd1388b0e2c782a30e293ddd0f2761a9a8f6afa185b3566
yt-dlp is up to date (stable@2023.06.22)
[crunchyroll] Extracting URL: https://www.crunchyroll.com/de/watch/GG1U2ZNEV/ray-of-despair-ray-of-hope
[crunchyroll] Retrieving main page (lang=de/)
ERROR: [crunchyroll] GG1U2ZNEV: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 708, in extract
File "yt_dlp\extractor\crunchyroll.py", line 320, in _real_extract
File "yt_dlp\extractor\crunchyroll.py", line 114, in _call_api
File "yt_dlp\extractor\crunchyroll.py", line 99, in _call_base_api
File "yt_dlp\extractor\crunchyroll.py", line 69, in _update_query
File "yt_dlp\extractor\common.py", line 1111, in _download_webpage
File "yt_dlp\extractor\common.py", line 1062, in download_content
File "yt_dlp\extractor\common.py", line 896, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 853, in _request_webpage
File "yt_dlp\extractor\common.py", line 835, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 3786, in urlopen
File "urllib\request.py", line 531, in open
File "urllib\request.py", line 640, in http_response
File "urllib\request.py", line 569, in error
File "urllib\request.py", line 502, in _call_chain
File "urllib\request.py", line 649, in http_error_default
urllib.error.HTTPError: HTTP Error 403: Forbidden
Now I can only open Firefox, log in to Crunchyroll, and then use yt-dlp with `--cookies-from-browser firefox` from the same IP address to avoid the 403 error. And it seems downloads only work normally for a short time.
My Firefox user-agent string was out of date by a version. Updating it to match the browser the cookies came from solved it.
So I am on Windows 10 with yt-dlp version 2023.06.22, using the following command (cookies in Firefox already removed and renewed for Crunchyroll):
.\yt-dlp.exe -vU -f best --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/114.0" --cookies-from-browser firefox --cookies .\auto_cookies.txt --ffmpeg-location .\ffmpeg\bin\ffmpeg.exe --batch-file .\links.txt
and I'm trying to download some German-dubbed episodes and getting the following error:
ERROR: [crunchyroll] GG1U2ZNEV: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 708, in extract
File "yt_dlp\extractor\crunchyroll.py", line 320, in _real_extract
File "yt_dlp\extractor\crunchyroll.py", line 114, in _call_api
File "yt_dlp\extractor\crunchyroll.py", line 99, in _call_base_api
File "yt_dlp\extractor\crunchyroll.py", line 69, in _update_query
File "yt_dlp\extractor\common.py", line 1111, in _download_webpage
File "yt_dlp\extractor\common.py", line 1062, in download_content
File "yt_dlp\extractor\common.py", line 896, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 853, in _request_webpage
File "yt_dlp\extractor\common.py", line 835, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 3786, in urlopen
File "urllib\request.py", line 531, in open
File "urllib\request.py", line 640, in http_response
File "urllib\request.py", line 569, in error
File "urllib\request.py", line 502, in _call_chain
File "urllib\request.py", line 649, in http_error_default
urllib.error.HTTPError: HTTP Error 403: Forbidden
Anybody got any suggestions what the problem could be?
Update:
Ok, it works now, but not with the `--cookies-from-browser firefox` option. I've extracted the cookies with a browser add-on in Firefox and used that cookie file, and that worked. So the command that worked for me:
.\yt-dlp.exe -vU -f best --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/114.0" --cookies .\cookies.txt --ffmpeg-location .\ffmpeg\bin\ffmpeg.exe --batch-file .\links.txt
@moeter87-real Could you test this method after half an hour or an hour without renewing the cookie manually?
Update:
Ok it works now but not with the `--cookies-from-browser firefox` option. I've extracted the cookies with a browser addon on Firefox and used that cookie file and that worked. So the command that worked for me:
Same here. I use the cookies.txt add-on, log in to Crunchyroll, open a stream and play it for five seconds. Then I export the cookies with the add-on and start the download:
yt-dlp --cookies cookies.txt \
  --user-agent "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" \
  --write-description \
  --write-thumbnail \
  --sub-langs "de-DE" \
  --extractor-args "crunchyrollbeta:hardsub=de-DE" $URL1 $URL2 $URL3
In a second window I tried the same. And: wow, this works.
moeter87-real: you are my hero. The tip with the add-on helped. Thank you.
EDIT: After 30 minutes you get ERROR 403 again. After a refresh it works.
I used the "--user-agent" of my browser in the command, and it worked correctly! Thank you for the tips.
So i am on Windows 10 on Version 2023.06.22 of yt-dlp using following command (cookies on Firefox already remove and renewed for crunchyroll):
.\yt-dlp.exe -vU -f best --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/114.0" --cookies-from-browser firefox --cookies .\auto_cookies.txt --ffmpeg-location .\ffmpeg\bin\ffmpeg.exe --batch-file .\links.txt
I don't use `--cookies-from-browser`, but if you do, aren't you meant to omit `--cookies`?
Update:
EDIT: After 30 Minutes you get ERROR 403 again. After refresh it works.
Confirmed, but not only that: it seems the cookies get updated every time a new video is played.
Strange, the error occurred again. I thought it was resolved just by using `--user-agent`; I have no idea what it could be.
Strange, the error occurred again. I thought it was resolved just by using `--user-agent`; I have no idea what it could be.
You need to refresh the cookies directly in Firefox: open something and it'll work again. yt-dlp would need to implement this refresh to work without that step.
It worked when I did it the way you mentioned. However, for me it only works if I use the same kind of user agent that was used to generate the cookies. In other words, cookies from Firefox only worked for me with a Firefox `--user-agent`, whereas using an Edge agent resulted in an error 🤔
To clarify for everyone: Crunchyroll has changed access to the videos. You need a cookie that is at most 30 minutes old and includes your login to the page. The cookie itself is updated every time you open a CR website in your browser. If you open a link with yt-dlp, the cookies aren't updated, because yt-dlp is not a browser itself and just uses the cookies of the browser you give in the options. And at least on my system, I have to use Firefox at the moment, because Edge does not work, even if I use --cookies-from-browser edge and the correct --user-agent option for my browser.
I don't think this can easily be changed in yt-dlp to work like before. yt-dlp isn't a full-featured web browser, and I wonder if it will be possible at all for yt-dlp to keep the cookie updated.
So as far as I can see, there's only one option to download from CR at this time:
- use Firefox
- have an up-to-date cookie for the CR website (at most 30 minutes old)
- use --cookies-from-browser firefox
- use --user-agent "..."
- the value of the --user-agent option must match the browser you generated the cookies with
If you know any other working scenario, feel free to report it here.
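For anyone scripting this, the steps above can be sketched as a small Python wrapper that assembles the yt-dlp invocation. The user-agent string and URL below are placeholders; substitute your own browser's full UA string and the video page you want:

```python
import subprocess

# Hypothetical values: replace with your browser's actual, current UA
# string and the Crunchyroll URL you want to download.
USER_AGENT = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:115.0) "
              "Gecko/20100101 Firefox/115.0")
URL = "https://www.crunchyroll.com/watch/GX9UQZE17/"

def build_command(url, user_agent, browser="firefox"):
    """Assemble the yt-dlp invocation described in the steps above."""
    return [
        "yt-dlp",
        "--cookies-from-browser", browser,  # cookies must be < 30 min old
        "--user-agent", user_agent,         # must match the cookie browser
        url,
    ]

cmd = build_command(URL, USER_AGENT)
# subprocess.run(cmd, check=True)  # uncomment to actually run the download
```

The browser defaults to firefox since, per the reports in this thread, Edge does not currently work even with matching cookies and UA.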
Or yt-dlp could implement the console API instead. But the cat-and-mouse game would probably make it stop working for MrKrabat's unofficial Kodi plugin. Also, I don't know if it's possible to download soft subs with this method.
To be fair, CR is a very unreliable service. If it just worked at any time, I would be much less inclined to use a method like this just to be sure I'll be able to watch something. Also, not being able to set a fixed resolution, or change it during the stream, makes me mad. And more often than not, CR will send 720p even when 1080p is possible, just because it's cheaper for them. That's my personal reason to give CR the middle finger. If at some point they give me a way to watch in glorious max quality, then I probably won't use yt-dlp anymore.
Hello,
I found a workaround.
I tried with ytdl-patched version stable@2023.06.11.1686453250 (and 2023.07.02.1688268193):
ytdl-patched --cookies-from-browser firefox --socket-timeout 120 --cookies cookies.txt -U https://www.crunchyroll.com/fatestrange-fake-whispers-of-dawn-/fatestrange-fake-whispers-of-dawn-inconnue-904607 --no-download -s --user-agent 'Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0' --no-update
[Cookies] Extracting cookies from firefox
[Cookies] Extracted 32 cookies from firefox
[generic] Extracting URL: https://www.crunchyroll.com/fr/fatestrange-fake-whispers-of-dawn-/fatestrange-fake-whispers-of-dawn-inconnue-904607
[generic] fatestrange-fake-whispers-of-dawn-inconnue-904607: Downloading webpage
[redirect] Following redirect to https://www.crunchyroll.com/watch/GX9UQZE17/
[crunchyroll] Extracting URL: https://www.crunchyroll.com/watch/GX9UQZE17/
[crunchyroll] Retrieving main page (lang=None)
[crunchyroll] Authenticating with grant_type=etp_rt_cookie
[crunchyroll] GX9UQZE17: Downloading object info JSON (cms)
[crunchyroll] Retrieving signed policy
[crunchyroll] GX9UQZE17: Downloading stream info JSON (signed cms)
[crunchyroll] To get all formats of a hardsub language, use "--extractor-args crunchyrollbeta:hardsub=<language_code or all>". See https://github.com/yt-dlp/yt-dlp#crunchyrollbeta-crunchyroll for more info
[crunchyroll] GX9UQZE17: Downloading adaptive_hls HLS manifest
[crunchyroll] GX9UQZE17: Downloading chapter info
[info] GX9UQZE17: Downloading 1 format(s): adaptive_hls-8100
But if I try with http://, it doesn't work:
[Cookies] Extracting cookies from firefox
[Cookies] Extracted 32 cookies from firefox
[generic] Extracting URL: http://www.crunchyroll.com/fr/fatestrange-fake-whispers-of-dawn-/fatestrange-fake-whispers-of-dawn-inconnue-904607
[generic] fatestrange-fake-whispers-of-dawn-inconnue-904607: Downloading webpage
ERROR: [generic] None: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://github.com/ytdl-patched/ytdl-patched/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
For testing purposes you can try:
sudo curl -L https://github.com/ytdl-patched/ytdl-patched/releases/latest/download/ytdl-patched -o /usr/local/bin/ytdl-patched
sudo chmod a+rx /usr/local/bin/ytdl-patched
Notes:
@ArturiaPendragon What is this ytdl-patched you're using?
@tamodolo ytdlp-patched 2023.07.02.1688268193 or 2023.06.11.1686453250 works for me.
Found here : https://github.com/ytdl-patched/ytdl-patched/releases/tag/2023.07.02.1688268193
Thanks! But just to be sure, I didn't understand what the workaround you found is for, since https is already the default behavior in yt-dlp. Is it for the 30-minute cookie limit, or for something else?
By default I use Crunchyroll's RSS feed, and its links are http, not https. When I tried with https + ytdl-patched with --cookies-from-browser firefox --cookies cookies.txt (and the right user-agent and the same IP), it worked again for me. (So that's my workaround for 403 Forbidden ;))
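Since the RSS feed hands out http:// links but only https:// requests succeed, a tiny helper can upgrade the scheme before passing URLs to yt-dlp. This is a minimal sketch of that idea; `force_https` is a hypothetical name, not part of yt-dlp:

```python
def force_https(url):
    """Upgrade an http:// URL to https:// (Crunchyroll's RSS feed
    links are http, but only https requests avoid the 403)."""
    if url.startswith("http://"):
        return "https://" + url[len("http://"):]
    return url
```

For example, `force_https("http://www.crunchyroll.com/...")` would return the same URL with an `https://` scheme, while already-https URLs pass through unchanged.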
Use cookies and user agent from a legit browser. That worked for me.
Will anything be done about this? Or can anything be done?
As for a no-input workaround, I just added a step to my script that opens Firefox on the CR main site. It works, but it's not ideal. Just a dirty patch.
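That dirty patch can be sketched roughly like this: open the Crunchyroll home page in a real browser before each batch of downloads, so the browser refreshes the session cookies before they go stale. The function name and structure are illustrative, not from anyone's actual script:

```python
import time
import webbrowser

CR_HOME = "https://www.crunchyroll.com/"

def run_with_fresh_cookies(batches, opener=webbrowser.open, pause=5):
    """Before each batch of downloads, visit the CR home page in a real
    browser so the session cookies get refreshed (they expire after
    roughly 30 minutes), then run the batch."""
    for batch in batches:
        opener(CR_HOME)    # a real browser visit refreshes the cookies
        time.sleep(pause)  # give the page a moment to load
        batch()            # e.g. a function that invokes yt-dlp on a queue
```

The `opener` and `pause` parameters just make the helper easy to substitute or speed up; by default it opens the system's default browser, which should be the same one passed to --cookies-from-browser.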
What about the ytdl-patched workaround?
They made one? AFAIK it works the same way and has the same problem as master.
My bad, I just read that it worked but didn't try it. Anyway, the Selenium trick of reopening the CR site did not work for me.
You need to use Firefox as the browser and pass the user agent. Something like this:
--cookies-from-browser firefox --user-agent "put it here, quotes included". You can get your user agent on DuckDuckGo by asking "what is my user agent".
I know, I've already been using --cookies-from-browser and --user-agent since last year, but after 8 downloads or 30 minutes I need to refresh.
Yeah, that's why I put that in my script. I finish long queues with Firefox filled with CR tabs.
For me it's been working fine with ytdl-patched (it has worked for the last few days).
from curl_cffi import requests

# Impersonating a real Chrome build's TLS/HTTP fingerprint gets past
# Cloudflare's browser detection; `url` is the page you want to fetch.
requests.get(
    url,
    impersonate="chrome110",
)
Setting impersonate like this when making requests resolves the issue for me.
Can you please elaborate on how/where exactly you perform this setting?
We have discussed the possibility of adding support for curl-impersonate sometime in the future. It requires #2861 and some other things to be done first, though.
Another way is to request the API through beta-api.crunchyroll.com without being intercepted (but this method can only access the API).
Region
Brazil
Provide a description that is worded well enough to be understood
CR probably broke the CR plugin... again.
I tried to download lists to grab links for new episodes, only to get a 403 error. Downloading episodes also doesn't work anymore.
The Kodi plugin is working fine. The website is working fine. I tested the PS4 app and it's also working fine.