Closed ElektroStudios closed 1 year ago
Try --user-agent "Mozilla/5.0":
$ python3 -m youtube_dl -v -F 'https://www.crunchyroll.com/es-es/hundred/episode-1-little-garden-702479'
[debug] System config: ['--prefer-ffmpeg']
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', '-F', 'https://www.crunchyroll.com/es-es/hundred/episode-1-little-garden-702479']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2021.12.17
[debug] Git HEAD: 5860937e1
[debug] Python version 3.5.2 (CPython) - Linux-4.4.0-210-generic-i686-with-Ubuntu-16.04-xenial
[debug] exe versions: avconv 4.3, avprobe 4.3, ffmpeg 4.3, ffprobe 4.3
[debug] Proxy map: {}
[crunchyroll] 702479: Downloading webpage
ERROR: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
...
$ python3 -m youtube_dl -v -F 'https://www.crunchyroll.com/es-es/hundred/episode-1-little-garden-702479' --user-agent 'Mozilla/5.0'
[debug] System config: ['--prefer-ffmpeg']
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', '-F', 'https://www.crunchyroll.com/es-es/hundred/episode-1-little-garden-702479', '--user-agent', 'Mozilla/5.0']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2021.12.17
[debug] Git HEAD: 5860937e1
[debug] Python version 3.5.2 (CPython) - Linux-4.4.0-210-generic-i686-with-Ubuntu-16.04-xenial
[debug] exe versions: avconv 4.3, avprobe 4.3, ffmpeg 4.3, ffprobe 4.3
[debug] Proxy map: {}
[crunchyroll] 702479: Downloading webpage
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-esLA information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-itIT information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-enUS information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-ptBR information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-esES information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-deDE information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-frFR information
[crunchyroll] 702479: Downloading media info
WARNING: Unable to download XML: HTTP Error 404: Not Found
[info] Available formats for 702479:
format code extension resolution note
adaptive_hls-audio-jaJP-hardsub-esLA-561-0 mp4 428x240 [jaJP] 561k , avc1.42c015, 23.974fps, mp4a.40.2
...
adaptive_hls-audio-jaJP-8106-1 mp4 1920x1080 [jaJP] 8106k , avc1.640028, 23.974fps, mp4a.40.2 (best)
$
Some CDN gatekeeper like Cloudflare is breaking the web by blocking on user-agent strings, before even looking at your cookies. Fortunately, the latest specifications favour not sending an explicit user-agent string, and they don't come less explicit than Mozilla/5.0.
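The effect of that minimal string can be seen with a small Python sketch: urllib (the stack youtube-dl's HTTP requests go through) otherwise advertises itself as `Python-urllib/3.x`, which is exactly the kind of string such gatekeepers key on. The URL is just the one from this thread.

```python
import urllib.request

# urllib's default User-Agent is "Python-urllib/3.x", which identifies the
# client as a script; a CDN gatekeeper can reject that before it ever looks
# at cookies. Overriding it with the minimal "Mozilla/5.0" sidesteps that.
req = urllib.request.Request(
    'https://www.crunchyroll.com/es-es/hundred',
    headers={'User-Agent': 'Mozilla/5.0'},
)
# urllib stores header names capitalised, hence 'User-agent' here.
print(req.get_header('User-agent'))  # → Mozilla/5.0
```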
Try --user-agent "Mozilla/5.0":
Thanks for the answer. It still gets HTTP Error 403: Forbidden.
@dirkf In your opinion, and only as a temporary workaround, could it be a matter of finding a working / non-blocked user-agent string?
Yes. Try being an old iPhone.
I tried that user agent and a couple more found here (I did not test them all). Error 403 all the time.
Did you use a VPN or a proxy? Some of these servers are on a Cloudflare IP blocklist and will always get a forbidden response. If so, try accessing one video and, if you succeed, try another VPN server. Also, you should not need your cookies if you want to download a free, no-login show.
Regarding the User-Agent: I suggest using the same User-Agent string as the browser you exported the cookies from. Simply type "my user agent" into Google and you should get the correct user agent.
Thanks for the answer.
I'm not behind any kind of proxy or VPN. I'm not using Wi-Fi; I'm on a home network using a single cable router. I don't have a firewall or antivirus (the Windows built-in firewall and AV are completely disabled). Nothing that could conflict with my connectivity.
My IP is not blocked, or at least it is not blocked when I navigate to the Crunchyroll website using my web browser, or using a CefSharp Chromium-based browser in a .NET application. I don't know whether Crunchyroll / Cloudflare is blocking my IP only when I try to access it via youtube-dl (and yt-dlp). But if that is the case, what could I do? I have a dynamic IP.
With youtube-dl I tried to use exactly the same user agent reported by Google, which is this:
Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:101.0) Gecko/20100101 Firefox/101.0
As you said, I didn't need to use cookies to download those 12 episodes with youtube-dl, but I thought the use of cookies might be necessary to avoid this error 403, which I started seeing after I had downloaded many videos and which has stopped me from downloading any video with youtube-dl since.
This is surely some kind of blocking, because the first time I tried yt-dlp (while searching for a solution to the 403 error I had started getting with youtube-dl), it gave me the same 403 error.
I only want my youtube-dl.exe to work again with Crunchyroll, but the devs could take a lot of useful info from here to improve the application in the different ways I described in the comment above.
If you repeatedly get 403 with yt-dl while the same page displays successfully in your browser, a HAR archive of the browser's connection could be useful (you can generate one with the browser's developer tools). You may wish to sanitise any personal information before posting it on GH.
I had never heard of that term before. Now I have learned how to generate the required HAR file:
Use Ctrl-Shift-E to open the Network tab in Firefox, or use Menu > Web Developer > Network to open it. Load the webpage or site that you want to create a HAR file for. Click on the settings icon in the Developer's Tools bar (that looks like a Sun), and select Save all as HAR to export the data.
I just have one question: if I follow the steps described in the quote (without opening any other webpage during the procedure), will the HAR file only contain data for the Crunchyroll website? You advised me to sanitise it before publishing, but when I generated the HAR file I didn't find anything that needed to be "sanitised". I think I'm safe to publish it, but it's a huge file of 35 megabytes of text, and I might miss personal information within those lines.
Thanks in advance.
I hope so. Some people care about their IP address. Some site cookies may contain personal data, or a username/password may be there if you logged in to the site. Try searching the file for strings you don't want in it.
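That search-and-sanitise step can be scripted, since a HAR file is plain JSON with request/response pairs under log.entries (field names here follow the HAR 1.2 layout). A minimal sketch that blanks out cookie and auth headers before sharing:

```python
import json

# Header names whose values should never leave your machine.
SENSITIVE = ('cookie', 'set-cookie', 'authorization')

def redact_har(har_text):
    """Blank out cookie/auth headers and cookie lists in a HAR export
    so it can be shared publicly. HAR is plain JSON: entries live
    under log.entries, each with a request and a response object."""
    har = json.loads(har_text)
    for entry in har['log']['entries']:
        for side in (entry.get('request', {}), entry.get('response', {})):
            for header in side.get('headers', []):
                if header['name'].lower() in SENSITIVE:
                    header['value'] = 'REDACTED'
            side['cookies'] = []  # per-entry parsed cookie list
    return json.dumps(har, indent=2)
```

Read the exported file, pass its text through `redact_har`, and write the result to a new file before uploading; IP addresses, if present in `serverIPAddress` fields, would need a similar pass.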
I'll share two HAR files.
First HAR: https://mega.nz/file/HgEhxQrR#yqXp8sCiHQbUqdWNP88b_AtilXw1JWl7QI4vKc9piZU
Second HAR: https://mega.nz/file/ip9HDbpC#1YItKrxskaHL0NL3vqAQKnIlldb9bg5Z3TBWpBINqVs
The first HAR file was generated after clearing all cookies and cache in my web browser, to record what happens when I navigate to the Crunchyroll website for the "first time": Cloudflare asked me to solve a reCaptcha; I solved it and the Crunchyroll page loaded; I accepted the Crunchyroll cookies and the page asked to be reloaded; I reloaded it, waited a few seconds until the GET requests stopped, and finally generated the HAR file.
In this first HAR file you can find this at line 167:
"response": {
"status": 403,
"statusText": "Forbidden",
For the second HAR file I just navigated to the Crunchyroll website as normal (without clearing cookies). No reCaptcha was requested. As before, I waited a few seconds until the GET requests stopped and then generated the HAR file.
This second HAR file does not contain that 403 error, because I didn't clear the cookies in my web browser, so Cloudflare didn't pop up in my face asking me to solve a reCaptcha.
Is it possible that when I attempt to use youtube-dl to access Crunchyroll it throws error 403 because Cloudflare is requesting that a reCaptcha be solved, and youtube-dl is not aware of and not handling this request in any way, so it just exits on receiving the 403 response?
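One way to probe that guess: a Cloudflare challenge page carries recognisable markers in its 403 body, so inspecting the body of the failed response can distinguish a challenge from a plain refusal. The marker strings below are assumptions based on typical challenge pages, not an official list:

```python
def looks_like_cf_challenge(body):
    """Heuristic: does a 403 response body look like a Cloudflare
    challenge page rather than a plain block? The marker strings are
    assumed from typical challenge pages, not any documented API."""
    markers = ('Attention Required!', 'cf-challenge', '__cf_chl')
    return any(marker in body for marker in markers)
```

Catching the HTTPError from youtube-dl's request, reading its body, and passing it through this check would show whether the 403 is really a captcha interstitial.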
The cookie file passed with the --cookies parameter is supposed to be enough to avoid this, am I right? I solved the reCaptcha in the browser from which I generated the cookies file, but clearly the cookies file is not enough for me, and I wonder whether the reason is that, as you or someone else said before, this blocking occurs before the cookies are even looked at. In my web browser, though, the blocking does not occur.
Please, developers, advise when the HAR files are no longer needed, so I can remove them from MEGA in case they contain personal information I may have missed.
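On the --cookies question above: youtube-dl parses that file with Python's stdlib MozillaCookieJar, so loading the export the same way is a quick check that the file itself is well-formed. The cookie below is made up for illustration, not real Crunchyroll data:

```python
import http.cookiejar
import os
import tempfile

# A minimal Netscape-format cookies.txt: magic header, then one line of
# seven tab-separated fields (domain, include-subdomains flag, path,
# secure flag, expiry, name, value). The values are invented.
sample = (
    '# Netscape HTTP Cookie File\n'
    '.crunchyroll.com\tTRUE\t/\tTRUE\t2147483647\tsession_id\tabc123\n'
)
path = os.path.join(tempfile.mkdtemp(), 'cookies.txt')
with open(path, 'w') as f:
    f.write(sample)

# This is the same parser youtube-dl uses for --cookies; a LoadError
# here means the exported file is malformed.
jar = http.cookiejar.MozillaCookieJar(path)
jar.load(ignore_discard=True, ignore_expires=True)
for cookie in jar:
    print(cookie.domain, cookie.name)  # → .crunchyroll.com session_id
```

If the export loads cleanly but 403 persists, that supports the idea that the block happens before the cookies are consulted.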
Now I can say that Cloudflare's block, which only affects youtube-dl HTTP requests, lasted about 24 hours.
At this point I can only suggest that users set an interval / sleep when using youtube-dl (and yt-dlp) with the Crunchyroll website, delaying consecutive downloads by a minute or so, because the block is severe: "excessive" downloading gets you HTTP error 403 for at least an entire day.
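That suggestion as a minimal sketch (the command line and the 60-second default are placeholders to adjust; the `runner` parameter is only there so the loop can be exercised without touching the network):

```python
import subprocess
import time

def download_with_delay(urls, delay_seconds=60, runner=subprocess.call):
    """Invoke youtube-dl once per URL, pausing between invocations so the
    CDN never sees a burst of requests. `runner` defaults to actually
    spawning the process and is injectable for testing."""
    for i, url in enumerate(urls):
        runner(['youtube-dl', url])
        if i < len(urls) - 1:      # no need to sleep after the last one
            time.sleep(delay_seconds)
```

Calling `download_with_delay(list_of_episode_urls)` would then space the downloads a minute apart.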
I moved on to yt-dlp because it implements useful built-in interval features which, as far as I know, youtube-dl has not implemented.
I shared a batch-script code example to try to avoid this block in the future using yt-dlp, here.
Thanks for reading.
I was able to download an episode by opening dev tools > Network, typing mpd in the search box, and using that link to download the episodes.
Actually, this issue appears to be a duplicate of #28398.
I still find that the problem video can be downloaded with yt-dl options like this:
$ python -m youtube_dl --ignore-config -f worst -v --test 'https://www.crunchyroll.com/es-es/hundred/episode-1-little-garden-702479' --user-agent 'Mozilla/5.0'
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'--ignore-config', u'-f', u'worst', u'-v', u'--test', u'https://www.crunchyroll.com/es-es/hundred/episode-1-little-garden-702479', u'--user-agent', u'Mozilla/5.0']
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2021.12.17
[debug] Git HEAD: cc179df34
[debug] Python version 2.7.17 (CPython) - Linux-4.4.0-210-generic-i686-with-Ubuntu-16.04-xenial
[debug] exe versions: avconv 4.3, avprobe 4.3, ffmpeg 4.3, ffprobe 4.3
[debug] Proxy map: {}
[crunchyroll] 702479: Downloading webpage
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-esLA information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-itIT information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-ptBR information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-frFR information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-deDE information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-esES information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP-hardsub-enUS information
[crunchyroll] 702479: Downloading adaptive_hls-audio-jaJP information
[crunchyroll] 702479: Downloading media info
WARNING: Unable to download XML: HTTP Error 404: Not Found
[debug] Invoking downloader on u'https://pl.crunchyroll.com/evs3/60e5445929735fef8fdd87d45344eb5f/assets/5rnd69zfeazg489_2629667.mp4/index-v1-a1.m3u8?Expires=1655248940&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9wbC5jcnVuY2h5cm9sbC5jb20vZXZzMy82MGU1NDQ1OTI5NzM1ZmVmOGZkZDg3ZDQ1MzQ0ZWI1Zi9hc3NldHMvNXJuZDY5emZlYXpnNDg5XyoubTN1OD8qIiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNjU1MjQ4OTQwfX19XX0_&Signature=IYahKfXwAqcCe~W9kjtu8N5SwHV3b2r-4h1u3LvPs-FQ3dAsdOi-xJOrlggRcURVmbza--1dn9Kv2DCvB26Gb9S94uKlIBX6R9aGv8N~TVzQVLKDaJnDGjV6wgSSxC3VbVTA2UHXw~K6Vz4N8f1CFOvelZ9y2xXnLyjZxDhvuKLgUpOdqfAZCIQEknPUeugodhNUGzgQe1hQhmqgwGC~rViHFlOg0PbDx9SsLRBqAfUyoL1ktaI-tgP1c3JG3YGey0QJzAyZvFimPK~79YrpqmRYSZ6ReWIfw3FoNJkM-qshI-edq22lWCEPBdaiFmAWyv8C4Fkf8ZixV76HAmNy5w__&Key-Pair-Id=APKAJMWSQ5S7ZB3MF5VA&cdn=ll-prod'
[download] Destination: Hundred Episodio 1 – Little Garden-702479.mp4
[debug] ffmpeg command line: ffmpeg -y -loglevel verbose -headers 'Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip, deflate
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Cookie: __cf_bm=BQ6RGRZIAmmvC0Y5B2X1bQSxc5AUc56rTauBQ.EqCaU-1655076135-0-AVdX78s0/1+rUx53PidjQ96nMpIIlvZWKQv7IGamXboQDwjSPQWz72oHU8IbzglC0i449lP5/9wKPnbclx+w1Lqhh4q9qhZ/MpiZR3Osbnr4; c_visitor=8a908eb8-a666-405d-8e57-93a60a462aba; session_id=faec8bb06bfe86bba3ad0e28ac596aa1
' -i 'https://pl.crunchyroll.com/evs3/60e5445929735fef8fdd87d45344eb5f/assets/5rnd69zfeazg489_2629667.mp4/index-v1-a1.m3u8?Expires=1655248940&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9wbC5jcnVuY2h5cm9sbC5jb20vZXZzMy82MGU1NDQ1OTI5NzM1ZmVmOGZkZDg3ZDQ1MzQ0ZWI1Zi9hc3NldHMvNXJuZDY5emZlYXpnNDg5XyoubTN1OD8qIiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNjU1MjQ4OTQwfX19XX0_&Signature=IYahKfXwAqcCe~W9kjtu8N5SwHV3b2r-4h1u3LvPs-FQ3dAsdOi-xJOrlggRcURVmbza--1dn9Kv2DCvB26Gb9S94uKlIBX6R9aGv8N~TVzQVLKDaJnDGjV6wgSSxC3VbVTA2UHXw~K6Vz4N8f1CFOvelZ9y2xXnLyjZxDhvuKLgUpOdqfAZCIQEknPUeugodhNUGzgQe1hQhmqgwGC~rViHFlOg0PbDx9SsLRBqAfUyoL1ktaI-tgP1c3JG3YGey0QJzAyZvFimPK~79YrpqmRYSZ6ReWIfw3FoNJkM-qshI-edq22lWCEPBdaiFmAWyv8C4Fkf8ZixV76HAmNy5w__&Key-Pair-Id=APKAJMWSQ5S7ZB3MF5VA&cdn=ll-prod' -c copy -fs 10241 -f mp4 'file:Hundred Episodio 1 – Little Garden-702479.mp4.part'
ffmpeg version 4.3-2ubuntu0~ppa16.04+8 Copyright (c) 2000-2020 the FFmpeg developers
built with gcc 5.4.0 (Ubuntu 5.4.0-6ubuntu1~16.04.12) 20160609
...
libavutil 56. 51.100 / 56. 51.100
libavcodec 58. 91.100 / 58. 91.100
libavformat 58. 45.100 / 58. 45.100
libavdevice 58. 10.100 / 58. 10.100
libavfilter 7. 85.100 / 7. 85.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 7.100 / 5. 7.100
libswresample 3. 7.100 / 3. 7.100
libpostproc 55. 7.100 / 55. 7.100
[tcp @ 0x24d8000] Starting connection attempt to 108.156.28.104 port 443
[tcp @ 0x24d8000] Successfully connected to 108.156.28.104 port 443
[hls @ 0x24d4040] Skip ('#EXT-X-ALLOW-CACHE:YES')
[hls @ 0x24d4040] Skip ('#EXT-X-VERSION:5')
[hls @ 0x24d4040] HLS request for url 'https://ll.v.vrv.co/evs3/60e5445929735fef8fdd87d45344eb5f/assets/5rnd69zfeazg489_2629667.mp4/seg-1-v1-a1.ts?t=exp=1655248955~acl=/evs3/60e5445929735fef8fdd87d45344eb5f/assets/5rnd69zfeazg489_2629667.mp4/*~hmac=0bda640f4431e5c603dd937a0f859df1bacae670e2574068627d597aa7c8a5b9', offset 0, playlist 0
[hls @ 0x24d4040] Opening 'https://ll.v.vrv.co/evs3/60e5445929735fef8fdd87d45344eb5f/assets/5rnd69zfeazg489_2629667.mp4/encryption.key?t=exp=1655248955~acl=/evs3/60e5445929735fef8fdd87d45344eb5f/assets/5rnd69zfeazg489_2629667.mp4/*~hmac=0bda640f4431e5c603dd937a0f859df1bacae670e2574068627d597aa7c8a5b9' for reading
[tcp @ 0x2782700] Starting connection attempt to 87.248.214.8 port 443
[tcp @ 0x2782700] Successfully connected to 87.248.214.8 port 443
[AVIOContext @ 0x29f39c0] Statistics: 16 bytes read, 0 seeks
[hls @ 0x24d4040] Opening 'crypto+https://ll.v.vrv.co/evs3/60e5445929735fef8fdd87d45344eb5f/assets/5rnd69zfeazg489_2629667.mp4/seg-1-v1-a1.ts?t=exp=1655248955~acl=/evs3/60e5445929735fef8fdd87d45344eb5f/assets/5rnd69zfeazg489_2629667.mp4/*~hmac=0bda640f4431e5c603dd937a0f859df1bacae670e2574068627d597aa7c8a5b9' for reading
[tcp @ 0x2723e00] Starting connection attempt to 87.248.214.8 port 443
[tcp @ 0x2723e00] Successfully connected to 87.248.214.8 port 443
[h264 @ 0x279ea80] Reinit context to 432x240, pix_fmt: yuv420p
Input #0, hls, from '...':
Duration: 00:24:05.90, start: 0.101000, bitrate: 0 kb/s
Program 0
Metadata:
variant_bitrate : 0
Stream #0:0: Video: h264 (Constrained Baseline), 1 reference frame ([27][0][0][0] / 0x001B), yuv420p(left), 428x240 (432x240) [SAR 320:321 DAR 16:9], 23.98 fps, 23.98 tbr, 90k tbn, 47.95 tbc
Metadata:
variant_bitrate : 0
Stream #0:1: Audio: aac (LC) ([15][0][0][0] / 0x000F), 22050 Hz, stereo, fltp
Metadata:
variant_bitrate : 0
Stream #0:2: Data: timed_id3 (ID3 / 0x20334449)
Metadata:
variant_bitrate : 0
Output #0, mp4, to 'file:Hundred Episodio 1 – Little Garden-702479.mp4.part':
Metadata:
encoder : Lavf58.45.100
Stream #0:0: Video: h264 (Constrained Baseline), 1 reference frame (avc1 / 0x31637661), yuv420p(left), 428x240 (0x0) [SAR 320:321 DAR 16:9], q=2-31, 23.98 fps, 23.98 tbr, 90k tbn, 90k tbc
Metadata:
variant_bitrate : 0
Stream #0:1: Audio: aac (LC) (mp4a / 0x6134706D), 22050 Hz, stereo, fltp
Metadata:
variant_bitrate : 0
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
Automatically inserted bitstream filter 'aac_adtstoasc'; args=''
No more output streams to write to, finishing.
Not writing 'clli' atom. No content light level info.
Not writing 'mdcv' atom. Missing mastering metadata.
frame= 6 fps=0.0 q=-1.0 Lsize= 24kB time=00:00:00.20 bitrate= 926.2kbits/s speed= 767x
video:21kB audio:2kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 6.398449%
Input file #0 (...):
Input stream #0:0 (video): 6 packets read (21012 bytes);
Input stream #0:1 (audio): 4 packets read (1681 bytes);
Input stream #0:2 (data): 0 packets read (0 bytes);
Total: 10 packets (22693 bytes) demuxed
Output file #0 (file:Hundred Episodio 1 – Little Garden-702479.mp4.part):
Output stream #0:0 (video): 6 packets muxed (21012 bytes);
Output stream #0:1 (audio): 4 packets muxed (1681 bytes);
Total: 10 packets (22693 bytes) muxed
[AVIOContext @ 0x2a19980] Statistics: 2 seeks, 4 writeouts
[AVIOContext @ 0x29f8e80] Statistics: 98640 bytes read, 0 seeks
[AVIOContext @ 0x271d700] Statistics: 106038 bytes read, 0 seeks
[ffmpeg] Downloaded 24145 bytes
[download] 100% of 23.58KiB in 00:07
$
@dirkf No
$ sudo pip install youtube-dl
Requirement already satisfied: youtube-dl in /usr/local/lib/python3.8/dist-packages (2021.12.17)
$ youtube-dl --ignore-config -f worst -v --test 'https://www.crunchyroll.com/es-es/hundred/episode-1-little-garden-702479' --user-agent 'Mozilla/5.0'
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['--ignore-config', '-f', 'worst', '-v', '--test', 'https://www.crunchyroll.com/es-es/hundred/episode-1-little-garden-702479', '--user-agent', 'Mozilla/5.0']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
[debug] youtube-dl version 2021.12.17
[debug] Python version 3.8.10 (CPython) - Linux-5.4.0-117-generic-x86_64-with-glibc2.29
[debug] exe versions: ffmpeg 4.2.7, ffprobe 4.2.7
[debug] Proxy map: {}
[crunchyroll] 702479: Downloading webpage
ERROR: Unable to download webpage: HTTP Error 403: Forbidden (caused by <HTTPError 403: 'Forbidden'>); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
File "/home/allen/.local/lib/python3.8/site-packages/youtube_dl/extractor/common.py", line 634, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/home/allen/.local/lib/python3.8/site-packages/youtube_dl/YoutubeDL.py", line 2288, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 563, in error
result = self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 755, in http_error_302
return self.parent.open(new, timeout=req.timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 569, in error
return self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
$
Probably no help but I succeed with Python 3.5 and 3.9 as well as 2.7.
Maybe your internet access point is being targeted by Cloudflare for some reason: VPN, region, they hate your ISP, they hate you in particular?
Duplicate of #28398 (but note https://github.com/ytdl-org/youtube-dl/issues/30926#issuecomment-1121491193), or stale.
Checklist
Verbose log
Description
I'm not sure whether this could be a bug.
I think this could be related to Cloudflare or reCaptcha for some reason.
Yesterday I used youtube-dl to download, without any problems, an entire non-premium, non-login-required anime season from the Crunchyroll website (this anime: https://www.crunchyroll.com/es-es/hundred). After that, youtube-dl stopped working when I tried to download more videos, giving me HTTP error 403 whenever I tried to download any video from Crunchyroll...
So I did some research and investigated everything related to cookie usage.
I can access the Crunchyroll website with no problem in my local web browser, Firefox Developer Edition. But yesterday, after I finished downloading all 12 episodes with youtube-dl and it stopped working with error 403, I noticed that the Crunchyroll website asked me to solve a reCaptcha in my local Firefox Developer Edition browser, though it happened only once.
If I delete all cookies from my Firefox Developer Edition browser and access the Crunchyroll website again, it asks me to solve a reCaptcha, again only once.
I also noticed that if I use the CefSharp web browser in a .NET Windows Forms application, the first time I try to access the Crunchyroll website it always asks me to solve a reCaptcha. This happens once every time I run a new instance of the application; after I solve the reCaptcha it does not ask again while that instance is running. This never happened before I downloaded the 12 episodes with youtube-dl.
In my Firefox Developer Edition browser I used the cookies.txt extension to generate a cookies file from the Crunchyroll website. But when I pass this cookies.txt file to youtube-dl it does not solve the problem; I still get error 403.
I tried other parameters, like removing the cache dir, --no-check-certificate, setting the same user agent as my Firefox browser, and username + password.
What can I do to avoid this 403 error and finally download videos from Crunchyroll again?
Cookies.txt file content