ytdl-org / youtube-dl

Command-line program to download videos from YouTube.com and other video sites
http://ytdl-org.github.io/youtube-dl/
The Unlicense

[3sat] New URL #21185

Closed w4grfw closed 3 years ago

w4grfw commented 5 years ago
$ youtube-dl --ignore-config --verbose https://www.3sat.de/wissen/nano/nano-21-mai-2019-102.html
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: [u'--ignore-config', u'--verbose', u'https://www.3sat.de/wissen/nano/nano-21-mai-2019-102.html']                                                                                                                   
[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.05.20
[debug] Python version 2.7.16 (CPython) - Linux-5.0.0-15-generic-x86_64-with-Ubuntu-19.04-disco
[debug] exe versions: ffmpeg 4.1.3, ffprobe 4.1.3, phantomjs 2.1.1, rtmpdump 2.4
[debug] Proxy map: {}
[generic] nano-21-mai-2019-102: Requesting header
WARNING: Falling back on generic information extractor.
[generic] nano-21-mai-2019-102: Downloading webpage
[generic] nano-21-mai-2019-102: Extracting information
ERROR: Unsupported URL: https://www.3sat.de/wissen/nano/nano-21-mai-2019-102.html
Traceback (most recent call last):
  File "/home/marc/bin/youtube-dl/youtube_dl/extractor/generic.py", line 2340, in _real_extract
    doc = compat_etree_fromstring(webpage.encode('utf-8'))
  File "/home/marc/bin/youtube-dl/youtube_dl/compat.py", line 2551, in compat_etree_fromstring
    doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory)))
  File "/home/marc/bin/youtube-dl/youtube_dl/compat.py", line 2540, in _XML
    parser.feed(text)
  File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1659, in feed
    self._raiseerror(v)
  File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1523, in _raiseerror
    raise err
ParseError: not well-formed (invalid token): line 134, column 841
Traceback (most recent call last):
  File "/home/marc/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 796, in extract_info
    ie_result = ie.extract(url)
  File "/home/marc/bin/youtube-dl/youtube_dl/extractor/common.py", line 529, in extract
    ie_result = self._real_extract(url)
  File "/home/marc/bin/youtube-dl/youtube_dl/extractor/generic.py", line 3329, in _real_extract
    raise UnsupportedError(url)
UnsupportedError: Unsupported URL: https://www.3sat.de/wissen/nano/nano-21-mai-2019-102.html
andrewglaeser commented 5 years ago

andrew@a68n:/mnt/nasd/VIDEO$ youtube-dl https://www.3sat.de/kultur/theater-und-tanz/tt-persona-100.html
[generic] tt-persona-100: Requesting header
WARNING: Falling back on generic information extractor.
[generic] tt-persona-100: Downloading webpage
[generic] tt-persona-100: Extracting information
ERROR: Unsupported URL: https://www.3sat.de/kultur/theater-und-tanz/tt-persona-100.html
andrew@a68n:/mnt/nasd/VIDEO$ youtube-dl -v https://www.3sat.de/kultur/theater-und-tanz/tt-persona-100.html
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['-v', 'https://www.3sat.de/kultur/theater-und-tanz/tt-persona-100.html']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dl version 2019.01.17
[debug] Python version 3.5.3 (CPython) - Linux-4.19.0-4-amd64-x86_64-with-debian-9.9
[debug] exe versions: ffmpeg 3.3.9, ffprobe 3.3.9, phantomjs 2.1.1, rtmpdump 2.4
[debug] Proxy map: {}
[generic] tt-persona-100: Requesting header
WARNING: Falling back on generic information extractor.
[generic] tt-persona-100: Downloading webpage
[generic] tt-persona-100: Extracting information
ERROR: Unsupported URL: https://www.3sat.de/kultur/theater-und-tanz/tt-persona-100.html
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/youtube_dl/YoutubeDL.py", line 793, in extract_info
    ie_result = ie.extract(url)
  File "/usr/lib/python3/dist-packages/youtube_dl/extractor/common.py", line 508, in extract
    ie_result = self._real_extract(url)
  File "/usr/lib/python3/dist-packages/youtube_dl/extractor/generic.py", line 3320, in _real_extract
    raise UnsupportedError(url)
youtube_dl.utils.UnsupportedError: Unsupported URL: https://www.3sat.de/kultur/theater-und-tanz/tt-persona-100.html

andrew@a68n:/mnt/nasd/VIDEO$

websurfer83 commented 5 years ago

I have the same problem. Is there a solution for the new 3sat URL?

w4grfw commented 5 years ago

A workaround on Chrome (should be similar on Firefox):

andrewglaeser commented 5 years ago

OK, I did not try it yet, but it might also work with the Firefox Web Inspector: you get the URL there from the Network tab upon starting to play something. Although figuring this out manually is sub-optimal.

barsnick commented 5 years ago

I noticed this issue two days ago, and fixed it for myself within minutes:

Actually, 3sat is now using the same technology as ZDF. The ZDF extractor successfully works for all the (few) URLs I have tried. The only, very minimal, change it requires is an update to the URL scheme:

@@ -39,7 +39,7 @@ class ZDFBaseIE(InfoExtractor):

 class ZDFIE(ZDFBaseIE):
-    _VALID_URL = r'https?://www\.zdf\.de/(?:[^/]+/)*(?P<id>[^/?]+)\.html'
+    _VALID_URL = r'https?://www\.(zdf|3sat)\.de/(?:[^/]+/)*(?P<id>[^/?]+)\.html'
     _QUALITIES = ('auto', 'low', 'med', 'high', 'veryhigh')

     _TESTS = [{

This will catch the new 3sat URLs without requiring a separate extractor, making youtube_dl/extractor/dreisat.py and the DreiSat extractor obsolete. (The ZDF extractor could be renamed to ZDF3Sat.)

Everything seems the same as ZDF, like the vast amount of provided formats. The ZDF infrastructure even appears in the web pages' source.
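For anyone who wants to sanity-check the amended pattern before patching their local copy, here is a minimal standalone test using plain `re` (outside of youtube-dl; the URLs are the ones reported in this issue):

```python
import re

# The amended _VALID_URL pattern from the diff above.
VALID_URL = r'https?://www\.(zdf|3sat)\.de/(?:[^/]+/)*(?P<id>[^/?]+)\.html'

for url in (
    'https://www.3sat.de/wissen/nano/nano-21-mai-2019-102.html',
    'https://www.3sat.de/kultur/theater-und-tanz/tt-persona-100.html',
):
    m = re.match(VALID_URL, url)
    # (?P<id>...) captures the page slug that youtube-dl uses as video id
    print(m.group('id') if m else 'no match')
```

Both URLs match, with the trailing slug (e.g. `nano-21-mai-2019-102`) captured as the `id` group, which is exactly what the ZDF extractor expects.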

FTFY ;-)

websurfer83 commented 5 years ago

Hello all: the above solutions are correct, but you can also transform the URL so that it is possible to download the video as an MP4 file with wget:

  1. For example, if I want to download this video: https://www.3sat.de/gesellschaft/makro/mythos-fachkraeftemangel-ganze-sendung-100.html
  2. The m3u8 file looks like this: https://zdfvodnone-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/19/06/190607_fachkraeftemangel_0_makro/1/190607_fachkraeftemangel_0_makro.smil/index_3296000_av.m3u8
  3. Now you can take it and transform it into the MP4 file URL: http://nrodl.zdf.de/none/3sat/19/06/190607_fachkraeftemangel_0_makro/1/190607_fachkraeftemangel_0_makro_3328k_p36v13.mp4 In some cases it doesn't work with "none", so you have to change it to "dach": http://nrodl.zdf.de/dach/3sat/19/06/190607_fachkraeftemangel_0_makro/1/190607_fachkraeftemangel_0_makro_3328k_p36v13.mp4

That's all.
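The transformation above can be sketched as a small script. Note this is a rough heuristic based only on this one example: the region segment (`none`/`dach`) and the `_3328k_p36v13` quality suffix are observed values, not documented by the site, and may differ for other videos.

```python
import re

def m3u8_to_mp4(m3u8_url, quality_suffix='_3328k_p36v13'):
    """Rough sketch of the manual m3u8 -> MP4 URL transformation
    described above. The quality suffix is taken from the example
    and is not guaranteed to exist for every video."""
    m = re.match(
        r'https?://zdfvod(?P<region>[a-z]+)-vh\.akamaihd\.net'
        r'/i/meta-files/3sat/smil/m3u8/300/(?P<path>.+)/'
        r'(?P<name>[^/]+)\.smil/',
        m3u8_url)
    if not m:
        raise ValueError('unexpected m3u8 URL layout')
    region, path, name = m.group('region', 'path', 'name')
    # Rebuild the direct-download URL on nrodl.zdf.de
    return 'http://nrodl.zdf.de/%s/3sat/%s/%s%s.mp4' % (
        region, path, name, quality_suffix)

m3u8 = ('https://zdfvodnone-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/'
        '19/06/190607_fachkraeftemangel_0_makro/1/'
        '190607_fachkraeftemangel_0_makro.smil/index_3296000_av.m3u8')
print(m3u8_to_mp4(m3u8))  # the nrodl.zdf.de MP4 URL from step 3
```

If the resulting URL 404s with `none`, retry with the `dach` region as described above (e.g. by passing the URL through `.replace('/none/', '/dach/')`).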

websurfer83 commented 5 years ago

Sorry, but the problem still exists in the newest version, 2019.06.21:

youtube-dl "https://www.3sat.de/gesellschaft/politik-und-gesellschaft/frauenstreik-102.html" -F
[generic] frauenstreik-102: Requesting header
WARNING: Falling back on generic information extractor.
[generic] frauenstreik-102: Downloading webpage
[generic] frauenstreik-102: Extracting information
ERROR: Unsupported URL: https://www.3sat.de/gesellschaft/politik-und-gesellschaft/frauenstreik-102.html

matthiasroos commented 5 years ago

Yes, definitely. That's because, sadly, my pull request has not yet been merged into master and is therefore not included in the newest version.

tehgarra commented 5 years ago

Yes, definitely. It's because, sadly, my pull-request is not yet merged into master and therefore not included in the newest version.

Is there a way to use it with the newest version? There's a fix for another issue that hasn't been added to the newest version yet either.

matthiasroos commented 5 years ago

is there a way to use it with the newest version?

No, not if it has not been merged. The branches do not exist in the original project. The only way would be to clone each of the forks containing the desired fixes and then check out the specific branch. Then you have one version per fix...

andrewglaeser commented 5 years ago

Yeah, guys, I guess time will tell, interesting discussion.

mortbauer commented 5 years ago

I noticed this issue two days ago, and fixed it for myself within minutes:

Actually, 3sat is now using the same technology as ZDF. The ZDF extractor successfully works for all the (few) URLs I have tried. The only, very minimal, change it requires is an update to the URL scheme:

@@ -39,7 +39,7 @@ class ZDFBaseIE(InfoExtractor):

 class ZDFIE(ZDFBaseIE):
-    _VALID_URL = r'https?://www\.zdf\.de/(?:[^/]+/)*(?P<id>[^/?]+)\.html'
+    _VALID_URL = r'https?://www\.(zdf|3sat)\.de/(?:[^/]+/)*(?P<id>[^/?]+)\.html'
     _QUALITIES = ('auto', 'low', 'med', 'high', 'veryhigh')

     _TESTS = [{

This will catch the new 3sat URLs without requiring a separate extractor, making youtube_dl/extractor/dreisat.py and the DreiSat extractor obsolete. (The ZDF extractor could be renamed to ZDF3Sat.)

Everything seems the same as ZDF, like the vast amount of provided formats. The ZDF infrastructure even appears in the web pages' source.

FTFY ;-)

Worked fine for me as well!

tehgarra commented 5 years ago

is there a way to use it with the newest version?

No, not if it has not been merged. The branches do not exist in the original project. The only way would be to clone each of the forks containing the desired fixes and then check out the specific branch. Then you have one version per fix...

I understand how to clone, but checking out and modifying doesn't make sense to me. I thought it would be as simple as changing some code, but that appears not to be the case in this situation.

barsnick commented 5 years ago

I understand how to clone, but checking out and modifying doesn't make sense to me. I thought it would be as simple as changing some code, but that appears not to be the case in this situation.

It is quite that simple. Edit the code in your local copy, and run it with

python -m youtube_dl

as described in the developer instructions. This assumes you do not need to build a single executable, and have a Python interpreter available.

Otherwise, getting various changes into one youtube-dl:

git clone https://github.com/ytdl-org/youtube-dl
cd youtube-dl
git checkout -b myprivatebranch
git remote add matthiasroos https://github.com/matthiasroos/youtube-dl/
git fetch matthiasroos
git log matthiasroos/3sat
git cherry-pick 48dde7589175d688ce7661459ca32c535d6500e5 # just an example, not saying this commit is completely correct!
# repeat above for other people's contributed PRs
# additionally edit your own changes, and git add and git commit them, one by one
python -m youtube_dl [...]
tehgarra commented 5 years ago

I see. I was just trying to change the code and run the project in PyCharm and got confused. I'll have to give this a try. Thanks!

LinuxOpa commented 5 years ago

Kindergarten or what? It must be really hard to just build this in...

class ZDFIE(ZDFBaseIE):

andrewglaeser commented 5 years ago

andrew@a68n:~$ sudo curl -L https://yt-dl.org/downloads/latest/youtube-dl -o /usr/local/bin/youtube-dl
[sudo] password for andrew:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1724k  100 1724k    0     0   720k      0  0:00:02  0:00:02 --:--:-- 4355k
andrew@a68n:~$ sudo chmod a+rx /usr/local/bin/youtube-dl
andrew@a68n:~$ suo apt-get remove youtube-dl
bash: suo: command not found
andrew@a68n:~$ sudo apt-get remove youtube-dl
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
  python3-pyxattr rtmpdump
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
  youtube-dl
0 upgraded, 0 newly installed, 1 to remove and 72 not upgraded.
After this operation, 5,613 kB disk space will be freed.
Do you want to continue? [Y/n]
(Reading database ... 462475 files and directories currently installed.)
Removing youtube-dl (2019.01.17-1.1~bpo9+1) ...
Processing triggers for man-db (2.7.6.1-2) ...
andrew@a68n:~$ which youtube-dl
/usr/local/bin/youtube-dl
andrew@a68n:~$ man youtube-dl
No manual entry for youtube-dl
See 'man 7 undocumented' for help when manual pages are not available.
andrew@a68n:~$ youtube-dl -h
Usage: youtube-dl [OPTIONS] URL [URL...]
[The complete `youtube-dl -h` option listing followed here: General, Network, Geo Restriction, Video Selection, Download, Filesystem, Thumbnail, Verbosity / Simulation, Workarounds, Video Format, Subtitle, Authentication, Adobe Pass and Post-processing options.]
--exec CMD Execute a command on the file after downloading, similar to find's -exec syntax. Example: --exec 'adb push {} /sdcard/Music/ && rm {}' --convert-subs FORMAT Convert the subtitles to other format (currently supported: srt|ass|vtt|lrc) andrew@a68n:~$ cd /mnt/nasd/VIDEO/ andrew@a68n:/mnt/nasd/VIDEO$ youtube-dl <a href="https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html">https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html</a> [generic] abenteuer-polarkreis-100: Requesting header WARNING: Falling back on generic information extractor. [generic] abenteuer-polarkreis-100: Downloading webpage [generic] abenteuer-polarkreis-100: Extracting information ERROR: Unsupported URL: <a href="https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html">https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html</a> andrew@a68n:/mnt/nasd/VIDEO$ youtube-dl <a href="https://www.3sat.de/film/spielfilm/das-finstere-tal-100.html">https://www.3sat.de/film/spielfilm/das-finstere-tal-100.html</a> [generic] das-finstere-tal-100: Requesting header WARNING: Falling back on generic information extractor. [generic] das-finstere-tal-100: Downloading webpage [generic] das-finstere-tal-100: Extracting information ERROR: Unsupported URL: <a href="https://www.3sat.de/film/spielfilm/das-finstere-tal-100.html">https://www.3sat.de/film/spielfilm/das-finstere-tal-100.html</a> andrew@a68n:/mnt/nasd/VIDEO$ youtube-dl <a href="https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html">https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html</a> [generic] abenteuer-polarkreis-100: Requesting header WARNING: Falling back on generic information extractor. 
[generic] abenteuer-polarkreis-100: Downloading webpage [generic] abenteuer-polarkreis-100: Extracting information ERROR: Unsupported URL: <a href="https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html">https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html</a> andrew@a68n:/mnt/nasd/VIDEO$ youtube-dl -v [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: [u'-v'] [debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2019.08.13 [debug] Python version 2.7.13 (CPython) - Linux-4.19.0-0.bpo.5-amd64-x86_64-with-debian-9.9 [debug] exe versions: ffmpeg 3.3.9, ffprobe 3.3.9, phantomjs 2.1.1, rtmpdump 2.4 [debug] Proxy map: {} Usage: youtube-dl [OPTIONS] URL [URL...]</p> <p>youtube-dl: error: You must provide at least one URL. Type youtube-dl --help to see a list of all options. andrew@a68n:/mnt/nasd/VIDEO$ youtube-dl -v <a href="https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html">https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html</a> [debug] System config: [] [debug] User config: [] [debug] Custom config: [] [debug] Command-line args: [u'-v', u'<a href="https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html">https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html</a>'] [debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8 [debug] youtube-dl version 2019.08.13 [debug] Python version 2.7.13 (CPython) - Linux-4.19.0-0.bpo.5-amd64-x86_64-with-debian-9.9 [debug] exe versions: ffmpeg 3.3.9, ffprobe 3.3.9, phantomjs 2.1.1, rtmpdump 2.4 [debug] Proxy map: {} [generic] abenteuer-polarkreis-100: Requesting header WARNING: Falling back on generic information extractor. 
[generic] abenteuer-polarkreis-100: Downloading webpage [generic] abenteuer-polarkreis-100: Extracting information ERROR: Unsupported URL: <a href="https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html">https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html</a> Traceback (most recent call last): File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 2357, in _real_extract doc = compat_etree_fromstring(webpage.encode('utf-8')) File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2551, in compat_etree_fromstring doc = _XML(text, parser=etree.XMLParser(target=_TreeBuilder(element_factory=_element_factory))) File "/usr/local/bin/youtube-dl/youtube_dl/compat.py", line 2540, in _XML parser.feed(text) File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1653, in feed self._raiseerror(v) File "/usr/lib/python2.7/xml/etree/ElementTree.py", line 1517, in _raiseerror raise err ParseError: not well-formed (invalid token): line 134, column 831 Traceback (most recent call last): File "/usr/local/bin/youtube-dl/youtube_dl/YoutubeDL.py", line 796, in extract_info ie_result = ie.extract(url) File "/usr/local/bin/youtube-dl/youtube_dl/extractor/common.py", line 530, in extract ie_result = self._real_extract(url) File "/usr/local/bin/youtube-dl/youtube_dl/extractor/generic.py", line 3333, in _real_extract raise UnsupportedError(url) UnsupportedError: Unsupported URL: <a href="https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html">https://www.3sat.de/wissen/terra-x/abenteuer-polarkreis-100.html</a></p> <p>andrew@a68n:/mnt/nasd/VIDEO$ </p> <p>Fine, if you have a solution already, but unfortunately I cannot confirm, it is workable with the latest youtube-dl version. Lately I found, that the situation is now the same with ARTE.tv: Search for 'master.m3u' with firefox web-inspector on the network-tab, then download manually, i.e. 
yt-dl will get you the .mp4 file without the segmentation.</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/FliegendeWurst"><img src="https://avatars.githubusercontent.com/u/12560461?v=4" />FliegendeWurst</a> commented <strong> 5 years ago</strong> </div> <div class="markdown-body"> <blockquote> <p>Kindergarten oder was? Muss ja schwer sein, das eben einzubauen... [Kindergarten or what? Can't be that hard to just add this...]</p> <p>class ZDFIE(ZDFBaseIE):</p> <pre><code>* _VALID_URL = r'https?://www.zdf.de/(?:[^/]+/)*(?P&lt;id&gt;[^/?]+).html'
* _VALID_URL = r'https?://www.(zdf|3sat).de/(?:[^/]+/)*(?P&lt;id&gt;[^/?]+).html'</code></pre> </blockquote> <p>@LinuxOpa: wenn's so einfach ist, warum machst du dann nicht einen PR? Die Maintainer sind mit reviews schon beschäftigt genug. [English translation: if it's that easy, why don't you open a PR yourself? The maintainers are busy enough with reviews as it is.]</p> <p>[I opened a PR for you, since you apparently didn't manage to do it yourself.]</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/andrewglaeser"><img src="https://avatars.githubusercontent.com/u/34741555?v=4" />andrewglaeser</a> commented <strong> 5 years ago</strong> </div> <div class="markdown-body"> <p>Here is a practical example for you guys (be positive, last chance!): youtube-dl -o 190728_polarfieber_dokreise.mp4 <a href="https://zdfvodnone-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/19/07/190728_polarfieber_dokreise/2/190728_polarfieber_dokreise.smil/master.m3u8">https://zdfvodnone-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/19/07/190728_polarfieber_dokreise/2/190728_polarfieber_dokreise.smil/master.m3u8</a> youtube-dl -o 151022_geheime_kontinent1_online.mp4 <a 
href="https://zdfvoddach-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/15/10/151022_geheime_kontinent1_online/21/151022_geheime_kontinent1_online.smil/master.m3u8">https://zdfvoddach-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/15/10/151022_geheime_kontinent1_online/21/151022_geheime_kontinent1_online.smil/master.m3u8</a> youtube-dl -o 151022_geheime_kontinent2_online.mp4 <a href="https://zdfvoddach-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/15/10/151022_geheime_kontinent2_online/21/151022_geheime_kontinent2_online.smil/master.m3u8">https://zdfvoddach-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/15/10/151022_geheime_kontinent2_online/21/151022_geheime_kontinent2_online.smil/master.m3u8</a> youtube-dl -o 150706_universum_ozeane1_online.mp4 <a href="https://zdfvoddach-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/15/07/150706_universum_ozeane1_online/9/150706_universum_ozeane1_online.smil/master.m3u8">https://zdfvoddach-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/15/07/150706_universum_ozeane1_online/9/150706_universum_ozeane1_online.smil/master.m3u8</a> youtube-dl -o 150706_universum_ozeane2_online.mp4 <a href="https://zdfvoddach-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/15/07/150706_universum_ozeane2_online/9/150706_universum_ozeane2_online.smil/master.m3u8">https://zdfvoddach-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/15/07/150706_universum_ozeane2_online/9/150706_universum_ozeane2_online.smil/master.m3u8</a> youtube-dl -o 150706_universum_ozeane3_online.mp4 <a href="https://zdfvoddach-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/15/07/150706_universum_ozeane3_online/9/150706_universum_ozeane3_online.smil/master.m3u8">https://zdfvoddach-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/15/07/150706_universum_ozeane3_online/9/150706_universum_ozeane3_online.smil/master.m3u8</a> youtube-dl -o 180902_abenteuer_suedsee_online.mp4 <a 
href="https://zdfvoddach-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/18/09/180902_abenteuer_suedsee_online/4/180902_abenteuer_suedsee_online.smil/master.m3u8">https://zdfvoddach-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/18/09/180902_abenteuer_suedsee_online/4/180902_abenteuer_suedsee_online.smil/master.m3u8</a></p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/Wikinaut"><img src="https://avatars.githubusercontent.com/u/1151915?v=4" />Wikinaut</a> commented <strong> 5 years ago</strong> </div> <div class="markdown-body"> <p>Not working on <a href="https://www.3sat.de/migration/3sat/robert-frank-don-t-blink-100.html">https://www.3sat.de/migration/3sat/robert-frank-don-t-blink-100.html</a> .</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/barsnick"><img src="https://avatars.githubusercontent.com/u/7283222?v=4" />barsnick</a> commented <strong> 5 years ago</strong> </div> <div class="markdown-body"> <blockquote> <p>Not working on <a href="https://www.3sat.de/migration/3sat/robert-frank-don-t-blink-100.html">https://www.3sat.de/migration/3sat/robert-frank-don-t-blink-100.html</a> .</p> </blockquote> <p>The simple URL regex based solution works perfectly for this.</p> <p>Now, why has it not been merged? I need to check up on what has changed recently, but last I checked:</p> <ul> <li>The Phoenix extractor depends on DreiSat, and is therefore also broken, but is not fixed by this change.</li> <li>No test cases where added / modified in the PR. 
(Updating tests is a bit of a PITA; I personally think it needs better instructions in the contributions document.)</li> <li>It's unclear to me whether a rename of the ZDF extractor to ZDFDreiSat is acceptable or required, and what to do about the Phoenix extractor.</li> <li>Possibly other stuff.</li> </ul> <p>I can give another stab at a PR, as others have done, but I don't know what to do about some of these points. And why haven't I? Because my local fix works just fine! :-P</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/Wikinaut"><img src="https://avatars.githubusercontent.com/u/1151915?v=4" />Wikinaut</a> commented <strong> 5 years ago</strong> </div> <div class="markdown-body"> <p>I confirm that the URL <a href="https://www.3sat.de/migration/3sat/robert-frank-don-t-blink-100.html">https://www.3sat.de/migration/3sat/robert-frank-don-t-blink-100.html</a> does work with the <a href="https://github.com/FliegendeWurst/youtube-dl">https://github.com/FliegendeWurst/youtube-dl</a> branch 3sat-zdf-merger-bugfix-feature, see <a href="https://github.com/ytdl-org/youtube-dl/pull/22191">https://github.com/ytdl-org/youtube-dl/pull/22191</a>.</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/barsnick"><img src="https://avatars.githubusercontent.com/u/7283222?v=4" />barsnick</a> commented <strong> 5 years ago</strong> </div> <div class="markdown-body"> <blockquote> <p>Now, why has it not been merged? I need to check up on what has changed recently, but last I checked:</p> <ul> <li>The Phoenix extractor depends on DreiSat, and is therefore also broken, but is not fixed by this change.</li> <li>No test cases were added / modified in the PR. 
(Updating tests is a bit of a PITA; I personally think it needs better instructions in the contributions document.)</li> <li>It's unclear to me whether a rename of the ZDF extractor to ZDFDreiSat is acceptable or required, and what to do about the Phoenix extractor.</li> </ul> </blockquote> <p>After checking up on existing PRs, I refine my opinion on this. Some of the linked PRs are just fine:</p> <ul> <li>They don't merge "3sat" into "zdf", but rather create a new DreiSatIE depending on ZDFIE (as requested by the project maintainers), with its own URL regex and its own set of test cases.</li> <li>They add test cases for the new site layout. (I assume these tests pass.)</li> <li>They leave PhoenixIE untouched, so this extractor will probably fail at runtime, as it currently does anyway.</li> </ul> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/andrewglaeser"><img src="https://avatars.githubusercontent.com/u/34741555?v=4" />andrewglaeser</a> commented <strong> 5 years ago</strong> </div> <div class="markdown-body"> <p>The current Debian release, buster, features a fully working version of mediathekview:</p> <pre><code>andrew@a68n:~$ aptitude show mediathekview
Package: mediathekview
Version: 13.2.1-3
New: yes
State: installed
Automatically installed: no
Priority: optional
Section: video
Maintainer: Markus Koschany &lt;apo@debian.org&gt;
Architecture: all
Uncompressed Size: 1,861 k
Depends: default-jre (>= 2:1.9) | java9-runtime, java-wrappers (>= 0.3), libjide-oss-java (>= 3.7.4),
         libopenjfx-java (>= 11), libcommons-compress-java, libcommons-configuration2-java,
         libcommons-dbcp2-java, libcommons-lang3-java, libcommons-pool2-java, libcontrolsfx-java,
         libguava-java, libh2-java, libjackson2-core-java, libjchart2d-java,
         libjiconfont-font-awesome-java, libjiconfont-java, libjiconfont-swing-java,
         liblog4j2-java, libmbassador-java, libmiglayout-java, libokhttp-java, libswingx-java,
         libxz-java
Recommends: flvstreamer, vlc | mpv | mplayer
Suggests: ffmpeg
Description: view streams from German public television stations
 This application searches for various media center video content of the German television
 program (ARD, ZDF, Arte, 3Sat, MDR, ORF, SRF and many more). You can watch, download and even
 subscribe to an offered show.
Homepage: https://mediathekview.de/
Tags: culture::german, implemented-in::java, interface::graphical, interface::x11, role::program,
      uitoolkit::xlib, use::downloading, use::entertaining, use::playing, works-with::audio,
      x11::application</code></pre> <p>So most people would probably rather use this than youtube-dl, but of course it can still be interesting to visit the websites of TV channels directly in a web browser to find content of interest.</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/Sevy007"><img src="https://avatars.githubusercontent.com/u/13178236?v=4" />Sevy007</a> commented <strong> 5 years ago</strong> </div> <div class="markdown-body"> <p>Those who are interested can simply open <a href="https://mediathekviewweb.de">https://mediathekviewweb.de</a> in a browser on any operating system and use the service there. But I guess this discussion is off-topic here; I consider it just a workaround while youtube-dl is not working.</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/grexe"><img src="https://avatars.githubusercontent.com/u/404952?v=4" />grexe</a> commented <strong> 4 years ago</strong> </div> <div class="markdown-body"> <p>Sad to see this issue still open as of 20/02/2020 (what a date :), using youtube-dl 2020.01.24. What can be done to fix this? 
Is the latest PR not acceptable?</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/larsschwegmann"><img src="https://avatars.githubusercontent.com/u/1457934?v=4" />larsschwegmann</a> commented <strong> 4 years ago</strong> </div> <div class="markdown-body"> <p>Any updates on this?</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/FliegendeWurst"><img src="https://avatars.githubusercontent.com/u/12560461?v=4" />FliegendeWurst</a> commented <strong> 4 years ago</strong> </div> <div class="markdown-body"> <p>@larsschwegmann my fix (PR #22191) still works.</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/LinuxOpa"><img src="https://avatars.githubusercontent.com/u/41851582?v=4" />LinuxOpa</a> commented <strong> 4 years ago</strong> </div> <div class="markdown-body"> <p>It works, but it is still not built into youtube-dl, so I use two versions: the always-current one and a 'youtube-dl-3sat'. This is strange. Why is merging it refused? Can someone explain?</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/annomatik"><img src="https://avatars.githubusercontent.com/u/28311452?v=4" />annomatik</a> commented <strong> 4 years ago</strong> </div> <div class="markdown-body"> <p>Hey, happy birthday, issue! :-) We're almost at the one-year mark, and it still doesn't work.</p> <p>Sure, I can find the master.m3u8 manually in Chrome with the dev console, but that's beside the point, right? Please fix it. Or maybe there could be a generic override, like "use the ZDF extractor even though the URL is a different one". 
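</p> <p>[Editor's note: the widened-regex idea quoted earlier in this thread can be sanity-checked with nothing but Python's re module. The sketch below uses an unnamed capture group standing in for the extractor's (?P&lt;id&gt;...) group; it is a hypothetical illustration of the proposed _VALID_URL change, not the merged fix.]</p> <pre><code>import re

# Hypothetical widened pattern covering both zdf.de and 3sat.de,
# modelled on the _VALID_URL change proposed in this thread.
VALID_URL = r'https?://www\.(?:zdf|3sat)\.de/(?:[^/]+/)*([^/?]+)\.html'

for url in (
    'https://www.3sat.de/wissen/nano/nano-21-mai-2019-102.html',
    'https://www.zdf.de/dokumentation/some-show/some-video-100.html',
):
    m = re.match(VALID_URL, url)
    # the last path component (minus '.html') becomes the video id
    print(m.group(1) if m else 'no match')</code></pre> <p>Both URLs match and yield their trailing path component as the id, which is all the "treat 3sat like ZDF" approach needs.</p> <p>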
I don't know.</p> <p>Thanks!</p> <pre><code>$ youtube-dl --version
2020.05.08
$ youtube-dl https://www.3sat.de/wissen/wissenschaftsdoku/schatzkammer-regenwald-100.html
[generic] schatzkammer-regenwald-100: Requesting header
WARNING: Falling back on generic information extractor.
[generic] schatzkammer-regenwald-100: Downloading webpage
[generic] schatzkammer-regenwald-100: Extracting information
WARNING: [generic] schatzkammer-regenwald-100: Failed to parse JSON Expecting ',' delimiter: line 23 column 26 (char 580)
ERROR: Unsupported URL: https://www.3sat.de/wissen/wissenschaftsdoku/schatzkammer-regenwald-100.html</code></pre> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/annomatik"><img src="https://avatars.githubusercontent.com/u/28311452?v=4" />annomatik</a> commented <strong> 4 years ago</strong> </div> <div class="markdown-body"> <p><em>DISCLAIMER:</em> this is an ugly hack, but it works for me. Feel free to improve it :-)</p> <p>OK, not sure if it helps anyone, but I made a hacky workaround for downloading 3sat content with youtube-dl. 
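</p> <p>[Editor's note: roughly the same workaround as a small Python helper. This is a sketch, not part of youtube-dl: the CDN host, the fixed "300" and "/1/" path components, and the "_sendung_wido" slug are copied from the examples in this thread and will differ for other shows.]</p> <pre><code>def master_m3u8_url(upload_date, slug='sendung_wido'):
    """Build the master.m3u8 URL from the page's "uploadDate" (YYYY-MM-DD)."""
    year, month, day = upload_date.split('-')[:3]
    stem = year[2:] + month + day + '_' + slug   # e.g. 190926_sendung_wido
    return ('https://zdfvodnone-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/'
            '300/%s/%s/%s/1/%s.smil/master.m3u8' % (year[2:], month, stem, stem))

# Reproduces the example URL quoted in this thread:
print(master_m3u8_url('2019-09-26'))</code></pre> <p>Feeding the result to youtube-dl -o then works the same way as the shell function described next.</p> <p>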
The URL of the master.m3u8 follows a clear system:</p> <p><a href="https://zdfvodnone-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/19/09/190926_sendung_wido/1/190926_sendung_wido.smil/master.m3u8">https://zdfvodnone-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/19/09/190926_sendung_wido/1/190926_sendung_wido.smil/master.m3u8</a></p> <p><code>https://zdfvodnone-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/{YY}/{MM}/{YYMMDD}_sendung_wido/1/{YYMMDD}_sendung_wido.smil/master.m3u8</code></p> <p>(and obviously the sendung_wido slug varies per show; 300 might be some sort of bandwidth indicator, "1" might be the "position", but I'm ignoring that for now :-))</p> <p>The upload date fills in the template fields and can be found like this:</p> <pre><code>$ grep uploadDate schatzkammer-regenwald-100.html
    "uploadDate": "2019-09-26T18:15:00.000Z",</code></pre> <p>Now, generating a master.m3u8 URL is quite easy (not tested very much, but works for my purposes):</p> <pre><code>function get_3sat_master() {
    # $1 = link to video landing page, e.g.
    # https://www.3sat.de/wissen/wissenschaftsdoku/schatzkammer-regenwald-100.html
    datum=$(wget -O - $1 2>/dev/null | grep uploadDate | cut -d '"' -f4 | cut -d 'T' -f1)
    year=$(echo $datum | cut -d '-' -f1)
    month=$(echo $datum | cut -d '-' -f2)
    day=$(echo $datum | cut -d '-' -f3)
    year_2=$(echo $year | cut -c 3-)
    echo https://zdfvodnone-vh.akamaihd.net/i/meta-files/3sat/smil/m3u8/300/${year_2}/$month/${year_2}${month}${day}_sendung_wido/1/${year_2}${month}${day}_sendung_wido.smil/master.m3u8
}</code></pre> <p>And hey presto, you can download it now:</p> <p>youtube-dl -o 'Schatzkammer Regenwald.mp4' $(get_3sat_master <a href="https://www.3sat.de/wissen/wissenschaftsdoku/schatzkammer-regenwald-100.html">https://www.3sat.de/wissen/wissenschaftsdoku/schatzkammer-regenwald-100.html</a>)</p> <p>Still hoping for a "proper" fix though.</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/FliegendeWurst"><img src="https://avatars.githubusercontent.com/u/12560461?v=4" />FliegendeWurst</a> commented <strong> 4 years ago</strong> </div> <div class="markdown-body"> <p>@annomatik my patch (#22191) still works. Only @dstftw can tell you when the fix will be merged (if ever).</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/grexe"><img src="https://avatars.githubusercontent.com/u/404952?v=4" />grexe</a> commented <strong> 4 years ago</strong> </div> <div class="markdown-body"> <p>This is still bugging me (in every sense of the word) with every new install of youtube-dl where I haven't manually applied @FliegendeWurst's patch. From what I can see in the PR, there were only minor gripes about naming and a test case; maybe this could be added by @FliegendeWurst together with @dstftw or another maintainer so it can get merged, pretty please?</p> <p>Update 03. 
August 2020: still broken for 3sat, and kudos to @barsnick who mentioned this first.</p> </div> </div> <div class="comment"> <div class="user"> <a rel="noreferrer nofollow" target="_blank" href="https://github.com/FichteFoll"><img src="https://avatars.githubusercontent.com/u/931051?v=4" />FichteFoll</a> commented <strong> 3 years ago</strong> </div> <div class="markdown-body"> <p>Current PR: #27068</p> </div> </div> </body> </html>