ytdl-org / youtube-dl

Command-line program to download videos from YouTube.com and other video sites
http://ytdl-org.github.io/youtube-dl/
The Unlicense

[discovery] "GO" Channels have been discontinued #14954

Closed StevenDTX closed 6 years ago

StevenDTX commented 6 years ago

Please follow the guide below


Make sure you are using the latest version: run youtube-dl --version and ensure your version is 2017.12.10. If it's not, read this FAQ entry and update. Issues with outdated versions will be rejected.

Before submitting an issue make sure you have:

What is the purpose of your issue?


The following sections describe particular types of issues; you can erase any section (the contents between triple ---) not applicable to your issue


If the purpose of this issue is a bug report, site support request, or you are not completely sure, provide the full verbose output as follows:

Add the -v flag to the command line you run youtube-dl with (youtube-dl -v <your command line>), copy the whole output and insert it here. It should look similar to the one below (replace it with your log inserted between triple ```):

E:\>youtube-dl https://www.discovery.com/tv-shows/gold-rush/full-episodes/gold-bars-and-hail-marys --verbose
[debug] System config: []
[debug] User config: []
[debug] Custom config: []
[debug] Command-line args: ['https://www.discovery.com/tv-shows/gold-rush/full-episodes/gold-bars-and-hail-marys', '--verbose']
[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252
[debug] youtube-dl version 2017.12.10
[debug] Python version 3.4.4 - Windows-10-10.0.14393
[debug] exe versions: ffmpeg N-89395-g71421f382f, ffprobe N-72383-g7206b94, rtmpdump 2.4
[debug] Proxy map: {}
[Discovery] gold-bars-and-hail-marys: Downloading JSON metadata
ERROR: gold-bars-and-hail-marys: Failed to parse JSON  (caused by ValueError('Expecting value: line 1 column 1 (char 0)',)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type  youtube-dl -U  to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\extractor\common.py", line 686, in _parse_json
  File "C:\Python\Python34\lib\json\__init__.py", line 318, in loads
  File "C:\Python\Python34\lib\json\decoder.py", line 343, in decode
  File "C:\Python\Python34\lib\json\decoder.py", line 361, in raw_decode
ValueError: Expecting value: line 1 column 1 (char 0)
Traceback (most recent call last):
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\extractor\common.py", line 686, in _parse_json
  File "C:\Python\Python34\lib\json\__init__.py", line 318, in loads
  File "C:\Python\Python34\lib\json\decoder.py", line 343, in decode
  File "C:\Python\Python34\lib\json\decoder.py", line 361, in raw_decode
ValueError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\YoutubeDL.py", line 784, in extract_info
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\extractor\common.py", line 437, in extract
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\extractor\discovery.py", line 67, in _real_extract
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\extractor\common.py", line 680, in _download_json
  File "C:\Users\dst\AppData\Roaming\Build archive\youtube-dl\rg3\tmp9arpqqmf\build\youtube_dl\extractor\common.py", line 690, in _parse_json
youtube_dl.utils.ExtractorError: gold-bars-and-hail-marys: Failed to parse JSON  (caused by ValueError('Expecting value: line 1 column 1 (char 0)',)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type  youtube-dl -U  to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
...
<end of log>

If the purpose of this issue is a site support request, please provide all kinds of example URLs for which support should be included (replace the following example URLs with yours):


Description of your issue, suggested solution and other information

All of the Discovery "GO" channels (discoverygo.com, tlcgo.com, animalplanetgo.com, etc) are being discontinued. They have moved all of the Full Episodes to the non-GO channels (discovery.com, tlc.com, animalplanet.com, etc).

The current [discovery] extractor does not work on these sites.

They also have lowered the quality of the videos on the GO channels to 720p. It appears that the 1080p videos are available on the non-GO channels.

Thanks!

StevenDTX commented 6 years ago

Thanks @remitamine! Free videos are working great.

The *GO channels all redirect to the "regular" sites now.

StevenDTX commented 6 years ago

Would it be possible to use a --ap-mso login instead of cookies for the Discovery sites? I was constantly having issues keeping my cookies up to date.

StevenDTX commented 6 years ago

Can I provide someone with a cookies file to work on getting the non-free episodes?

cookieguru commented 6 years ago

@StevenDTX I have found that the name of the cookie varies, even when logged in. So far I have only seen eosAf and eosAn. Its value is a doubly URL-encoded JSON string. The access token is stored in a JSON key named either access_token or a. Note that multiple permutations can be seen in the same session! I have a working (though not the most efficient) solution in a browser userscript. It iterates over all the browser's cookies and looks for one whose value starts with a %, as it will be the only one containing the aforementioned JSON string. From there, I decode that string and iterate over all its keys to find the longest value, as that is always the string that needs to be sent in the Authorization header.

let cookies = document.cookie.split(';').map(function(x) {
    return x.trim().split(/(=)/);
}).reduce(function(a, b) {
    a[b[0]] = a[b[0]] ? a[b[0]] + ', ' + b.slice(2).join('') :
    b.slice(2).join('');
    return a;
}, {});
let token;
for(let i in cookies) {
    if(cookies[i].substr(0, 1) == '%') {
        let temp = JSON.parse(decodeURIComponent(decodeURIComponent(cookies[i])));
        let longest = 0;
        for(let j in temp) {
            if(temp[j].length > longest) {
                token = temp[j];
                longest = temp[j].length;
            }
        }
        break;
    }
}

From there I fetch the m3u8 link:

fetch('https://api.discovery.com/v1/streaming/video/' + video.id, {
    headers: {
        'authorization': 'Bearer ' + token,
    },
}).then(function(result) {
    return result.json();
}).then(function(json) {
    //json.streamUrl is the episode's master m3u8
});

This method works fine even on the free episodes.

I would say the current method (grabbing an anonymous token on every download) from cb0c2310fbf232e09ae41013be3400034171d6d2 is equally suited to the free videos.

StevenDTX commented 6 years ago

@cookieguru

I apologize, as a lot of what you said is a bit over my head. Is the first section of code the userscript you run? In, I assume, Tampermonkey or something?

cookieguru commented 6 years ago

@StevenDTX Exactly

StevenDTX commented 6 years ago

Thanks a lot @cookieguru !!

I was able to get the script installed and in the Firefox console I get a link that points to https://content-ausc4.uplynk.com/444fe784b93347829dce878e052b952d/i.m3u8. If I expand that, I get the full link with authorization and stuff. I am actually able to download the video, in 1080p!! The free videos are only getting downloaded in 720p.

It's not an automated process, but I only need a few shows a week.

cookieguru commented 6 years ago

@StevenDTX We have the same process ;)

I'm hoping someone more fluent in python can help integrate that methodology in to the existing extractor. AFAIK the extractor as it is will work fine; it just doesn't know how to get to the playlist file containing all the formats.

Allmight3 commented 6 years ago

@cookieguru I understand your first code snippet is a userscript. I put that into a script with @grant none. However, I don't understand what to do with your second code snippet; do you mind expanding?

Pasting that second snippet into the script along with the first snippet causes an execution error. I noticed @StevenDTX mention getting a link in the console, but I see no console output code and pasting the second snippet of code into the console directly yields a similar execution error about video not being defined.

I see there's some sort of working method here but it's just a little out of my grasp of understanding. Hoping you'll be willing to help. I miss being able to grab shows from discovery! I used to use the cookies/directv command but they stopped working a while ago.

cookieguru commented 6 years ago

@Allmight3 The first script iterates over the browser's cookies and extracts the necessary authorization token that is needed to perform a request to Discovery's API to get the link to the video playlist (with multiple formats).

The second script is missing context and wasn't meant to be copy/pastable; rather, it was just an example of how to use the authorization token. Since there's obviously a desire for others to use this before the changes can be worked into youtube-dl, I'll post the full userscript here:

// ==UserScript==
// @name         Science Channel Go/Discovery Go
// @namespace    https://github.com/violentmonkey/violentmonkey
// @version      1.0
// @author       https://github.com/cookieguru
// @match        https://www.discovery.com/*
// @match        https://www.sciencechannel.com/*
// @grant        none
// ==/UserScript==

(function() {
    'use strict';

    let video;
    __reactTransmitPacket.layout[window.location.pathname].contentBlocks.forEach((block) => {
        if(block.type === 'video') {
            video = block.content.items[0];
        }
    });

    let cookies = document.cookie.split(';').map(function(x) {
        return x.trim().split(/(=)/);
    }).reduce(function(a, b) {
        a[b[0]] = a[b[0]] ? a[b[0]] + ', ' + b.slice(2).join('') :
        b.slice(2).join('');
        return a;
    }, {});
    let token;
    for(let i in cookies) {
        if(cookies[i].substr(0, 1) == '%') {
            let temp = JSON.parse(decodeURIComponent(decodeURIComponent(cookies[i])));
            let longest = 0;
            for(let j in temp) {
                if(temp[j].length > longest) {
                    token = temp[j];
                    longest = temp[j].length;
                }
            }
            break;
        }
    }

    let style = document.createElement('style');
    style.innerHTML = '#react-tooltip-lite-instace-3, #react-tooltip-lite-instace-4, #react-tooltip-lite-instace-5 { display:none; }';
    document.head.appendChild(style);

    fetch('https://api.discovery.com/v1/streaming/video/' + video.id, {
        headers: {
            'authorization': 'Bearer ' + token,
        },
    }).then(function(result) {
        return result.json();
    }).then(function(json) {
        document.body.innerHTML = "'S" + ('0' + video.season.number).slice(-2) + 'E' + ('0' + video.episodeNumber).slice(-2) + ' ' + video.name.replace(/'/g, '') + "' => '" + json.streamUrl + "',";
    });
})();

Note that this just sends you to an m3u8 which contains links to the other m3u8s of different formats. You'll have to visit the linked file in the browser and figure out which format you want to download. I paste this line into another (non-browser-based) script that does it in batches. The TL;DR of that script is to grab the m3u8 link for the resolution you want and pass that to ffmpeg:

ffmpeg -i "http://www.example.com/1920x1080.m3u8" -acodec copy -bsf:a aac_adtstoasc -vcodec copy "filename.mkv"
Allmight3 commented 6 years ago

@cookieguru Thank you! That worked perfectly and was easy to follow. I have successfully downloaded my show in 1080p. My family will enjoy this. I appreciate your time and effort.

Mr-Jake commented 6 years ago

@cookieguru

Thanks so much for working on this. The userscript you posted works with Greasemonkey and I am able to get the video.

I compiled youtube-dl with your discovery.py commit. But when I try to get a video from the Discovery site I get an error, both for free videos and videos that require a login cookie.

C:\youtube-dl\youtube-dl.exe "https://www.discovery.com/tv-shows/mythbusters/full-episodes/heads-will-roll" --cookies C:\youtube-dl\cookies.txt -F -v

[Discovery] heads-will-roll: Downloading webpage
Traceback (most recent call last):
  File "__main__.py", line 19, in <module>
  File "youtube_dl\__init__.pyo", line 465, in main
  File "youtube_dl\__init__.pyo", line 455, in _real_main
  File "youtube_dl\YoutubeDL.pyo", line 1988, in download
  File "youtube_dl\YoutubeDL.pyo", line 784, in extract_info
  File "youtube_dl\extractor\common.pyo", line 438, in extract
  File "youtube_dl\extractor\discovery.pyo", line 64, in _real_extract
AttributeError: 'module' object has no attribute 'parse'

I compiled it multiple times to make sure I didn't make a mistake, but still no luck. I will wait for the commit to be merged; perhaps the precompiled youtube-dl will work for me then.

Until then I will use the userscript. Thanks again.

cookieguru commented 6 years ago

@Mr-Jake That indicates that something from urllib is missing. I developed against 3.6.4. Which version did you compile against?
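If it helps, here is a minimal sketch of the Python-2-compatible way to do the double URL-decode using youtube-dl's compat shims (the helper below is hypothetical, for illustration only, and not the code in my commit):

import json

# youtube-dl's compat layer maps this to urllib.unquote on Python 2 and
# urllib.parse.unquote on Python 3 (inside the extractor tree it would be
# "from ..compat import compat_urllib_parse_unquote").
from youtube_dl.compat import compat_urllib_parse_unquote


def decode_eos_cookie(raw_value):
    # Hypothetical helper: the eosAf/eosAn cookie is a doubly URL-encoded
    # JSON string, so decode it twice, then parse it.
    return json.loads(
        compat_urllib_parse_unquote(compat_urllib_parse_unquote(raw_value)))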

Mr-Jake commented 6 years ago

@cookieguru I compiled with 2.7.12.

The reason I use an older version is that I had a conflict getting py2exe working with 3.x. I didn't think it would be an issue since the youtube-dl documentation says 2.6, 2.7, or 3.2+ can be used.

Nii-90 commented 6 years ago

Semi-related question: does youtube-dl generate the requisite json that gets output by the --write-info-json option, or is that json info transmitted as-is by the player interface?

I ask because, while the browser userscript @cookieguru posted works swimmingly to get the m3u8 link, it's obviously missing both the metadata (which can be reconstructed via the page dump, thankfully) and the link to the SCC and XML/TTML subtitles (which can't, unfortunately; those get served by a completely different URL). If the contents of the file output by --write-info-json are transmitted by the website, all the right data is there and the subtitles would still be grabbable with only minimal tweaking to the userscript, right?

cookieguru commented 6 years ago

@Mr-Jake I just pushed a new commit that should work with 2.6+. Could you try it again? It seems to (still) work OK for free videos, but I'm seeing some HTTP 403 errors when I log in with ap-mso.

@Nii-90 The video metadata comes from the page itself, that is, the URL that you pass to youtube-dl to initiate the download. The subtitles come from the stream metadata; IIRC they will be near the top of the m3u8 file of your chosen format.

Nii-90 commented 6 years ago

They aren't. Neither the 6 KB preplay playlist nor the large segment playlist for a particular resolution has the link to the VTT or XML/TTML file (my thinking it was SCC came from confusing Science Channel with Fox, which also uses Uplynk and needs similar script manipulation to restore the chapter marks). grep didn't find it, I couldn't see it when checking visually in a text editor, and the fusionddmcdn domain the subtitles come from does not appear in the m3u8. The m3u8 only has the Uplynk URLs the video data is served from.

Using --write-info-json on one of the free videos on Science Channel, and then parsing the result (the actual URLs redacted here for paranoia):

$ sed 's/", "/",\n"/g' "HTUW - S06E02.info.json" | grep vtt
"subtitles": {"en": [{"url": "[VTT URL]",
"ext": "vtt"}, {"url": "[XML/TTML URL]",

Running a grep for fusionddmcdn (or vtt or ttml) on either the preplay or segment/resolution-specific m3u8 yields nothing.

Mr-Jake commented 6 years ago

@cookieguru Compiled without error with 2.7. Works with free videos.

But when I include --cookies for a login video, I get:

[Discovery] heads-will-roll: Downloading webpage
ERROR: An extractor error has occurred. (caused by KeyError(u'access_token',)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type  youtube-dl -U  to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.
Traceback (most recent call last):
  File "youtube_dl\extractor\common.pyo", line 438, in extract
  File "youtube_dl\extractor\discovery.pyo", line 65, in _real_extract
KeyError: u'access_token'
Traceback (most recent call last):
  File "youtube_dl\YoutubeDL.pyo", line 784, in extract_info
  File "youtube_dl\extractor\common.pyo", line 451, in extract
ExtractorError: An extractor error has occurred. (caused by KeyError(u'access_token',)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type  youtube-dl -U  to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.

EDIT: In your commit description I see you mentioned eosAf and eosAn. Not sure exactly what that is, but when I looked at my cookie file I have eosAd and eosAf.

cookieguru commented 6 years ago

@Nii-90 According to discoverygo.py#L69 that's where they come from. I don't use subs so I can't speak to when that last worked. Maybe things have changed since the switchover to Uplynk and/or the switch to Oauth for getting the stream URLs.

If you paste the six lines starting with (and including) let video into your browser's console, and then run a line that's just video, you will get an object that you can examine for the links to the subs. That object encapsulates everything the webpage knows about the video. I've never known chapter markers to work on videos ripped from Discovery, not even back in the Akamai days.


@Mr-Jake The point of the commit was to eliminate the need for --cookies. All the necessary information to get the stream URL is sent with the initial page. eosAf and eosAn are the cookies that contain the authentication token needed to get the stream URLs. I don't think I've ever seen both at the same time though, so I may have to revise my code. Whichever one is longer is going to be the cookie that contains the token. Unlike my userscript, the code I committed checks eosAn first, and if that cookie exists then it won't even bother to check eosAf. But if both are defined and the token is in eosAf, it's going to fail, and that's on me. I'll have to improve that.
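Roughly, the fallback I have in mind looks like this (an illustrative sketch only, not the committed code; it assumes the cookie values have already been double-decoded and JSON-parsed into dicts):

def pick_access_token(decoded_cookies):
    # decoded_cookies: dict mapping cookie name -> already double-URL-decoded
    # and JSON-parsed payload (a dict).
    tokens = []
    for name in ('eosAf', 'eosAn'):
        payload = decoded_cookies.get(name) or {}
        # The token key has been seen as both 'access_token' and 'a'.
        token = payload.get('access_token') or payload.get('a')
        if token:
            tokens.append(token)
    # Whichever cookie carries the longer value is the real bearer token.
    return max(tokens, key=len) if tokens else None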

Regardless though I can't get this to work on authenticated videos. I think what is happening is that it's not logging in before getting the token.


If anyone can point me in the direction of an extractor that won't even run -F without logging in, that will help. I'll make the changes next time I have some free time.

Nii-90 commented 6 years ago

youtube-dl can get the subs from the free videos, so I think it's just that the authentication is getting in the way. Speaking of, shouldn't discovery.py (or discoverygo.py) be importing the adobepass module to streamline handling the auth stuff? I didn't think ap-mso/ap-username/ap-password parameters would work for a particular site without the extractor for that site using AdobePassIE.
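For reference, the usual wiring looks roughly like the sketch below, based on how other TV-everywhere extractors hook into the adobepass module; the _VALID_URL pattern, requestor_id, and resource fields are placeholders I made up, not verified values for Discovery:

# Rough sketch only; placeholders would need to be confirmed for Discovery.
from youtube_dl.extractor.adobepass import AdobePassIE  # in-tree: from .adobepass import AdobePassIE


class DiscoveryAuthSketchIE(AdobePassIE):
    _VALID_URL = r'https?://(?:www\.)?discovery\.com/tv-shows/[^/]+/full-episodes/(?P<id>[^/?#&]+)'  # placeholder

    def _real_extract(self, url):
        display_id = self._match_id(url)
        webpage = self._download_webpage(url, display_id)
        # ...locate the video metadata in the page as the current extractor does...
        requestor_id = 'DSCP'  # placeholder requestor ID
        resource = self._get_mvpd_resource(
            requestor_id, 'video title placeholder', display_id, 'PG')
        auth_token = self._extract_mvpd_auth(
            url, display_id, requestor_id, resource)
        # auth_token would then be used when calling the streaming API
        # instead of (or alongside) the anonymous/cookie-derived token.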

I've never known chapter markers to work on videos ripped from Discovery; even back in the Akamai days.

The chapter marks for Uplynk-based sites don't actually exist in a way that youtube-dl is set up to parse, but they can be re-derived from scratch by parsing the m3u8. Every single time #UPLYNK-SEGMENT or #EXT-X-DISCONTINUITY appears in the resolution-specific m3u8 file, that's a break in the video stream, usually for the insertion of advertisements, which falls on the same boundaries as the natural chapter segments. I simply whipped up a bash script that automates splitting the big m3u8 apart into child m3u8s; I then download the individual segments using a for loop, and then mkvtoolnix can generate chapters at append boundaries (for speed/size purposes I only append the audio track back together in mkvtoolnix, and then dump the chapter info from it using ffmpeg).
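A minimal sketch of that boundary detection (my own illustration in Python, not the actual bash script; it assumes the playlist's #EXTINF durations are accurate):

import re

def chapter_start_times(m3u8_path):
    # Treat every #UPLYNK-SEGMENT / #EXT-X-DISCONTINUITY marker as a chapter
    # boundary and sum the #EXTINF segment durations in between to get the
    # chapter start times (in seconds).
    starts, elapsed = [0.0], 0.0
    with open(m3u8_path) as f:
        for line in f:
            line = line.strip()
            if line.startswith(('#UPLYNK-SEGMENT', '#EXT-X-DISCONTINUITY')):
                if elapsed and starts[-1] != elapsed:
                    starts.append(elapsed)
            elif line.startswith('#EXTINF:'):
                elapsed += float(re.match(r'#EXTINF:([\d.]+)', line).group(1))
    return starts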

The regular metadata and the chapter info can then be merged into a single ffmetadata file and used when the individual segments get concatenated by ffmpeg (in two steps, as opposed to youtube-dl taking three steps to do the same things*).

*Youtube-DL currently: 1) download and concatenate in one step 2) fix the AAC stream 3) add the metadata

vs.

1) download the segments and fix the AAC streams in each segment at the same time 2) concatenate and add metadata at the same time for the final output.

cookieguru commented 6 years ago

@Nii-90 This is just what I was looking for. I thought youtube-dl would automatically do the login stuff when the various ap switches were passed. If you have some additions to my PR to make this happen, I'm all ears; otherwise I'll look into it when I have some free time in the next few days.

halolordkiller3 commented 6 years ago

Has there been any update on this? I too am passing cookies.txt, but it just complains with "you should use --cookies". Thanks

cookieguru commented 6 years ago

@halolordkiller3 Cookies won't work as they aren't used to grab videos any more. #15455 still needs the adobepass module integrated into it.

halolordkiller3 commented 6 years ago

@cookieguru so just a matter of waiting for the main developer to add this?

cookieguru commented 6 years ago

@halolordkiller3 That's not how open source works

StevenDTX commented 6 years ago

FYI... I am no longer getting 1080p using @cookieguru's Tampermonkey script.

cookieguru commented 6 years ago

@StevenDTX The same videos you used to get 1080p or just new videos? They haven't been uploading everything in 1080p in the last few months.

StevenDTX commented 6 years ago

That's a good question. I will go have a look. Everything I have downloaded using your script has been in 1080p, until this week.

StevenDTX commented 6 years ago

OK... that's weird. I tried downloading last week's Gold Rush, which I already have in 1080p, and only the 720p link came up. I went back to this week's episode and the 1080p link popped up.

For reference:
h.m3u8 = 720p
i.m3u8 = 1080p

dare2 commented 6 years ago

Can anyone help out a non-coder with how to get Science/Discovery videos to work with authentication? I'm running version 2018.03.03. I get that --cookies no longer works, but what do I need to do?

Sorry in advance for my helplessness.

cookieguru commented 6 years ago

@dare2 Scroll up

dare2 commented 6 years ago

Sorry, I'm not seeing anything understandable to me above. I knew how to use the --cookies and --ap-mso methods, but neither works for me now, so what else am I missing?

dare2 commented 6 years ago

Ok, I'm getting that I would need to install Tampermonkey or Greasemonkey in my browser and do the above steps manually. I really don't want to add another extension to my browser at this point. Hopefully the logic will get incorporated into youtube-dl (which is the purpose of this ticket) at some point.

Nii-90 commented 6 years ago

I could never get the script to work in Firefox with any of the *monkey addons, but it worked fine in Chrome. So either have two web browsers installed so the one used for the script doesn't touch your main one, or maybe set up separate profiles under the same browser so that it's contained away from the rest of your addons.

The basic answer is that no form of authentication currently works for Discovery/Science Channel, except for signing in with your web browser and then using the *monkey script tactic. It cannot use AdobePass (so --ap-mso, etc., does nothing for those sites, and never could before, either) and the --cookies method is broken (not to mention it was inherently unsustainable and downright annoying for end users to try and perform). The only sensible way forward is for AdobePass support to be added into the Discovery extractor so that --ap-mso and friends can finally be used for them.
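(Once that support lands, the invocation should presumably be the standard AdobePass form used by other TV-everywhere extractors, i.e. youtube-dl --ap-mso <provider> --ap-username <username> --ap-password <password> <episode URL>, with youtube-dl --ap-list-mso to look up the provider code.)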

ghost commented 6 years ago

I need some help. I downloaded Tampermonkey.

Tampermonkey says that this script is enabled, but when I go to investigationdiscovery.com Tampermonkey says no script is running.

I'm not too familiar with coding. I would appreciate any help. I would like to download some investigationdiscovery episodes. I have a subscription. I used to be able to use --cookies but not anymore.

cookieguru commented 6 years ago

@captbanana You need to add another @match line for that domain.

ghost commented 6 years ago

@cookieguru I removed the code. @match?

ghost commented 6 years ago

@cookieguru I'm trying to figure out (assuming I copied the code correctly into Tampermonkey) how to incorporate Tampermonkey with youtube-dl.

cookieguru commented 6 years ago

@captbanana Look at the @match line in the userscript. You need to add another one.

I'm trying to figure out ... how to incorporate Tampermonkey with Youtube-dl

It doesn't work like that. Go back and read the post where I explained how to use the userscript.

ghost commented 6 years ago

@cookieguru Thank you. I added @match and everything seems to work. However, when I start downloading I get this message:

0.2% of ~291.78MiB at Unknown speed ETA Unknown ETA
ERROR: unable to download video data: HTTP Error 403: Forbidden

I added --cookies too and I still get the same message.

cookieguru commented 6 years ago

@captbanana As has been mentioned several times in this issue, --cookies is broken. Do not use it. Like I said already, I use ffmpeg to download videos; I can't speak to any other method.

ghost commented 6 years ago

@cookieguru I was looking at your post about downloading using ffmpeg -i. I have this link: https://content.uplynk.com/40210e12696c4fa9b40509ede02e6a52.m3u8?

From this link, I can't figure out where to choose which quality I want, so I can tell ffmpeg which quality to download. In Chrome do I right-click and Inspect? Can you elaborate, please?

tindivall commented 6 years ago

@cookieguru Sorry you keep getting bugged with this, but you seem to be the one in the know on it all. I am a newb as far as coding goes, so I am just trying to follow along with the stuff posted above.

I am trying to get season 13 of Deadliest Catch... I downloaded the Tampermonkey extension, created a new script, copied your above code into it, and saved.

When I visit the episode (after signing in with my provider) I get a link:

https://content.uplynk.com/5a175ead08174e059b67f044bbe873bf.m3u8?tc=1&exp=1520970580&rn=497751441&ct=...........2df6d9cd3c06782c953efd0fb413e5e97d3bcf77465d28ff7c72d13f48f7b9f1

I am not sure what to do from there. I copied the link and tried youtube-dl (link), which didn't work; I tried ffmpeg -i (link) and get an error. Obviously I am doing something wrong.

I know you mentioned something about the m3u8 containing links to other m3u8s, but I am not sure how to determine where/what the other m3u8s are in the link I provided (or if that matters).

I am just not sure where to go or what to do.

dare2 commented 6 years ago

I finally installed tampermonkey and got it to work, though with a bit of sleuthing...

The script generated a link for me, but you have to be careful to grab just the characters between the single quotation marks. I inadvertently grabbed the last quotation mark and a comma that followed.

tindivall, you paste that link into your browser's address bar and the m3u8 file should automatically download (or you might get prompted to allow the download). Then you open that file with a text editor to get the link to the quality stream you want. That link is the one you would use in ffmpeg.
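If opening the file by hand gets tedious, a small standalone helper like this (my own sketch, not part of youtube-dl) will list each variant's resolution and URL from a downloaded master playlist:

import re
import sys

def list_variants(master_m3u8_path):
    # Print "RESOLUTION  URL" for every #EXT-X-STREAM-INF entry; the URI is
    # the line immediately following the tag.
    with open(master_m3u8_path) as f:
        lines = [line.strip() for line in f if line.strip()]
    for i, line in enumerate(lines):
        if line.startswith('#EXT-X-STREAM-INF'):
            match = re.search(r'RESOLUTION=(\d+x\d+)', line)
            resolution = match.group(1) if match else 'unknown'
            url = lines[i + 1] if i + 1 < len(lines) else ''
            print('%s  %s' % (resolution, url))

if __name__ == '__main__':
    list_variants(sys.argv[1])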

ghost commented 6 years ago

@dare2 Thank you for explaining. I opened the m3u8 in Notepad (I want to download in 1080p). Here is what I see:

#UPLYNK-MEDIA0:192x108x15,baseline-11,2x48000
#EXT-X-STREAM-INF:PROGRAM-ID=1,RESOLUTION=1920x1080,BANDWIDTH=10712738,CODECS="mp4a.40.5,avc1.640028",FRAME-RATE=30.000,AUDIO="aac",AVERAGE-BANDWIDTH=5213460
https://content-ause1.uplynk.com/40210e12696c4fa9b40509ede02e6a52/i.m3u8

When I run (I'm using PowerShell):

./ffmpeg -i https://content-ause1.uplynk.com/40210e12696c4fa9b40509ede02e6a52/i.m3u8

I get the following error message:

ffmpeg version N-89832-g07a96b6251 Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 7.2.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libmfx --enable-amf --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth
libavutil 56. 7.100 / 56. 7.100
libavcodec 58. 9.100 / 58. 9.100
libavformat 58. 5.100 / 58. 5.100
libavdevice 58. 0.100 / 58. 0.100
libavfilter 7. 11.101 / 7. 11.101
libswscale 5. 0.101 / 5. 0.101
libswresample 3. 0.101 / 3. 0.101
libpostproc 55. 0.100 / 55. 0.100
[hls,applehttp @ 000001deb08a9500] Opening 'https://content-ause1.uplynk.com/check2?b=40210e12696c4fa9b40509ede02e6a52&v=40210e12696c4fa9b40509ede02e6a52&r=i' for reading
[https @ 000001deb097c1c0] HTTP error 403 Forbidden
Unable to open key file https://content-ause1.uplynk.com/check2?b=40210e12696c4fa9b40509ede02e6a52&v=40210e12696c4fa9b40509ede02e6a52&r=i
[hls,applehttp @ 000001deb08a9500] Opening 'crypto+https://stgec-ausw-tmp.uplynk.com/80C078/ausw/slices/402/e6cf0c55dac249f0a0f72e7c72e6f6cb/40210e12696c4fa9b40509ede02e6a52/I00000000.ts?x=0&si=0' for reading
[hls,applehttp @ 000001deb08a9500] Opening 'crypto+https://stgec-ausw-tmp.uplynk.com/80C078/ausw/slices/402/e6cf0c55dac249f0a0f72e7c72e6f6cb/40210e12696c4fa9b40509ede02e6a52/I00000001.ts?x=0&si=0' for reading
[hls,applehttp @ 000001deb08a9500] Error when loading first segment 'https://stgec-ausw-tmp.uplynk.com/80C078/ausw/slices/402/e6cf0c55dac249f0a0f72e7c72e6f6cb/40210e12696c4fa9b40509ede02e6a52/I00000000.ts?x=0&si=0'
https://content-ause1.uplynk.com/40210e12696c4fa9b40509ede02e6a52/i.m3u8: Invalid data found when processing input

tindivall commented 6 years ago

You have to use the full command line like @cookieguru posted above:

ffmpeg -i "https://link" -acodec copy -bsf:a aac_adtstoasc -vcodec copy "filename.mkv"

Where the link is the actual link and filename is what you want to call the file... say "Deadliest Catch - s13e01.mp4"

cookieguru commented 6 years ago

Yup. And that's why I have the userscript outputting to the format with the season/episode name. I dump that into another script that batch downloads episodes.

ghost commented 6 years ago

@tindivall @cookieguru

./ffmpeg -i "https://content-ause1.uplynk.com/40210e12696c4fa9b40509ede02e6a52/i.m3u8" -acodec copy -bsf:a aac_adtstoasc -vcodec copy "seenoevil.mkv" ffmpeg version N-89832-g07a96b6251 Copyright (c) 2000-2018 the FFmpeg developers built with gcc 7.2.0 (GCC) configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-bzlib --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libmfx --enable-amf --enable-cuda --enable-cuvid --enable-d3d11va --enable-nvenc --enable-dxva2 --enable-avisynth libavutil 56. 7.100 / 56. 7.100 libavcodec 58. 9.100 / 58. 9.100 libavformat 58. 5.100 / 58. 5.100 libavdevice 58. 0.100 / 58. 0.100 libavfilter 7. 11.101 / 7. 11.101 libswscale 5. 0.101 / 5. 0.101 libswresample 3. 0.101 / 3. 0.101 libpostproc 55. 0.100 / 55. 0.100 [hls,applehttp @ 00000290504699c0] Opening 'https://content-ause1.uplynk.com/check2?b=40210e12696c4fa9b40509ede02e6a52&v=40210e12696c4fa9b40509ede02e6a52&r=i' for reading [https @ 0000029050c585c0] HTTP error 403 Forbidden Unable to open key file https://content-ause1.uplynk.com/check2?b=40210e12696c4fa9b40509ede02e6a52&v=40210e12696c4fa9b40509ede02e6a52&r=i [hls,applehttp @ 00000290504699c0] Opening 'crypto+https://stgec-ausw-tmp.uplynk.com/80C078/ausw/slices/402/e6cf0c55dac249f0a0f72e7c72e6f6cb/40210e12696c4fa9b40509ede02e6a52/I00000000.ts?x=0&si=0' for reading [hls,applehttp @ 00000290504699c0] Opening 'crypto+https://stgec-ausw-tmp.uplynk.com/80C078/ausw/slices/402/e6cf0c55dac249f0a0f72e7c72e6f6cb/40210e12696c4fa9b40509ede02e6a52/I00000001.ts?x=0&si=0' for reading[hls,applehttp @ 00000290504699c0] Error when loading first segment 'https://stgec-ausw-tmp.uplynk.com/80C078/ausw/slices/402/e6cf0c55dac249f0a0f72e7c72e6f6cb/40210e12696c4fa9b40509ede02e6a52/I00000000.ts?x=0&si=0' https://content-ause1.uplynk.com/40210e12696c4fa9b40509ede02e6a52/i.m3u8: Invalid data found when processing input

cookieguru commented 6 years ago

@captbanana Looks right. The segments are restricted to your IP address so I can't tell what's actually getting downloaded. Which might be what the 403 error is; are you downloading from the same IP you used to get the m3u8 file? Are you attempting the download very shortly after grabbing the m3u8 link?


This issue is all about fixing youtube-dl to work with the non-"GO" versions of the Discovery websites, so I'd like to get back on track: fixing youtube-dl, not these one-off workaround scripts.

Where we're currently at: I rewrote youtube-dl to support looking up video URLs from the new endpoint, but the login doesn't happen. The adobepass module needs to be imported and instantiated, but I haven't had time to work on this. The userscript support definitely isn't helping me work on this either.

tindivall commented 6 years ago

@captbanana are you copying the entire link? It looks like you are stopping after the i.m3u8...

It should be the entire link from the beginning of http all the way until the next file is listed