alphapapa opened this issue 10 years ago
We stopped using the API because there's a limit on the number of videos you can get, and private playlists are not supported. I personally think it would make the code more complicated, but I understand that it's slow.
Hm, I understand. Would it be possible to use the API until the limit is reached? I don't know what the limit is, but from my (very limited) testing, it seems like this would still be a significant time-saver for most users, most of the time. As it is now, every user pays the penalty every time. It's bad enough that I added playlist caching to my script, but whenever a playlist changes, refreshing the cache means waiting minutes all over again.
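To make that concrete, here's a minimal sketch, in Python, of the "use the API until the limit is reached" idea against the gdata feed. The `playlist_via_api` helper and the 50-entry page size are my assumptions, not youtube-dl code; the point is only that an empty page arriving before the reported total is the signal to fall back to the current code path:

```python
import json
import urllib.request

# start-index/max-results paging as documented for the (now retired) gdata v2 feed
GDATA = ("http://gdata.youtube.com/feeds/api/playlists/%s"
         "?alt=json&start-index=%d&max-results=50")

def playlist_via_api(playlist_id):
    """Page through the gdata feed 50 entries at a time.

    Returns the collected entries, or None if the feed stops delivering
    before the reported total -- the cue to fall back to the existing
    one-request-per-video code path.
    """
    entries, index = [], 1  # start-index is 1-based in gdata
    while True:
        with urllib.request.urlopen(GDATA % (playlist_id, index)) as resp:
            feed = json.load(resp)['feed']
        batch = feed.get('entry', [])
        entries.extend(batch)
        total = feed['openSearch$totalResults']['$t']
        if index - 1 + len(batch) >= total:
            return entries   # the API delivered everything it reports
        if not batch:
            return None      # hit the cap early: caller should fall back
        index += len(batch)
```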
Another possibility would be to scrape the HTML from /playlist?list=, which shouldn't have any limit. It would admittedly be messy and prone to breakage, but the code could fall back on the existing method if scraping failed, so it would still be a huge win in most cases.
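A rough sketch of what that scraping fallback could look like; the `data-video-id` attribute is my assumption about YouTube's current playlist markup, and it's exactly the kind of thing that would break, so any failure here should drop back to the existing extractor:

```python
import re
import urllib.request

def scrape_playlist_ids(playlist_id):
    """Fallback scraper: pull 11-character video ids out of the raw
    playlist page HTML.  The markup is not a stable interface, so
    treat an empty result as a cue to use the slow method instead."""
    url = 'https://www.youtube.com/playlist?list=' + playlist_id
    html = urllib.request.urlopen(url).read().decode('utf-8', 'replace')
    ids = re.findall(r'data-video-id="([0-9A-Za-z_-]{11})"', html)
    seen, ordered = set(), []
    for vid in ids:          # dedupe while keeping playlist order
        if vid not in seen:
            seen.add(vid)
            ordered.append(vid)
    return ordered
```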
If I run:

```
youtube-dl --get-title https://www.youtube.com/playlist?list=PL71798B725200FA81
```

the titles are printed one at a time. But if I run:

```
curl http://gdata.youtube.com/feeds/api/playlists/PL71798B725200FA81?alt=json
```

I get the entire playlist and all of its metadata in one request.
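For illustration, here is a minimal Python sketch of parsing that one response; the `title.$t` and `media$group.yt$videoid.$t` paths follow the gdata v2 JSON layout as I understand it:

```python
import json
import urllib.request

FEED = 'http://gdata.youtube.com/feeds/api/playlists/PL71798B725200FA81?alt=json'

with urllib.request.urlopen(FEED) as resp:
    feed = json.load(resp)['feed']

# One round trip yields the title and video id of every entry.
for entry in feed.get('entry', []):
    print(entry['media$group']['yt$videoid']['$t'],
          entry['title']['$t'])
```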
I have a script that uses youtube-dl to download and play YouTube playlists in VLC, but because youtube-dl fetches playlist data one video at a time, the initial download of the playlist metadata takes a very long time, especially for large playlists (e.g. Yogscast Minecraft video playlists can take minutes just to list the titles and video IDs).
It would be good if youtube-dl used this public JSON interface to download the entire playlist's metadata in one request. It would literally save minutes of waiting.
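As a hedged sketch of how this might look inside youtube-dl: the class name below is made up, but `_match_id`, `_download_json`, `url_result`, and `playlist_result` are, as far as I can tell, the standard `InfoExtractor` helpers that existing extractors already use:

```python
from .common import InfoExtractor


class YoutubeGdataPlaylistIE(InfoExtractor):  # hypothetical extractor name
    _VALID_URL = r'https?://(?:www\.)?youtube\.com/playlist\?list=(?P<id>[\w-]+)'

    def _real_extract(self, url):
        playlist_id = self._match_id(url)
        # One request for the whole playlist, instead of one per video.
        feed = self._download_json(
            'http://gdata.youtube.com/feeds/api/playlists/%s?alt=json'
            % playlist_id, playlist_id)['feed']
        entries = [
            self.url_result(
                'https://www.youtube.com/watch?v=%s'
                % entry['media$group']['yt$videoid']['$t'], 'Youtube')
            for entry in feed.get('entry', [])]
        return self.playlist_result(
            entries, playlist_id, feed['title']['$t'])
```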