4v3ngR / pluto_tv_scraper

Generate an m3u8 playlist and an XML EPG for Pluto TV channels
https://www.npmjs.com/package/plutotv-scraper
MIT License

Using the same generated playlist on multiple devices at the same time causes stream stops/errors #11

Closed wazerstar closed 8 months ago

wazerstar commented 8 months ago

I found ClientID, DeviceID, and SessionID. So far I've only tried changing ClientID for each playlist, which seems to have resolved the issue.

This was never an issue with the list from i.mjh.nz.

How can we fix this going forward, when auto-generating a list that I want to share across multiple devices playing Pluto?

4v3ngR commented 8 months ago

The issue is that Pluto TV uses the client ID in its backend, so each client needs a unique ID. What I do is generate the stream URLs with a known client ID, and then replace it with a random ID when the playlist is served from my web server.

I think i.mjh.nz was generating a random clientID when it was doing a 302 redirect to the actual stream URL.

I've also done some tests; the scenarios that worked for me involved either a different clientID or a different X-Forwarded-For header.

What didn't work was the same clientID combined with the same (or no) X-Forwarded-For.

It could also be that the i.mjh.nz streams were for an older API, whereas this script uses the latest API and gets the latest streams 🤷‍♂️

Unfortunately, since this requires the stream URLs to be changed at the moment the player requests them, there's nothing that can be done in the script at playlist-generation time.

4v3ngR commented 8 months ago

I thought I would give you an update. Although we cannot have unique client IDs baked into the playlist, and it would be the responsibility of the server serving the playlist, or the client consuming it, to ensure uniqueness, I've created a "server mode" for the script that does just that. When a --port <num> command-line option is given, the script will listen on the provided port and serve up the playlist and XML file (based on file name). When the playlist is requested, a new clientID is generated from the caller's IP address and injected into the stream URLs.

I'll also mention the --refresh <seconds> option, where the script acts as both the server and the scraper (updating the playlist and XML at the specified interval). The plan is to allow the script to run as a service, either in a Docker image or as a self-hosted server, so users can configure their clients to pull the playlist and XML files from the local server.

wazerstar commented 8 months ago

This sounds really neat, great progress, and I'll definitely use it. I'm also hoping for an EPG URL option where we can host our own path and have the script respect it :P