So some kind of p2p database. I wanted to do something like this with Jackett: when you get a query result, it's sent to your local cache and a global cache. But that would require some kind of authentication so the global cache isn't spammed with garbage.
@iPromKnight please correct me if my prom -> Emily translation of what you wrote is incorrect:
A user on Stremio requests something. We don't have any sources in the database. The new service you are proposing adds this to a queue to be found, and sends a new scraper out to find what is being requested.
So essentially recreating a very small version of Jackett as a fallback for the lack of years of data a user would have from the original Torrentio.
Exactly that Emily ^^ - spot on.
I'm not saying this will be distributed in any way - it will still be your own database instance etc - but we'd be able to fall back onto all of the sources other providers use to populate the db, without the scraper having to wait until something hits the RSS feeds.
Due to the limitations in Stremio - i.e. requests are just GET requests, with no ability to post back on a webhook while the addon is searching etc - we could take the same approach Torrentio does when a file is being downloaded to RealDebrid: show a video file or something stating no sources were found, but that an ingestion attempt will occur?
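For illustration, here's a minimal sketch of that placeholder response using stremio-addon-sdk; the manifest values, the `lookupSources`/`queueForIngestion` helpers, and the mp4 URL are all hypothetical:

```ts
import { addonBuilder } from "stremio-addon-sdk";

// Hypothetical helpers, stubbed for the sketch:
declare function lookupSources(type: string, id: string): Promise<{ url: string; title: string }[]>;
declare function queueForIngestion(type: string, id: string): Promise<void>;

const builder = new addonBuilder({
  id: "org.example.fallback",   // hypothetical manifest
  version: "0.0.1",
  name: "Fallback sketch",
  resources: ["stream"],
  types: ["movie", "series"],
  catalogs: [],
});

builder.defineStreamHandler(async ({ type, id }) => {
  const streams = await lookupSources(type, id);
  if (streams.length > 0) return { streams };

  // Nothing in the db yet: queue an ingestion attempt and show the user a
  // placeholder video instead of an empty result.
  await queueForIngestion(type, id);
  return {
    streams: [{
      url: "https://addon.example/static/no-sources.mp4", // hypothetical static asset
      title: "No sources found yet - an ingestion attempt has been queued",
    }],
  };
});
```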
Ok, thanks for the clarification. I like the fallback approach, but we might as well have Stremio display "addon still loading" until the addon can provide a response.
In that case I'd suggest we have another service with responsibility for this, which the addon can call during the search promise.
It would perform lookups directly against the Syncler feeds, return the results to the addon to be fed to Stremio, and also publish them out to be stored in the db.
This way the addon keeps a cleaner responsibility: purely aggregation and filtering.
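A minimal sketch of what that call could look like from the addon side, racing the lookup against a timeout so a slow scrape never stalls Stremio; the service endpoint, response shape, and 5-second default are assumptions:

```ts
// Hypothetical stream shape returned by the lookup service:
interface Stream { url: string; title: string; }

async function fallbackLookup(
  imdbId: string,
  season?: number,
  episode?: number,
  timeoutMs = 5000,
): Promise<Stream[]> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const url = new URL("http://lookup-service/search"); // hypothetical internal endpoint
    url.searchParams.set("imdbId", imdbId);
    if (season !== undefined) url.searchParams.set("season", String(season));
    if (episode !== undefined) url.searchParams.set("episode", String(episode));

    const res = await fetch(url, { signal: controller.signal });
    if (!res.ok) return [];
    // The service returns results for this response *and* publishes them to
    // the broker, so the addon itself stays pure aggregation and filtering.
    return (await res.json()) as Stream[];
  } catch {
    return []; // timed out or unreachable: fall back to db-only results
  } finally {
    clearTimeout(timer);
  }
}
```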
All the scrapers do is load a specific page with a set query for things like title, episode, season etc, so page loads and scrapes for them should be pretty quick.
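As a rough illustration only (the URL template and selectors are made up, and every real provider would need its own), a single-provider scrape is just a templated query plus a parse of the results page:

```ts
import axios from "axios";
import * as cheerio from "cheerio";

interface ScrapedTorrent { title: string; magnet: string; }

async function scrapeProvider(title: string, season?: number, episode?: number): Promise<ScrapedTorrent[]> {
  // Build the set query, e.g. "Show Name S01E02" for episodes.
  const query = season !== undefined && episode !== undefined
    ? `${title} S${String(season).padStart(2, "0")}E${String(episode).padStart(2, "0")}`
    : title;

  const { data: html } = await axios.get(
    `https://provider.example/search?q=${encodeURIComponent(query)}` // hypothetical template
  );

  // Pull magnet links out of the results table (selectors are hypothetical).
  const $ = cheerio.load(html);
  return $("tr.result")
    .map((_, row) => ({
      title: $(row).find(".name").text().trim(),
      magnet: $(row).find("a[href^='magnet:']").attr("href") ?? "",
    }))
    .get()
    .filter((t) => t.magnet);
}
```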
Thinking about it, we'd have to introduce a couple of extra manifest options: a configurable timeout to wait for the results, an option to disable the fallback lookups, etc.
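Those could be plain entries in the addon's user-config section of the manifest; a sketch, with the key names, types, and defaults as assumptions rather than an existing schema:

```ts
// Sketch: two extra user-configurable options for the fallback behaviour.
const config = [
  {
    key: "fallbackTimeoutSeconds",
    type: "number",
    default: "5",
    title: "Seconds to wait for fallback lookups before responding",
  },
  {
    key: "disableFallback",
    type: "checkbox",
    title: "Disable fallback lookups (serve database results only)",
  },
];
```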
Emily is handling this.
I've an idea for a new service, or an expansion to the producer service. We'd publish each and every incoming request for data made against the addon - with its incoming imdbId and extracted metadata (title, season, episode, year etc) - to the broker.
These messages would go on a new queue.
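The publish side could be a few lines against the broker; a sketch assuming RabbitMQ via amqplib, with the queue name and message shape as assumptions:

```ts
import amqp from "amqplib";

// Hypothetical message shape for an incoming addon search:
interface SearchRequest {
  imdbId: string;
  title: string;
  year?: number;
  season?: number;
  episode?: number;
}

const SEARCH_QUEUE = "addon.search.requests"; // hypothetical queue name

export async function publishSearchRequest(req: SearchRequest): Promise<void> {
  // For the sketch we open a connection per publish; a real service would
  // keep one connection and channel alive.
  const conn = await amqp.connect("amqp://localhost"); // broker URL: assumption
  const ch = await conn.createChannel();
  await ch.assertQueue(SEARCH_QUEUE, { durable: true });
  ch.sendToQueue(SEARCH_QUEUE, Buffer.from(JSON.stringify(req)), { persistent: true });
  await ch.close();
  await conn.close();
}
```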
The producer would listen to this queue, and then utilise the Syncler / Wako / Weyd scrapers to perform searches for data. Anything found would be pushed as ingestions into the consumer queue, essentially expanding the collection every time someone searches for something. We'd be able to skip the augmentation / scraping based on the last-updated datetime of an item's imdbId in the database etc.
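And the producer's consume loop with that last-updated filter might look something like this; the queue names, the 24-hour staleness window, and the `getLastUpdated`/`runScrapers` helpers are all hypothetical:

```ts
import amqp from "amqplib";

// Hypothetical helpers, stubbed for the sketch:
declare function getLastUpdated(imdbId: string): Promise<Date | null>;
declare function runScrapers(req: { imdbId: string; title: string }): Promise<unknown[]>;

const SEARCH_QUEUE = "addon.search.requests"; // hypothetical, matches the publish side
const INGEST_QUEUE = "consumer.ingestions";   // hypothetical existing consumer queue
const STALE_AFTER_MS = 24 * 60 * 60 * 1000;   // assumption: re-scrape after 24h

export async function startSearchConsumer(): Promise<void> {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertQueue(SEARCH_QUEUE, { durable: true });
  await ch.assertQueue(INGEST_QUEUE, { durable: true });

  await ch.consume(SEARCH_QUEUE, async (msg) => {
    if (!msg) return;
    const req = JSON.parse(msg.content.toString());

    // Skip the scrape entirely when this imdbId was refreshed recently.
    const lastUpdated = await getLastUpdated(req.imdbId);
    if (lastUpdated && Date.now() - lastUpdated.getTime() < STALE_AFTER_MS) {
      ch.ack(msg);
      return;
    }

    // Fan out to the Syncler / Wako / Weyd scrapers, then hand anything
    // found to the existing consumer queue as ingestions.
    const found = await runScrapers(req);
    for (const torrent of found) {
      ch.sendToQueue(INGEST_QUEUE, Buffer.from(JSON.stringify(torrent)), { persistent: true });
    }
    ch.ack(msg);
  });
}
```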
Putting something together that can handle the scraping of Weyd scrapers, for example, is pretty easy - I mean Helios has what we need in it already: https://github.com/wako-unofficial-addons/helios/blob/master/projects/plugin/src/plugin/queries/torrents/torrents-from-provider-base.query.ts
Syncler providers like JakedUp could be used - like this: express-hybrid.json
With this new service, the collection of cached information would grow organically.