C9Glax / tranga

Docker container that monitors (manga) scanlation sites and downloads new chapters.
GNU General Public License v3.0

[Enhancement] Periodically Updated Database File for each Connector #71

Closed: dityb closed this issue 1 year ago

dityb commented 1 year ago

This is just a suggestion but one thing about FMD2 that I liked (apart from the myriad of websites it could scrape from) was that it would create a database file for each source with all the available manga titles. This allowed for really quick searching, and though FMD2 didn't implement it, it would allow the website to aggregate search results across different providers so you could potentially see which source has the most chapters updated. I realize it would be an undertaking and I'm not familiar at all with C# but I'd be willing to learn to help if it makes sense to do!

C9Glax commented 1 year ago

I used another manual client that also had this feature. The reason I don't do it this way is that I would only be able to parse titles efficiently; all other information would need to be loaded when searching anyway. Also, this way I can just use the website's search function.

Maybe at some point I will create a database with all titles, covers, etc. aggregated from all the sites. But right now it is simply easier to just load that information at runtime.

If you think search is too slow, you can always just copy & paste the link to one specific manga into the search box; that way only information for that one manga is loaded.

dityb commented 1 year ago

Thanks for the explanation! I didn't think search was too slow, the main thing was being able to see which providers have the most updated/recent chapters. For example, when certain mangas get official scanlations, they no longer post to Mangadex and the chapter then appears elsewhere. If there's another way to implement this feature, that's all I'd be really looking for.


C9Glax commented 1 year ago

Yea this is what I also have been thinking about: One search for all sites. Would not be that hard to implement. Soon™️

dityb commented 1 year ago

Yeah, correct me if I'm wrong, but I was imagining this could take place entirely on the website side (just send requests that loop through all the providers), without having to change the API much. The only change would be returning the most recent chapter number from the API.
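A rough TypeScript sketch of the aggregation idea being discussed. Everything here is hypothetical: the `ProviderResult` shape, the `latestChapter` field (which the tranga API does not return yet), and the provider names are illustrative only, not the real API.

```typescript
// Hypothetical shape of a per-connector search hit; the real tranga
// API response differs. latestChapter is the field this issue asks
// to add to the API return value.
interface ProviderResult {
  provider: string;
  title: string;
  latestChapter: number;
}

// For each title, keep the provider reporting the highest chapter
// number, so the website can show which source is most up to date.
function mostUpToDate(results: ProviderResult[]): Map<string, ProviderResult> {
  const best = new Map<string, ProviderResult>();
  for (const r of results) {
    const cur = best.get(r.title);
    if (cur === undefined || r.latestChapter > cur.latestChapter) {
      best.set(r.title, r);
    }
  }
  return best;
}

// Example: the same manga on two sources, one further ahead.
const merged = mostUpToDate([
  { provider: "MangaDex", title: "Example Manga", latestChapter: 42 },
  { provider: "OtherSite", title: "Example Manga", latestChapter: 45 },
]);
console.log(merged.get("Example Manga")?.provider); // "OtherSite"
```

This matches the use case above: when a title stops updating on MangaDex, the provider with the higher chapter count wins the comparison and surfaces first.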

C9Glax commented 1 year ago

I would rather write in C# than JavaScript, and it's a choice between looping through the API on the website or looping through the connectors in the API :) Like you said, I would have to include the most recent chapter number in the return.
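The "loop through the connectors" variant could be sketched as a single fan-out search. This is a hypothetical sketch in TypeScript for consistency with the block above (the actual tranga connectors are C# classes), and the `Connector` interface and result shape are invented for illustration:

```typescript
// Hypothetical connector interface mirroring the control flow discussed:
// one query, run against every connector, results tagged by source.
interface Connector {
  name: string;
  search(query: string): Promise<{ title: string; latestChapter: number }[]>;
}

// Query all connectors in parallel and flatten the results, so the
// frontend gets one list and can compare chapter numbers per source.
async function searchAll(
  connectors: Connector[],
  query: string
): Promise<{ connector: string; title: string; latestChapter: number }[]> {
  const perConnector = await Promise.all(
    connectors.map(async (c) =>
      (await c.search(query)).map((hit) => ({ connector: c.name, ...hit }))
    )
  );
  return perConnector.flat();
}
```

Running the searches in parallel rather than sequentially keeps the aggregated search roughly as slow as the slowest single site, instead of the sum of all of them.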