Closed by BobbyWibowo 1 month ago
There is detection for matching URLs and finding manga pages. But the WordPress plugins are more problematic; I can't possibly list all sites, since new ones pop up all the time.
Madara URLs are a bit more standard for now, so the pattern is less generic: `/https?:\/\/.+\/(manga|series|manhua|comic|ch|novel|webtoon)\/.+\/.+/`
MangaStream, however, does not enforce a standard URL, so there are many patterns it must match. I could narrow it for sure, but then I might not match unlisted sites that use this WordPress plugin.
I'm open to suggestions; below are some URL examples:
realmoasis is the only remaining true outlier now, that I know of, but historically there were more.
The current candidate for a better regex is `/https?:\/\/[^\/]*(scans|comic|realmoasis|hivetoon)[^\/]*\/.+/`
Hmm, personally I think it's best to just guide users to their respective userscript manager's menu to manually include the domains of uncaught WordPress-based sites. At least in addition to using the candidate replacement you mentioned.
I'm not sure about other managers, but in Tampermonkey it appears fairly trivial for users to include more domains without editing the script itself. And since it does not edit the source code, auto-update will continue working as usual.
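For reference, and assuming Tampermonkey's current UI (per-script Settings tab, Includes/Excludes section, "User includes" field), a user would add either a glob pattern or a regex there, along the lines of (hypothetical domain):

```
https://some-unlisted-site.example/*
/https?:\/\/some-unlisted-site\.example\/.+/
```

These are stored separately from the script's own metadata, which is why they survive auto-updates.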
Unfortunately, there's no ability to disable a specific `@include` tag, short of turning off "Keep original" and copy-pasting everything other than the aforementioned tag. But doing so would also make me miss out on any new domains, unless I manually edit them in again whenever that happens.
I'd also imagine that using userscripts is by itself already a slightly advanced thing to do, so the average user probably won't find manually adding more domains too complicated?
It should be fine with the new regex, for now. Be careful not to underestimate how little people know/understand; I'm reminded of it every day at work, at a tech company.
Hi there, just wondering, was it intentional to add this certain `@include` tag? At least in Tampermonkey on Firefox, it appears to be matching literally all URLs, causing the script to always run. It's basically because the `(chapter\/)?` section has a `?` token, making it an optional match, so the pattern passes all http/s URLs. The script itself does seemingly have its own detection to check whether it can run on the sites, but it feels kinda "eh..." knowing it'll always run on pretty much anything.