Closed - host505 closed this 5 years ago
Cartoonhd.py, https://cartoonhd.care/
I get an Error 502 Bad Gateway on that site?
Might be able to use cartoonhd.co
cartoonhd.co loads for me, no idea if it's compatible with the scraper.
Just took a quick glance at the .co website and the scraper. Looks like it's a match but needs a few tweaks, like the season/episode codes: they use /'s but need to be -'s. Also that .co site is the only one I can find lol, the rest all went down or are pushing an apk.
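For anyone following along, the /'s-to--'s tweak being described might look something like this. This is only a sketch of the idea: the function name, URL layout, and slug format here are my assumptions, not the actual LambdaScrapers code.

```python
# Hypothetical illustration of the URL tweak described above: the old site
# used slash-separated season/episode paths, while the .co mirror is assumed
# to use dash-separated ones. Names and URL shapes are illustrative only.
def episode_url(base, title, season, episode, dashed=True):
    """Build an episode page URL; dashed=True matches the assumed .co layout."""
    slug = title.lower().replace(' ', '-')
    if dashed:
        # e.g. /episode/charmed-2018-season-1-episode-2
        return '%s/episode/%s-season-%s-episode-%s' % (base, slug, season, episode)
    # old style: slash-separated season/episode path
    return '%s/show/%s/season/%s/episode/%s' % (base, slug, season, episode)

print(episode_url('https://cartoonhd.co', 'Charmed 2018', 1, 2))
# -> https://cartoonhd.co/episode/charmed-2018-season-1-episode-2
```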
Want me to slap it together? I disabled mine a few days ago and started loving cartoonwire.py.
Wow, they must just have changed it. When I submitted the pr it was working for me.
Is it just me, or are you guys getting MASSIVE duplicates in the results? I'm talking 3, 4, sometimes 5 copies of almost every source. I probably need to properly wipe and update things though; I'd want verification before I say this for sure! Using host505's master.
Using host505's master of LambdaScrapers (which contains this pull request) and fresh, updated installs of every other module/addon related to Exodus Redux in I-A-C's collection of repos. All cache cleared, providers cleared. Using the LambdaScrapers set, all providers enabled, all debrid providers enabled.
Tested with the new Charmed, S01E02 (but it does this for any show I tried): https://imgur.com/a/VdQOyHu
All these sources in your pics (darewatch, watchseries, xwatchseries, etc.) were there before my PR; I didn't put them in. The ones I did put in (you can see which ones from the commit), I checked beforehand whether they already existed as sources in the sources_lambdascrapers folder - yes, I checked through the code too. Edit: with both RD & PM enabled, you'll get some links from the same provider 3 times: one free, one resolved by RD, one resolved by PM. This doesn't mean the site gets hit 3 times.
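The "3 links, 1 site hit" behaviour can be sketched like this: the scraper returns one raw source per link, and the resolver layer fans each one out once per enabled debrid service plus the free copy. The function and field names below are illustrative only, not the actual LambdaScrapers API.

```python
# Sketch of why one scraped link appears multiple times when Real-Debrid
# and Premiumize are both enabled. The site is scraped once; the duplicates
# are created locally by fanning out over the enabled debrid services.
def fan_out(sources, debrid_services):
    results = []
    for src in sources:
        results.append(dict(src, debrid=''))       # the free (unresolved) link
        for svc in debrid_services:
            results.append(dict(src, debrid=svc))  # a debrid-resolved copy
    return results

raw = [{'provider': 'darewatch', 'url': 'http://example.invalid/ep'}]
print(len(fan_out(raw, ['Real-Debrid', 'Premiumize'])))  # 3 entries, 1 site hit
```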
I get lost in these pulls and whatnot at times, but I keep seeing my name pop up lol and wanted to point out that my name is only in the scrapers I cleaned up, to show info for when they were last checked. If ya test one, change my info out and add yours. It makes group projects a lil easier. Also watch what ya put in, some are just spare tires in case of a flat.
I tested the scrapers I put in, they were all working at that time. Thanks! Not sure I understand your first part; you're suggesting we "mark" the scrapers somehow when we check them and they're working?
Yup, pretty much lol. My name's not there for any credit, it's just there as a marker showing when the last status check was. The # history space is also something to update in case you make domain changes.
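The convention being discussed might look something like the header comments below. The exact field names and layout are my assumption, not the project's actual format; the point is just a human-readable last-checked marker plus a domain-change history line at the top of each scraper.

```python
# -*- coding: utf-8 -*-
# Illustration only: an assumed format for the "last checked" marker and
# "# history" space mentioned above, kept as comments at the top of a scraper.

# Checked working: 2018-10-23 by <tester name>
# history: cartoonhd.care -> cartoonhd.co (domain change)
```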
jewbmx, nice info!
host505, my bad!
Settings & default.py adjusted accordingly.