Oops I deleted my comment. Reposting it:
I want to suggest merging all source_xxxxx folders into a single "sources" folder, removing all duplicates and keeping the most recent / best-working version of each scraper (I'd imagine those are the ones from yoda and placenta, the latest forks).
The lambdascrapers, placenta, incursion and yoda folders all have a cmovies.py, for example. They are very similar, so only one is needed.
So there'd be a single "sources" folder, and inside it the EN, PL, DE, etc. language folders with the language-specific providers, something like the layout sketched below. Every time someone fixes a provider or adds a new one, it goes in the right folder.
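A rough sketch of what that merged layout could look like, reusing the languages and the cmovies.py example already mentioned (this is illustrative, not an actual listing):

```
sources/
    en/
        cmovies.py
        ...
    pl/
        ...
    de/
        ...
```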
PS: The cmovies.py in the placenta folder scrapes a different website than all the other cmovies.py files, so it should be renamed to cutemovie.py.
I agree with Doko, I'd much rather see a single set of curated, working, non-duplicate scrapers than keep a bunch of "sets".
I'm sure we could have both, but it could pull development away from the MEAT, which I'd think comes first: one unified, perfect* group of scrapers.
It's your project! But that's my suggestion.
Once all the scrapers have been aggregated and duplicates removed, I totally agree about having a single set. I thought I was almost there until doku brought up the cmovies/cutemovie variation.
I've found the scraper set useful for testing purposes. I have a folder called sources_testing. I put a couple of scrapers that work reliably and the scraper I want to test into that folder. I load up my "testing" module, enable the module providers and start testing.
I think it would be prudent to keep the "free" and "debrid" sets, but in that regard, it may be better to just keep all of them and allow them to be enabled/disabled the way it stands now, with debrid in their own category of settings. I think it's confusing and misleading to have multiple sets of scrapers... and there's been significant controversy from people (outside the project) who don't understand how they work.
Sounds good, i-a-c. Exactly what I've been telling people: these exist for reference and are not the goal of the project.
Some people outside GitHub / Reddit seem to think we enable all the sets at once and scrape every site 3 times!
My way of testing is to move my scrapers into a new folder, then add the test ones into the normal folder and start trying to watch stuff, lol. I should probably add this thing to one of my addons so I can finally try it out and see what's up, lmao.
Just found this site while playing in scraper hell...
I didn't look at it much or click anything, just saw that it's got pics of the sites listed, which is pretty handy in my eyes.
"Watch Online without Downloading in 2017" (updated on July 31st, 2017)
I'd say that's pretty out of date, but it could still be useful.
- fmovies.to (formerly known as Bmovies)
- solarmoviez.ru
- yesmovies.to
There may already be scrapers for those, I haven't searched.
Pretty sure all 3 are in the scraper folder already, and a bmovies one too.
My bad, just checked my sources and only ymovies has your new URLs. I'm spending the next few days on scrapers again so I can add 'em to my list.
We need more cartoon scrapers as well, so it's time to get some made.
For cartoon scrapers you may want to check out some of the other addons: NixToons, and doku's one (forgot what it's called, Toon Mania?). I think they are still kind of based on Exodus code, or something close.
@SerpentDrago Also potentially Masterani Redux, but I'm not sure how similar the scrapers are.
I made a new issue thingy for cartoon scraper talk so this one doesn't get hijacked, lol.
Looks like toonget.py needs to be changed to load all 4 of the sources it has instead of just 1. As for new scrapers, I've got a list of sites that a buddy and I are going to look into making new scrapers for.
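Not the actual toonget.py code, but the usual shape of that kind of fix in an Exodus-style provider is to collect every embedded mirror instead of stopping at the first match. A minimal standalone sketch, where the regex, dict keys, and the use of requests in place of the addon's own client module are all assumptions for illustration:

```python
import re
from urllib.parse import urlparse

import requests  # stand-in for the addon's own client module


def scrape_all_mirrors(page_url, host_dict):
    """Return every hosted mirror found on the page, not just the first one."""
    sources = []
    html = requests.get(page_url, timeout=10).text
    # findall grabs every embedded player; a buggy version effectively
    # stops after the first match.
    for link in re.findall(r'<iframe[^>]+src="([^"]+)"', html):
        host = urlparse(link).netloc
        if host in host_dict:  # only keep hosters the resolver knows about
            sources.append({'source': host, 'quality': 'SD', 'language': 'en',
                            'url': link, 'direct': False, 'debridonly': False})
    return sources
```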
My lovely evil cat can use my phone and delete entire folders, so it looks like I gotta grab a new copy of the scrapers and start all over.
Dude, that absolutely sucks.
I recently pushed some changes related to scraper module scaling. The code is modular and makes it easier to add new scraper sets without having to change any code.
Now you can simply drop a new sources folder into the lambdascrapers folder and its providers will show up and be usable.
The new sources folder needs to follow this naming convention:

- sources_SOURCENAME
  - LANGUAGE
    - SCRAPER.PY files

jewbmx created a set of verified/fixed scrapers that we'll use for demonstration:

- sources_jewbmx
  - en
    - 2DDL.py
    - allrls.py
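Under the hood, discovery along these lines presumably just scans the lambdascrapers folder for directories matching that prefix. A minimal sketch of the idea, not the actual addon code (the path and function names are assumptions):

```python
import os

# Hypothetical location of the addon's scraper package; the real path
# inside the Kodi addon will differ.
LAMBDA_DIR = os.path.dirname(os.path.abspath(__file__))


def discover_modules(base_dir=LAMBDA_DIR):
    """Find every scraper set following the sources_SOURCENAME convention."""
    return sorted(
        name for name in os.listdir(base_dir)
        if name.startswith('sources_')
        and os.path.isdir(os.path.join(base_dir, name))
    )


def discover_providers(module, base_dir=LAMBDA_DIR):
    """List (language, provider) pairs for every .py file in a scraper set."""
    providers = []
    module_dir = os.path.join(base_dir, module)
    for lang in sorted(os.listdir(module_dir)):  # en, pl, de, ...
        lang_dir = os.path.join(module_dir, lang)
        if not os.path.isdir(lang_dir):
            continue
        for fname in sorted(os.listdir(lang_dir)):
            if fname.endswith('.py') and fname != '__init__.py':
                providers.append((lang, fname[:-3]))
    return providers
```

With something like this in place, dropping sources_jewbmx into the lambdascrapers folder is enough for it to be picked up on the next scan, which matches the "no code changes needed" behavior described above.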
Once you've copied the new sources folder in, open the Lambda Scrapers settings and select the new source from the "Choose Module Scraper" option, then "Enable All Providers (for current Module)" and you should be good to go.