heyfletch opened this issue 5 months ago
Yes, I made a custom version: I give it a list of URLs, and it will crawl all of them plus everything under their /*.
It's not just one customization, though; I changed several parts.
That's great! Any chance you can share an example?
Yes, but how should I do it? Should I publish it on my GitHub, or open a PR?
Maybe start by pasting your example into a comment here.
https://github.com/julian-passebecq/gpt-crawl-multiurl (I put the instructions in the README).
I don't understand your comment, "don't use config ts for the url to scrawl, only modify json name in config.ts put the url to crawl in tocrawl.csv". Could you please explain that more? Which file do I modify, and how do I modify it so it crawls the URLs in "tocrawl.csv"?
Before, you had to put the URLs to crawl in config.ts. Now you only put the name of the output JSON in config.ts; whatever URL is written there doesn't matter. To modify tocrawl.csv, you can do it manually, just respect the 3-column format I used. Personally, I keep all my URLs in allcrawl.csv and then use crawlee.py to copy just one topic's URLs into tocrawl.csv.
So to sum up: it automatically looks for the URLs to crawl in tocrawl.csv, and you just modify the JSON name in config.ts. You don't have to use allcrawl.csv, crawlee.py, or txt.py (which converts the JSON to txt).
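The workflow above boils down to reading the URL column out of tocrawl.csv before the crawler starts. A minimal sketch of that step, assuming a hypothetical `topic,name,url` column order (the repo's actual 3-column layout may differ, so adjust the column index to match):

```typescript
// Hypothetical sketch: extract crawlable URLs from a 3-column CSV.
// Column order (topic,name,url) is an assumption, not the repo's
// documented format -- check tocrawl.csv in src/ and adjust index 2.
export function urlsFromCsv(csv: string): string[] {
  return csv
    .trim()
    .split("\n")
    .slice(1) // skip the header row
    .map((line) => (line.split(",")[2] ?? "").trim()) // third column = url
    .filter((url) => url.startsWith("http")); // drop malformed rows
}
```

Each returned URL could then seed one gpt-crawler run, e.g. with a `${url}/**` match pattern so the crawler also visits everything under it, which matches the "all of them and all their /*" behavior described above.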
However, I didn't fix the missing-credentials issue, so if a website requires credentials it doesn't work, at least on some sites. Also, I'm not convinced that custom GPTs handle JSON well, so I convert it to txt.
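The JSON-to-txt conversion mentioned here (what txt.py does) can be sketched as follows. This assumes the crawler's output is an array of `{ title, url, html }` records, which is gpt-crawler's default output shape; the exact flattened layout below is my own choice, not the repo's:

```typescript
// Hedged sketch of the json -> txt step: flatten the crawler's JSON
// records into one plain-text document that a custom GPT can ingest
// as a knowledge file. The separator format here is arbitrary.
type CrawledPage = { title: string; url: string; html: string };

export function jsonToTxt(pages: CrawledPage[]): string {
  return pages
    .map((p) => `# ${p.title}\n${p.url}\n\n${p.html}`)
    .join("\n\n---\n\n");
}
```

In practice you would `JSON.parse` the crawler's output file, pass the array through `jsonToTxt`, and write the result to a `.txt` file for upload.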
EDIT: the tocrawl.csv to use is the one in the src folder.
Ok, thanks for the clarification!!
So, did you try my multi-URL version? What do you think?
Haven't tried it yet, but if I do, I'll let you know. :)
Is there a config to crawl multiple websites? For example, example.com and mysite.com?