BuilderIO / gpt-crawler

Crawl a site to generate knowledge files to create your own custom GPT from a URL
https://www.builder.io/blog/custom-gpt
ISC License

Multiple websites at once? #135

Open heyfletch opened 5 months ago

heyfletch commented 5 months ago

Is there a config to crawl multiple websites? For example, example.com and mysite.com?

julian-passebecq commented 4 months ago

Yes, I made a custom version. I give it a list of URLs and it will crawl all of them and everything under their /*.
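
For illustration only (this is not julian-passebecq's actual code): a minimal sketch of what "a list of URLs, crawled along with everything under them" could look like, assuming gpt-crawler's documented config fields (url, match, maxPagesToCrawl, outputFileName). The Config interface below is a local stand-in rather than the repo's own type.

```ts
// Illustrative sketch only -- not the gpt-crawl-multiurl implementation.
// Local stand-in mirroring gpt-crawler's documented config fields.
interface Config {
  url: string;             // starting URL for the crawl
  match: string;           // glob pattern of pages to follow
  maxPagesToCrawl: number;
  outputFileName: string;
}

// One entry per site; each site gets its own output file.
const sites = ["https://example.com", "https://mysite.com"];

export const configs: Config[] = sites.map((url, i) => ({
  url,
  match: `${url}/**`,      // "all of them and all their /*"
  maxPagesToCrawl: 50,
  outputFileName: `output-${i}.json`,
}));

// The idea is then to run the crawler once per entry in `configs`
// instead of the single defaultConfig the stock config.ts exports.
```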

julian-passebecq commented 4 months ago

It's not just a custom config, though; I changed several parts of the code.

heyfletch commented 4 months ago

Yes, I made a custom version. I give it a list of URLs and it will crawl all of them and everything under their /*.

That's great! Any chance you can share an example?

julian-passebecq commented 4 months ago

Yes, but how should I do it? Should I publish it on my GitHub, or open a PR?

heyfletch commented 4 months ago

Maybe start by pasting your example into a comment here.

julian-passebecq commented 4 months ago

https://github.com/julian-passebecq/gpt-crawl-multiurl (the instructions are in the README).

heyfletch commented 4 months ago

I don't understand your comment, "don't use config ts for the url to scrawl, only modify json name in config.ts put the url to crawl in tocrawl.csv". Could you explain that in more detail? Which file do I modify, and how do I modify it so that it crawls the URLs in tocrawl.csv?

julian-passebecq commented 4 months ago

Before, you had to put the URLs to crawl in config.ts. Now you only set the JSON output name in config.ts; whatever URL is written there doesn't matter. To change tocrawl.csv you can edit it by hand, just keep the 3-column format I used. Personally, I keep all my URLs in allcrawl.csv and use crawlee.py to copy just one topic's URLs into tocrawl.csv before crawling.

To sum up: the crawler automatically looks for the URLs to crawl in tocrawl.csv, and you only have to change the JSON name in config.ts. You don't have to use allcrawl.csv, crawlee.py, or txt.py (which converts the JSON to txt).
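
Purely as a sketch of the flow described above (not the actual gpt-crawl-multiurl code): read src/tocrawl.csv, build one crawl target per row, and leave only the output-file name to config.ts. The 3-column layout isn't spelled out in this thread, so the column names topic, url, match below are hypothetical placeholders.

```ts
import { readFileSync } from "node:fs";

// Hypothetical 3-column layout: topic,url,match -- the real columns
// are whatever gpt-crawl-multiurl's README defines.
interface CrawlRow {
  topic: string;
  url: string;
  match: string;
}

// Read src/tocrawl.csv and turn each data row into a crawl target.
function loadTargets(csvPath = "src/tocrawl.csv"): CrawlRow[] {
  return readFileSync(csvPath, "utf8")
    .split("\n")
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .slice(1) // drop the header row (assuming the CSV has one)
    .map((line) => {
      const [topic, url, match] = line.split(",").map((c) => c.trim());
      return { topic, url, match: match || `${url}/**` };
    });
}

// The URLs come from the CSV; only the JSON output name would still
// be taken from config.ts, matching the behaviour described above.
for (const target of loadTargets()) {
  console.log(`would crawl ${target.url} (topic: ${target.topic})`);
}
```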

However, I didn't fix the missing-credentials issue, so it doesn't work on sites that require credentials, at least some of them. Also, I'm not convinced that custom GPTs handle JSON well, so I convert it to txt.

EDIT: the tocrawl.csv to use is the one in the src folder.
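
To show the shape of that last step, here is a rough TypeScript stand-in for what txt.py is described as doing (turning the crawler's JSON output into a plain-text file for a custom GPT). It only assumes the output file contains a JSON array and makes no assumptions about each record's fields.

```ts
import { readFileSync, writeFileSync } from "node:fs";

// Convert a gpt-crawler JSON output file into a plain-text file.
// Each array entry is pretty-printed and separated by a blank line,
// so nothing is assumed about the entries' exact fields.
function jsonToTxt(jsonPath: string, txtPath: string): void {
  const records: unknown[] = JSON.parse(readFileSync(jsonPath, "utf8"));
  const text = records
    .map((record) => JSON.stringify(record, null, 2))
    .join("\n\n");
  writeFileSync(txtPath, text, "utf8");
}

jsonToTxt("output.json", "output.txt");
```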

heyfletch commented 4 months ago

Ok, thanks for the clarification!!

julian-passebecq commented 4 months ago

So, did you try my multi-URL version? What do you think?

heyfletch commented 4 months ago

Haven't tried it yet, but if I do, I'll let you know. :)