jamesturk / scrapeghost

👻 Experimental library for scraping websites using OpenAI's GPT API.
https://jamesturk.github.io/scrapeghost/

Make API backend pluggable to allow for non-OpenAI models #18

Open jamesturk opened 1 year ago

jamesturk commented 1 year ago

This seems like it'll be the most important task to make this more viable for people.

Alternative models will be cheaper, potentially much faster, allow running on someone's own hardware (LLaMA), and allow for more experimentation (e.g. models trained specifically on HTML->JSON).

Quite a few models are attention free, which would remove the token limit altogether.
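One possible shape for the pluggable backend, as a rough sketch (all names here are hypothetical, not scrapeghost's actual internals): a small interface that each provider implements, so the scraper core only deals with prompt in, text out.

```python
from abc import ABC, abstractmethod


class LLMBackend(ABC):
    """Hypothetical interface: each provider adapter turns prompts into raw model output."""

    @abstractmethod
    def complete(self, system_prompt: str, html: str) -> str:
        ...


class EchoBackend(LLMBackend):
    """Stand-in backend for testing; a real adapter would call OpenAI, Claude, a local LLaMA, etc."""

    def complete(self, system_prompt: str, html: str) -> str:
        # A real backend would send the prompts to its API and return the completion text.
        return '{"title": "example"}'


def scrape(backend: LLMBackend, html: str) -> str:
    # The scraper core stays provider-agnostic: it depends only on the interface.
    return backend.complete("Convert this HTML to JSON.", html)
```

With this shape, adding Claude or a local model is just another `LLMBackend` subclass; the token-limit and prompt-format differences live inside each adapter.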

Models

OpenAssistant

No API as of June 2023; their FAQ makes one sound unlikely.

Cohere

TBD; a commenter below says it didn't work well. Haven't evaluated it myself.

Anthropic Claude

100k token limit added in May. As soon as I get access, this will be my guinea pig for adding pluggable model support.

Others

Please add comments below if you've tried this approach with others that have an API.

irux commented 1 year ago

I am actually curious whether this would work with other kinds of models. I've always had the idea of trying BERT for this kind of thing, but I think an instruction-following model would be needed for good performance.

clarkmcc commented 1 year ago

Yes, I'd love to see this on the new Alpaca models. The major problem I see (not understanding how this prompts OpenAI under the hood) is that writing successful prompts is much trickier with models like Alpaca and LLaMA.

walking-octopus commented 1 year ago

Hmm... Didn't Cohere make their models free to call (albeit with a rate limit)? That could make this much more viable for scraping a few small pages.

EDIT: Their models seem to be too weird for this, I've tried.

daankortenbach commented 1 year ago

I'd love to see support for OpenAssistant models.

walking-octopus commented 1 year ago

> I'd love to see support for OpenAssistant models.

Perhaps someone could train some seq2seq model precisely for this task...

jamesturk commented 1 year ago

If anyone wants to work on this, let me know; I'd love to discuss approaches.

jamesturk commented 1 year ago

The groundwork for this is there after some recent refactors. I'm hoping to get access to Claude soon; with its 100k token limit it'd be amazing to see how it performs. I'm updating the parent issue with the status of some other models as well.

cpoptic commented 1 year ago

How about adding support for the Falcon 7B and/or 40B models?

Sunishchal commented 1 year ago

@jamesturk I'm very interested in support for Claude 100k. Happy to work on a PR for this if you're welcoming contributors.

ishaan-jaff commented 1 year ago

@jamesturk expanded non-OpenAI model coverage in PR #55

jamesturk commented 1 year ago

Update for those tracking this: in general, the approach laid out in PR #55 seems like a great way to go. Relying on a well-maintained library that abstracts away the differences between these models saves this library from reinventing a wheel others have already tackled. I'd been toying with a lightweight version of the same idea, but hadn't yet researched what else was out there.

As noted on PR #55, I don't think it's quite ready to be merged yet; there are other parts of the codebase that assume OpenAI that I'll want to check on. (I also just heard about litellm half an hour ago and want to do a tiny bit of due diligence before adding the dependency 😄)
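The core idea behind abstraction libraries like litellm is routing by model string: one call-site, with the provider inferred from the model name. A toy sketch of that dispatch idea (the prefixes below are illustrative, not litellm's actual routing table):

```python
def pick_provider(model: str) -> str:
    """Toy version of model-string routing, as done by libraries like litellm.
    The prefix-to-provider mapping here is illustrative, not exhaustive."""
    if model.startswith(("gpt-3.5", "gpt-4")):
        return "openai"
    if model.startswith("claude"):
        return "anthropic"
    if model.startswith("ollama/"):
        return "local"
    raise ValueError(f"unknown model: {model}")
```

The payoff for scrapeghost would be that supporting a new provider means changing a config string, not the scraping code.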

cornpo commented 10 months ago

This works well with oobabooga's OpenAI-compatible extension using Mistral 7B and Phind 34B.
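Since the extension exposes an OpenAI-compatible HTTP API, pointing an OpenAI-style client at the local server is mostly a matter of changing the base URL. A minimal sketch of building such a request (the localhost port and model name are assumptions, not the extension's guaranteed defaults):

```python
import json


def build_chat_request(base_url: str, model: str, prompt: str):
    """Build an OpenAI-compatible chat completion request for a local server.
    Assumes the server follows the standard /v1/chat/completions path."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(body)


# Example: a locally hosted model behind an OpenAI-compatible endpoint.
url, payload = build_chat_request(
    "http://localhost:5000", "mistral-7b", "Extract the page title as JSON."
)
```

Any HTTP client (or the official OpenAI SDK with its base URL overridden) could then POST this payload to the local server.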

https://github.com/briansunter/logseq-plugin-gpt3-openai