irgolic / AutoPR

Run AI-powered workflows over your codebase
https://discord.gg/ykk7Znt3K6
MIT License
1.26k stars · 83 forks

Run non-OpenAI models #32

Open irgolic opened 1 year ago

irgolic commented 1 year ago

So far we've only used GPT-4 and GPT-3.5; the next step is to try it on models that are locally hosted.

I'm not sure exactly how to go about this; as this is a GitHub Action, does GitHub have GPUs in their runners? How do we properly write it to work with custom runners? Could we rent GPUs on something like vast.ai? Are there any grants available for free computational resources to run AutoPR on?

I'd love to run a custom GitHub runner with my own GPU and run tests with it.

Essentially, these two methods need to use a `completion_func` decoupled from OpenAI's functions: https://github.com/irgolic/AutoPR/blob/main/autopr/services/rail_service.py#L53-L125
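A minimal sketch of what decoupling could look like: a `completion_func` is just a callable from chat messages to reply text, and a registry maps provider names to implementations. All names here (`get_completion_func`, `echo_completion`, the `"echo"` provider) are hypothetical, not part of AutoPR's codebase.

```python
from typing import Callable, Dict, List

# A completion function takes chat messages and returns the model's reply text.
CompletionFunc = Callable[[List[Dict[str, str]]], str]

def echo_completion(messages: List[Dict[str, str]]) -> str:
    # Stand-in "local model" for testing: echoes the last user message.
    return messages[-1]["content"]

# Registry of available backends; an OpenAI- or locally-hosted
# implementation would be registered here alongside the stub.
_PROVIDERS: Dict[str, CompletionFunc] = {
    "echo": echo_completion,
}

def get_completion_func(provider: str) -> CompletionFunc:
    try:
        return _PROVIDERS[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}")
```

The two methods in `rail_service.py` would then call whatever `CompletionFunc` they were handed, rather than `openai` directly.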

ghost commented 1 year ago

this seems relevant here, but more so a separate issue: https://github.com/go-gitea/gitea/issues/13539 https://blog.gitea.io/2023/03/hacking-on-gitea-actions/ https://blog.gitea.io/2022/12/feature-preview-gitea-actions/ https://gitea.com/gitea/tea/actions

also: https://github.com/nektos/act (linked from https://gitea.com/gitea/act_runner)

guidevops commented 1 year ago

What I did was create an API that follows the OpenAI API standard; this way I only needed to set `openai.api_base = 'my_url'` and it worked fine. There is also the LocalAI project, which follows the same pattern: https://github.com/go-skynet/LocalAI
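To illustrate why this works: any server that speaks the OpenAI chat API accepts the same request shape, so only the base URL changes. A stdlib-only sketch of building such a request (the endpoint and model name below are assumptions for a local server; with the `openai` library you would instead just set `openai.api_base`):

```python
import json
import urllib.request

# Hypothetical local endpoint; adjust to wherever your
# OpenAI-compatible server (e.g. LocalAI) is listening.
API_BASE = "http://localhost:8080/v1"

def build_chat_request(messages, model="local-model"):
    # The payload shape is the same one the official OpenAI API expects.
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request([{"role": "user", "content": "hello"}])
```

Sending `req` with `urllib.request.urlopen` would hit the local server exactly as it would hit api.openai.com.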

irgolic commented 1 year ago

Hey, thanks for the suggestion!

I think I'd rather move away from using the `openai` library in the long run (except potentially in its own self-contained repo implementation). Last time I talked to @ShreyaR she pointed me to the manifest library, which looks like a really clean provider-agnostic solution for specifying LLM models. I believe this would interface with our config YAML really well too.
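For illustration, a provider-agnostic model entry in the config might look something like this (a purely hypothetical sketch; none of these keys exist in AutoPR's current config):

```yaml
# Hypothetical: select a backend without hardcoding OpenAI
model:
  provider: manifest
  client: huggingface        # or openai, a local server, ...
  endpoint: http://localhost:5000
  temperature: 0.7
```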