memgraph / odin

Plans to Support other LLMs? #1

Closed: DeveloperPaul123 closed this issue 8 months ago

DeveloperPaul123 commented 1 year ago

ChatGPT is great, but it would be nice to also have the option to use local models via something like llama.cpp. Is this something that you are interested in incorporating?

sammcj commented 1 year ago

Came to ask the same thing. It would be great if you didn't have to send your data to "Open"AI, especially now that we can run some pretty capable, fine-tunable models locally or on our own servers, with decent APIs, libraries, etc.

ishaan-jaff commented 1 year ago

Hi @DeveloperPaul123 @sammcj, I believe we can help with this issue. I'm the maintainer of LiteLLM: https://github.com/BerriAI/litellm

TL;DR: LiteLLM lets you use any supported LLM as a drop-in replacement for gpt-3.5-turbo. You can use our proxy server or spin up your own proxy server with LiteLLM.

Usage

This calls the provider API directly

from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# falcon call
response = completion(model="falcon-40b", messages=messages)

DeveloperPaul123 commented 1 year ago

@ishaan-jaff

This is not something I'm interested in. I want to run the LLM directly on my local machine without having to spin up a server, even a local one.
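
For reference, fully in-process inference along these lines is possible with llama.cpp's Python bindings (llama-cpp-python). A minimal sketch, where the model path is a placeholder for a locally downloaded GGUF file:

# A rough sketch of in-process inference with llama-cpp-python:
# no server involved, the model runs inside the Python process.
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf")  # placeholder path to a local GGUF model
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
print(out["choices"][0]["message"]["content"])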

ishaan-jaff commented 1 year ago

@DeveloperPaul123 I'd recommend checking out ollama for this https://ollama.ai/
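
For context, Ollama exposes a small HTTP API on localhost once a model has been pulled. A minimal sketch of calling it, where the model name is an assumption:

# A minimal sketch, assuming Ollama is running locally and a model has been
# pulled (e.g. `ollama pull llama2`); the model name is a placeholder.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",  # Ollama's default local endpoint
    json={
        "model": "llama2",
        "messages": [{"role": "user", "content": "Hello, how are you?"}],
        "stream": False,  # return a single JSON object instead of a stream
    },
)
print(resp.json()["message"]["content"])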

AlexIchenskiy commented 8 months ago

Thanks everyone for your input! ODIN doesn't currently support other LLMs. I've raised issue #7 to track this feature request, and you're welcome to contribute to implementing it!

Closing this issue as a duplicate of issue #7.