kyegomez / tree-of-thoughts

Plug in and Play Implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that Elevates Model Reasoning by at least 70%
https://discord.gg/qUtxnK2NMf
Apache License 2.0

How to use an LLM deployed locally? #95

Open · deeper-coder opened this issue 3 weeks ago

deeper-coder commented 3 weeks ago

Your current code uses an OpenAI API key to access the LLM service by default. I'd like to switch it to a local LLM, which I've deployed with LLaMA-Factory and which is accessible via a local API, for example at http://localhost:7788/v1/. Could you guide me on how to make this adjustment? Thank you!


kyegomez commented 3 weeks ago

@deeper-coder we need function calling. If you can get a function-calling model to work reliably, it will work. But you need a class with a `run(task: str)` or `__call__(task: str)` method to integrate into the `ToTAgent` class.
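For context, a minimal sketch of such a wrapper, assuming the local LLaMA-Factory server exposes an OpenAI-compatible chat completions endpoint (which the `/v1/` path suggests). The model name and dummy API key are placeholders, not values from this repo:

```python
# Minimal sketch of a local-LLM wrapper exposing run(task: str), the
# interface described above for plugging into ToTAgent.
from openai import OpenAI


class LocalLLM:
    def __init__(
        self,
        base_url: str = "http://localhost:7788/v1/",
        model: str = "llama3-70b",  # placeholder; use the name your server registers
    ):
        # A local OpenAI-compatible server typically ignores the key,
        # but the client requires one to be set.
        self.client = OpenAI(base_url=base_url, api_key="not-needed")
        self.model = model

    def run(self, task: str) -> str:
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": task}],
        )
        return response.choices[0].message.content

    # The wrapper can also be called directly, per the comment above.
    __call__ = run
```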

deeper-coder commented 3 weeks ago

> @deeper-coder we need function calling. If you can get a function-calling model to work reliably, it will work. But you need a class with a `run(task: str)` or `__call__(task: str)` method to integrate into the `ToTAgent` class.

I plan to use Llama 3 70B, and I noticed that in the `OpenAIFunctionCaller` class you've implemented the `run` method as shown in the screenshot below. So, can I achieve the functionality I want by passing `base_url="http://localhost:7788/v1/"` in `**kwargs`?

[Screenshot: the `run` method of the `OpenAIFunctionCaller` class]
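Whether that works depends on where the `**kwargs` end up. If `OpenAIFunctionCaller` forwards them to the OpenAI client it constructs internally, a call along these lines could work; this is an untested sketch, and both the import path and the kwarg forwarding are assumptions to verify against the installed version:

```python
# Untested sketch: works only if OpenAIFunctionCaller forwards extra
# **kwargs to the OpenAI client it constructs internally.
import os

from swarms import OpenAIFunctionCaller  # import path may differ across versions

# The OpenAI client may still require a key; a dummy value usually
# suffices for a local server, which ignores it.
os.environ.setdefault("OPENAI_API_KEY", "not-needed")

model = OpenAIFunctionCaller(
    system_prompt="You are a helpful reasoning agent.",
    base_url="http://localhost:7788/v1/",  # assumed to be passed through via **kwargs
)
print(model.run("What is 17 * 24?"))
```

If the kwargs are not forwarded, the fallback is a wrapper class like the one sketched above, which satisfies the `run(task: str)` interface that `ToTAgent` expects.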