Open victordibia opened 10 months ago
I have started working on this
Great. Let us discuss your findings here so far.
On my end, I have been trying local huggingface models
For example, I have found that this Hermes 13B model gives decent performance on goal generation but only limited success with visualization generation.
I'll share updates.
In the meantime, here is how I am testing local models with LIDA. I have updated the README.
LIDA uses the llmx library as its interface for text generation. llmx supports multiple local models, including HuggingFace models. You can use HuggingFace models directly (assuming you have a GPU) or connect to an OpenAI-compatible local model endpoint, e.g. one served by the excellent vllm library.
from lida import Manager, llm

text_gen = llm(provider="hf", model="uukuguy/speechless-llama2-hermes-orca-platypus-13b", device_map="auto")
lida = Manager(text_gen=text_gen)

# now you can call lida methods as above e.g.
summary = lida.summarize("data/cars.csv") # ....
from lida import Manager, TextGenerationConfig, llm

model_name = "uukuguy/speechless-llama2-hermes-orca-platypus-13b"
model_details = [{'name': model_name, 'max_tokens': 2596, 'model': {'provider': 'openai', 'parameters': {'model': model_name}}}]

# assuming your vllm endpoint is running on localhost:8000
text_gen = llm(provider="openai", api_base="http://localhost:8000/v1", api_key="EMPTY", models=model_details)
lida = Manager(text_gen=text_gen)
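In case it helps, the endpoint referenced above can usually be started with vLLM's OpenAI-compatible server. The exact entrypoint and flags may vary by vLLM version, so treat this as a sketch and check the vLLM docs:

```shell
# launch an OpenAI-compatible endpoint serving the model on port 8000
python -m vllm.entrypoints.openai.api_server \
  --model uukuguy/speechless-llama2-hermes-orca-platypus-13b \
  --port 8000
```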
I was thinking LangChain would be useful here. Interesting to see what you are doing with llmx.
Have you considered using https://mistral.ai/news/announcing-mistral-7b/ ?
@victordibia I think the code below in the llmx library causes this error (lines 47 and 48). You are adding "provider" and "models" to kwargs while also passing them as explicit arguments.
kwargs["provider"] = kwargs["provider"] if "provider" in kwargs else provider
kwargs["models"] = kwargs["models"] if "models" in kwargs else models
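A minimal, self-contained sketch of the problem (the function names below are hypothetical, not llmx's actual internals): writing a key back into `kwargs` and then forwarding both the explicit argument and `**kwargs` raises `TypeError: got multiple values for argument`. One fix is to pop the duplicate key out of kwargs before forwarding:

```python
def generator(provider, **kwargs):
    # stands in for the downstream text-generator constructor
    return provider, kwargs

def llm_buggy(provider="openai", **kwargs):
    kwargs["provider"] = kwargs.get("provider", provider)
    # passes 'provider' both positionally and inside **kwargs -> TypeError
    return generator(provider, **kwargs)

def llm_fixed(provider="openai", **kwargs):
    # remove the duplicate key first, so it is only passed once
    provider = kwargs.pop("provider", provider)
    return generator(provider, **kwargs)
```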
@victordibia I got an error while downloading other models from Hugging Face.
This is the code:
This is the error:
Solution
Add a new argument, "offload_folder", when loading self.model in the llmx package.
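For anyone hitting the same error: the gist of the fix is to forward an `offload_folder` to `from_pretrained` so that accelerate can spill weights that do not fit in GPU/CPU memory to disk. A small sketch of the kwargs involved (the helper name is illustrative; the actual change lives inside llmx):

```python
def hf_load_kwargs(model_name, offload_folder="offload_dir"):
    # kwargs that would be forwarded to AutoModelForCausalLM.from_pretrained;
    # offload_folder tells accelerate where to place layers offloaded to disk
    return {
        "pretrained_model_name_or_path": model_name,
        "device_map": "auto",
        "offload_folder": offload_folder,
    }
```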
Hello, I am having some trouble loading "ehartford/dolphin-2.5-mixtral-8x7b". Has anyone tried this? Any help or steps you can provide would be much appreciated.
Hi,
I have not tested the Mixtral model series. I'd suggest using vllm to set up an OpenAI-compatible server and then connecting to it using the openai approach above. Mixtral is supported in vllm.
Let me know how it goes.
This works for summary generation but not for graphs. Did you find anything that works better?
What
Local models (e.g., LLaMA-based models available via HuggingFace in the 7B or 13B size classes) offer multiple benefits (e.g., they can be finetuned/adapted and run locally). While LIDA has been mostly tested with OpenAI models, more work is needed to test workflows and performance for HF models.
Work Items