microsoft / lida

Automatic Generation of Visualizations and Infographics using Large Language Models
https://microsoft.github.io/lida/
MIT License

TypeError: llmx.generators.text.hf_textgen.HFTextGenerator() got multiple values for keyword argument 'provider' #57

Closed: AIAnytime closed this issue 6 months ago

AIAnytime commented 8 months ago

Thanks for creating this library, but I am fairly sure testing was not a priority before release. I just installed it, followed the documentation, and boom, "the error" appeared while using the HF model.

Here is the complete error:

```
Traceback (most recent call last):
  File "C:\Users\aiany\OneDrive\Desktop\lida demo\test.py", line 8, in <module>
    text_gen = llm(provider="hf", model="uukuguy/speechless-llama2-hermes-orca-platypus-13b", device_map="auto")
  File "C:\Users\aiany\OneDrive\Desktop\lida demo\.venv\lib\site-packages\llmx\generators\text\textgen.py", line 75, in llm
    return HFTextGenerator(provider=provider, models=models, **kwargs)
TypeError: llmx.generators.text.hf_textgen.HFTextGenerator() got multiple values for keyword argument 'provider'
```

This is all the code I ran:

```python
from lida import llm

print("Import Successful!")

text_gen = llm("openai")

text_gen = llm(provider="hf", model="uukuguy/speechless-llama2-hermes-orca-platypus-13b", device_map="auto")
```

victordibia commented 8 months ago

Hi, thanks for flagging this.

I just pushed a fix in an llmx release (v0.0.17a) that addresses it. Upgrade with:

```bash
pip install -U llmx
```

Also ..

A Note on Using Local HuggingFace Models

While llmx can use the HuggingFace transformers library to run inference with local models, you will likely get more mileage from a well-optimized inference server such as vLLM or FastChat. These tools expose an OpenAI-compatible endpoint while also implementing optimizations such as dynamic batching and quantization to improve throughput. The general steps are: start a local OpenAI-compatible server, then point llmx at it as if it were the OpenAI API.
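For example, vLLM ships an OpenAI-compatible server. A minimal sketch (not from the original comment; the exact entrypoint and flags depend on your vLLM version, and the model name is just the one from this thread):

```bash
# Sketch: start vLLM's OpenAI-compatible server on port 8000.
python -m vllm.entrypoints.openai.api_server \
    --model uukuguy/speechless-llama2-hermes-orca-platypus-13b \
    --port 8000
```

With the server running, point llmx's OpenAI provider at the local endpoint: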

```python
from llmx import llm

# Reuse the OpenAI provider, but point it at the local server.
# Local servers typically don't check the key, so a placeholder works.
hfgen_gen = llm(
    provider="openai",
    api_base="http://localhost:8000",
    api_key="EMPTY",
)
...
```
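To sanity-check the endpoint, here is a minimal smoke test, assuming llmx's documented `generate` interface (the prompt and config values are illustrative, not from the original comment):

```python
from llmx import TextGenerationConfig

# Illustrative smoke test against the local endpoint; the message
# content and config values here are placeholders.
config = TextGenerationConfig(temperature=0.0, use_cache=False)
response = hfgen_gen.generate(
    messages=[{"role": "user", "content": "Say hello in one word."}],
    config=config,
)
print(response.text[0].content)
```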
roy-sub commented 6 months ago
```python
!pip3 install --upgrade llmx==0.0.17a0

# Restart the colab session, then:

from lida import Manager
from llmx import llm

text_gen = llm(provider="hf", model="uukuguy/speechless-llama2-hermes-orca-platypus-13b", device_map="auto")
lida = Manager(text_gen=text_gen)
```
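For completeness (not in the original comment): once the `Manager` is constructed, the usual lida flow is summarize, then goals, then visualize, as in the lida README. A sketch, with a placeholder CSV path:

```python
from lida import TextGenerationConfig

# Sketch of the standard lida flow; the CSV path is a placeholder.
textgen_config = TextGenerationConfig(n=1, temperature=0.5, use_cache=True)

summary = lida.summarize("data/cars.csv", textgen_config=textgen_config)  # dataset summary
goals = lida.goals(summary, n=2, textgen_config=textgen_config)           # candidate visualization goals
charts = lida.visualize(summary=summary, goal=goals[0],
                        textgen_config=textgen_config)                    # code + image for one chart
```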