microsoft / lida

Automatic Generation of Visualizations and Infographics using Large Language Models
https://microsoft.github.io/lida/
MIT License
2.6k stars 266 forks

Is it possible to make this project available for locally deployed open-source LLMs, such as ChatGLM2? #32

Closed tianciwudi closed 10 months ago

tianciwudi commented 10 months ago

Hello, because of our company's network policy, the service can only be deployed on an offline server. Is it possible for lida to call a locally deployed open-source LLM, such as ChatGLM2, which exposes an OpenAI-compatible API, as shown below?

```python
import openai

if __name__ == "__main__":
    openai.api_base = "http://localhost:8000/v1"
    openai.api_key = "none"
    for chunk in openai.ChatCompletion.create(
        model="chatglm2-6b",
        messages=[{"role": "user", "content": "Hello"}],
        stream=True,
    ):
        if hasattr(chunk.choices[0].delta, "content"):
            print(chunk.choices[0].delta.content, end="", flush=True)
```

victordibia commented 9 months ago

Yes. One way to do this is to set up an OpenAI-compatible server endpoint and specify that as the text generator for lida.
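For example, vLLM ships an OpenAI-compatible server that can front a local checkpoint. A sketch of the launch command (the model name is illustrative; substitute whatever checkpoint you have on disk, e.g. a ChatGLM2 weight directory):

```shell
# Serve a local model behind an OpenAI-compatible REST API on port 8000.
# The model name below is a placeholder; point --model at your own checkpoint.
python -m vllm.entrypoints.openai.api_server \
    --model uukuguy/speechless-llama2-hermes-orca-platypus-13b \
    --port 8000
```

Once this is running, any client that speaks the OpenAI API (including lida) can target `http://localhost:8000/v1`.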

See readme here

from lida import Manager, TextGenerationConfig, llm

model_name = "uukuguy/speechless-llama2-hermes-orca-platypus-13b"
model_details = [{'name': model_name, 'max_tokens': 2596, 'model': {'provider': 'openai', 'parameters': {'model': model_name}}}]

# assuming your vllm endpoint is running on localhost:8000
text_gen = llm(provider="openai", api_base="http://localhost:8000/v1", api_key="EMPTY", models=model_details)
lida = Manager(text_gen=text_gen)
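From there the manager behaves like the default OpenAI-backed one (e.g. `lida.summarize(...)`, `lida.visualize(...)`). As a minimal sketch of the `models` registry format passed to `llm(...)` above, here is the same structure built with a small helper and a second hypothetical local model added; the helper and the `chatglm2-6b` entry are illustrative, not part of lida's API:

```python
def make_entry(name: str, max_tokens: int = 2596) -> dict:
    """Build one model-details entry for an OpenAI-compatible local endpoint."""
    return {
        "name": name,
        "max_tokens": max_tokens,
        "model": {"provider": "openai", "parameters": {"model": name}},
    }

# Registry in the shape lida's llm(models=...) expects, per the snippet above.
model_details = [
    make_entry("uukuguy/speechless-llama2-hermes-orca-platypus-13b"),
    make_entry("chatglm2-6b", max_tokens=2048),  # hypothetical second entry
]

def pick_model(details: list, name: str) -> dict:
    """Return the registry entry whose 'name' matches, or raise KeyError."""
    for entry in details:
        if entry["name"] == name:
            return entry
    raise KeyError(name)

print(pick_model(model_details, "chatglm2-6b")["max_tokens"])  # → 2048
```

Listing several entries this way lets you switch models per request by setting `model` in a `TextGenerationConfig`, without rebuilding the text generator.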