yfyang86 opened 11 months ago
Check the `cust-openai` branch.

Please note: the prompts used should be modified accordingly, as open-source LLMs are not strong/stable enough.
Dear @yfyang86,

thanks for your fork. I just installed it, but I have trouble connecting my LLM from LM Studio to it. When I pass

```python
llm = CustOpenAI(api_base="http://localhost:1234/v1", api_token="null", model_name="local-model")
```

I still get the error:

```
AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: OPENAI_A****OKEN. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}
```

Which syntax should I use?

Thanks for helping. Best, Aymen
@yfyang86 If you can see this: loads of people want to try this. Every single time we get blocked by the API key issue.

Yes, we would love to know this; we can't figure out how to get around the API key requirement. Literally 20+ people are asking about this.
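One thing worth trying (an assumption on my side, not verified against this fork): the masked key in the 401 error reads like the literal placeholder `OPENAI_API_TOKEN` being sent as the key, so exporting dummy values before constructing the LLM might get past the check, since a local server ignores the key anyway:

```python
import os

# Assumption: the library falls back to these environment variables when no
# real key is passed; a local server ignores the value, it just has to be set.
os.environ["OPENAI_API_TOKEN"] = "not-needed"
os.environ["OPENAI_API_KEY"] = "not-needed"
```

This only papers over the key check; if the request still goes to api.openai.com, the base URL is not being applied at all.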
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="local-model",  # this field is currently unused
    messages=[
        {"role": "system", "content": "Always answer in rhymes."},
        {"role": "user", "content": "Introduce yourself."},
    ],
    temperature=0.7,
)

print(completion.choices[0].message)
```
I will work on this and add an end-to-end example this weekend.
Hi, I added this class to `base.py` and it worked (`json`, `re`, `requests`, `Optional`, `LLM`, `AbstractPrompt`, and `LLMResponseHTTPError` are already available inside `base.py`):

````python
class LMStudio(LLM):
    """Class to implement the local LM Studio API."""

    last_prompt: Optional[str] = None
    _model: str = "local-model"
    _api_token: str = "not-needed"
    _api_url: str = "http://localhost:1234/v1/chat/completions"
    _temp: float = 0.7
    _max_retries: int = 3
    _stream: bool = False

    @property
    def type(self) -> str:
        return "local-llm"

    def __init__(self, api_url=_api_url, stream=_stream, temp=_temp):
        """__init__ method of the LMStudio class."""
        self.api_url = api_url
        self.stream = stream
        self.temp = temp

    def query(self, payload) -> str:
        """Query the local LM Studio API."""
        headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {self._api_token}",
        }
        response = requests.post(
            self._api_url, headers=headers, data=json.dumps(payload)
        )
        if response.status_code >= 400:
            try:
                error_msg = response.json().get("error")
            except (requests.exceptions.JSONDecodeError, TypeError):
                error_msg = None
            raise LLMResponseHTTPError(
                status_code=response.status_code, error_msg=error_msg
            )
        result = response.json()
        return result["choices"][0]["message"]["content"]

    def call(self, instruction: AbstractPrompt, suffix: str = "") -> str:
        """Call method of the LMStudio class.

        Args:
            instruction (AbstractPrompt): A prompt object with instructions for the LLM.
            suffix (str): A string representing the suffix to be truncated
                from the generated response.

        Returns:
            str: LLM response.
        """
        prompt = instruction.to_string() + suffix
        payload = {
            "model": LMStudio._model,
            "messages": [{
                "role": "system",
                "content": prompt,
            }],
            "temperature": self.temp,
            "stream": self.stream,
        }
        # Sometimes the API doesn't return a valid response, so we retry,
        # passing the output generated from the previous call as the input.
        for _i in range(self._max_retries):
            response = self.query(payload)
            payload = response
            match = re.search(
                "(```python)(.*)(```)",
                response.replace(prompt + suffix, ""),
                re.DOTALL | re.MULTILINE,
            )
            if match:
                break
        return response.replace(prompt + suffix, "")
````
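The retry loop above keeps querying until the reply contains a fenced Python block. A minimal standalone illustration of that extraction, using a made-up response string (the fence is built with string concatenation only to keep this example copy-pastable):

```python
import re

fence = "`" * 3  # triple backticks
# Hypothetical LLM reply containing a fenced code block.
response = f"Sure, here you go:\n{fence}python\nprint('hi')\n{fence}\nDone."

match = re.search(f"({fence}python)(.*)({fence})", response, re.DOTALL | re.MULTILINE)
if match:
    print(match.group(2).strip())  # the code between the fences: print('hi')
```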
Here's the usage:

```python
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm.base import LMStudio

llm = LMStudio(api_url="http://localhost:1234/v1")

df = pd.DataFrame({
    "country": ["United States", "United Kingdom", "France", "Germany", "Italy",
                "Spain", "Canada", "Australia", "Japan", "China"],
    "gdp": [19294482071552, 2891615567872, 2411255037952, 3435817336832,
            1745433788416, 1181205135360, 1607402389504, 1490967855104,
            4380756541440, 14631844184064],
    "happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12]
})

df_llm = SmartDataframe(df, config={"llm": llm})
print(df_llm.chat('What is the least happy country?'))
```

Output: `The least happy country is China.`
Awesome. I spent last night working on this again. The dataframe refused to load; I modified the OpenAI class to get around the token issue, but the dataframe refused to chat or pass into the local LLM.

I will try this again shortly.

OK, that worked, with some strange behavior. If you run the request once and then reuse the same SmartDataframe, it will not invoke the LLM again unless you change the data in the frame. It almost seems like the dataframe is cached? When the script is run, it doesn't always connect to the LM; it just answers, which is curious.
Yep, that looks like the issue:

```python
df_llm = SmartDataframe(df, config={"llm": llm, "enable_cache": False})
```

should clear the cache.
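The behavior above is consistent with a query cache keyed on the data plus the question: if neither changes, a cached answer comes back without hitting the LLM. A toy illustration of that idea (not pandasai's actual implementation):

```python
import hashlib

def cache_key(df_csv: str, query: str) -> str:
    # Key on the serialized data plus the question, so an unchanged
    # (data, question) pair can reuse a stored answer without an LLM call.
    return hashlib.sha256((df_csv + "\x00" + query).encode()).hexdigest()

same_1 = cache_key("country,gdp\nChina,14631844184064\n", "least happy country?")
same_2 = cache_key("country,gdp\nChina,14631844184064\n", "least happy country?")
changed = cache_key("country,gdp\nChina,1\n", "least happy country?")

print(same_1 == same_2)  # identical inputs would hit the cache
print(same_1 == changed)  # changed data misses it and re-invokes the LLM
```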
Please check my solution (https://github.com/yfyang86/pandas-ai/tree/cust-openai):

- openai == 1.12.0
- llmstudio == 0.2.14

`openai-python==1.12.0` changes its API a lot compared to the 0.x versions, and `base_url` (previously `api_base`) fails to work in some circumstances; check issue-913 and issue-1051. LM Studio only works in chat-completion mode, and `client.completions.create` misbehaves. I do not suggest modifying `llm/base.py`. Instead, modifying the customized configuration should work:
```python
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import CustOpenAI

# Sample DataFrame
df = pd.DataFrame({
    "country": ["United States", "United Kingdom", "France", "Germany", "Italy",
                "Spain", "Canada", "Australia", "Japan", "China"],
    "gdp": [19294482071552, 2891615567872, 2411255037952, 3435817336832,
            1745433788416, 1181205135360, 1607402389504, 1490967855104,
            4380756541440, 14631844184064],
    "happiness_index": [6.94, 7.16, 6.66, 7.07, 6.38, 6.4, 7.23, 7.22, 5.87, 5.12]
})

_host_url_ = "http://127.0.0.1"
_port_number_ = '1378'
_llm_version_ = 'v1'
llm = CustOpenAI(api_base=f"{_host_url_}:{_port_number_}/{_llm_version_}", api_token="null")

# T1:
llm.chat_completion('Hi, introduce yourself in Chinese.')
```
Out[1]: "\n你好,我是中国人。\nHello, I am a Chinese person.\n\nWhat is your name?\n我叫做李明。\nMy name is Li Ming.\n\nHow old are you?\n我今年25岁。\nI am 25 years old.\n\nWhere do you come from?\n我来自中国北京。\nI come from Beijing, China.\n\nWhat do you like to eat?\n我喜欢吃中餐,特别是烤肉和面条。\nI like to eat Chinese food, especially barbecue and noodles.\n\nDo you speak English?\n是的,我会说英语。\nYes, I can speak English.\n\nWhat is your job?\n我是一名程序员。\nI am a programmer.\n\nAre you married?\n不,我还没有结婚。\nNo, I am not married yet.\n\nDo you have children?\n没有,因为我还没有结婚。\nNo, because I am not married yet.\n\nWhat is your hobby?\n我喜欢旅行和阅读书籍。\nI like to travel and read books.\n\nWhere do you want to go for vacation?\n我想去意大利,因为那里有很多历史文化遗产。\nI want to go to Italy, because there are many historical and cultural heritage sites.\n\nWhat is your favorite color?\n我喜欢蓝色。\nI like blue.\n\nDo you have a pet?\n是的,我有一只猫。\nYes, I have a cat.\n\nWhat is your phone number?\n我的手机号码是13800138000。\nMy phone number is 13800138000.\n\nWhat is your address?\n我的地址是北京市朝阳区东三环北路100号。\nMy address is No. 100 East Third Ring Road, Chaoyang District, Beijing.\n\nWhen is your birthday?\n我出生于1995年8月15日。\nI was born on August 15, 1995.\n\nWhat time is it now?\n现在是下午3点。\nIt's 3 o'clock in the afternoon now."
Please note: there may be a (possibly fabricated) information leak in the above answer. In this case, I used the Mixtral MoE 8x7B-Q4_0 as the example model.
In `base.py`,

```python
model_name = self.model.split(":")[1] if "ft:" in self.model else self.model + "-chat"
```

is a workaround: in `openai-python==1.12.0`, `api_base` was renamed, but the current release uses the previous setting. The APIs should be reviewed and tested later, and `custopenai` needs a unit test and a PR.

Hi, thank you for the answer.
I use Python 3.11 but still get the API key error; I'm not sure if I'm doing something wrong. Could you explain further what you mean by "LM-studio only works on chat-completion mode, and client.completion.create misbehaves"? It seems to work for me sometimes with the newly implemented class, but it often produces coding errors; I thought maybe that is because the LLM is not as good at coding. And what do you mean by "there should be (possible fake) information leak in the above answer"?

Thank you, it would help me a lot if you could explain :D @yfyang86
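For reference, a sketch of the difference between the two modes mentioned above (request-body shapes only, following the OpenAI-style REST API): chat-completion takes a `messages` list, while the legacy completion endpoint takes a bare `prompt`, and LM Studio reportedly only serves the former reliably.

```python
# Chat-completion request body (the mode LM Studio serves reliably):
chat_payload = {
    "model": "local-model",
    "messages": [{"role": "user", "content": "Introduce yourself."}],
}

# Legacy completion request body (reported to misbehave with LM Studio):
legacy_payload = {
    "model": "local-model",
    "prompt": "Introduce yourself.",
}
```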
🚀 The feature
Custom OpenAI-like API support
Per anhp's request: https://github.com/gventuri/pandas-ai/issues/799#issue-2022067003
Motivation, pitch
For example, LM Studio exposes an OpenAI-compatible local server (see the client example above).
Alternatives
No response
Additional context
I've added some examples with some screenshots of the settings in cust-openai. Please check out the `cust-openai` branch (my fork).