xiaozhi-agent / pylmkit

PyLMKit: helping users quickly build practical large language model applications
https://www.yuque.com/txhy/pylmkit
Apache License 2.0

new LLM #4

Closed. alaisgood closed this issue 7 months ago.

alaisgood commented 7 months ago

I wrote my own LLM class based on your code, but at runtime it fails with:

```
ChatNvidia.invoke() missing 1 required positional argument: 'query'
  \pylmkit\app\roleplay.py", line 31, in invoke
    response = self.model.invoke()
  File "C:\Users\dscshap1033\Documents\VSPython\python-3.11.8-embed-amd64\Lib\site-packages\pylmkit\core\base.py", line 322, in run
    result = obj(**self.input_kwargs)
```

When adding a new LLM, apart from adding a class under `pylmkit\llms`, what else needs to be modified?

```python
import os

from openai import OpenAI


class ChatNvidia(object):
    def __init__(self, nvidia_apikey="", model="meta/llama2-70b",
                 temperature=0.6, top_p=0.7, max_tokens=1024):
        self.api_key = os.environ.get("nvidia_apikey", nvidia_apikey)
        self.model = model
        self.temperature = temperature
        self.top_p = top_p
        self.max_tokens = max_tokens

    def invoke(self, query, system_prompt="You need to answer user questions", **kwargs):
        # Fill in your own API key
        client = OpenAI(base_url="https://integrate.api.nvidia.com/v1", api_key=self.api_key)
        response = client.chat.completions.create(
            model=self.model,  # name of the model to call
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": query}
            ],
            stream=False,
            temperature=self.temperature,
            top_p=self.top_p,
            max_tokens=self.max_tokens,
            **kwargs
        )
        return response.choices[0].message.content
```
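Besides the new class file itself, a new model class typically also needs to be re-exported from the package's `pylmkit/llms/__init__.py` so that callers can import it as `from pylmkit.llms import ChatNvidia`. A hypothetical sketch (the module filename is an assumption, not pylmkit's actual layout):

```python
# pylmkit/llms/__init__.py (hypothetical excerpt)
from pylmkit.llms._nvidia import ChatNvidia  # module name is an assumption
```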
alaisgood commented 7 months ago

Found the cause: a parenthesis was missing in `pylmkit/llms/__init__.py`. Do you think it is worth adding an NVIDIA model as well?

```python
import os

from openai import OpenAI


class ChatNvidia(object):
    def __init__(self, nvidia_apikey="", model="meta/llama2-70b",
                 temperature=0.6, top_p=0.7, max_tokens=1024):
        self.api_key = os.environ.get("nvidia_apikey", nvidia_apikey)
        self.model = model
        self.temperature = temperature
        self.top_p = top_p
        self.max_tokens = max_tokens

    def invoke(self, query, system_prompt="You need to answer user questions", **kwargs):
        # Fill in your own API key
        client = OpenAI(base_url="https://integrate.api.nvidia.com/v1", api_key=self.api_key)
        response = client.chat.completions.create(
            model=self.model,  # name of the model to call
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": query}
            ],
            stream=False,
            temperature=self.temperature,
            top_p=self.top_p,
            max_tokens=self.max_tokens,
            **kwargs
        )
        return response.choices[0].message.content

    def stream(self, query, system_prompt="You need to answer user questions", **kwargs):
        client = OpenAI(base_url="https://integrate.api.nvidia.com/v1", api_key=self.api_key)
        response = client.chat.completions.create(
            model=self.model,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": query}
            ],
            stream=True,  # stream tokens back incrementally
            temperature=self.temperature,
            top_p=self.top_p,
            max_tokens=self.max_tokens,
            **kwargs
        )
        for chunk in response:
            yield chunk.choices[0].delta.content
```
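A missing pair of parentheses is consistent with the original `TypeError`. A minimal sketch (not pylmkit's actual code) of one way that happens: if the class itself gets bound where an instance was expected, the first positional argument is consumed by `self` and `query` is reported missing:

```python
class ChatNvidia:
    def invoke(self, query, system_prompt="You need to answer user questions"):
        return query

model = ChatNvidia  # note the missing () -- this binds the class, not an instance

try:
    model.invoke("hello")  # "hello" fills `self`, so `query` is reported missing
except TypeError as e:
    print(e)  # e.g. "ChatNvidia.invoke() missing 1 required positional argument: 'query'"
```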
xiaozhi-agent commented 7 months ago

As a contributor, you can create a new branch, upload the code you added and tested locally to GitHub (including test examples for the calls and a description of what was added), and submit a pull request; once it passes review it can be released in the next version.
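The contribution workflow above can be sketched with git. Branch and file names below are placeholders, and the demo runs in a throwaway repository standing in for an actual pylmkit clone:

```shell
set -e
repo=$(mktemp -d)                  # throwaway repo standing in for a pylmkit clone
cd "$repo"
git init -q
git checkout -q -b add-chatnvidia  # 1. create a feature branch
echo "class ChatNvidia: ..." > chat_nvidia.py
git add chat_nvidia.py             # 2. stage the new model plus test examples
git -c user.name=you -c user.email=you@example.com \
    commit -q -m "Add ChatNvidia (invoke/stream) with usage example"
git branch --show-current          # 3. push this branch and open a pull request
```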
