Closed · kanqgg closed this 1 year ago
I ran into this problem too, and I suspect it happens while parameters are being forwarded inside the source code. In `ie_prompter.get_openai_result()` you can configure parameters such as `temperature` and `max_tokens`, with defaults `temperature=0` and `max_tokens=64`. With the defaults I get the error "64 is greater than the maximum of 2 - 'temperature'". If I change the call to `ie_prompter.get_openai_result(temperature=0.2)`, it still reports "64 is greater than the maximum of 2 - 'temperature'". And when I call `ie_prompter.get_openai_result(temperature=0.2, max_tokens=4)`, it actually reports "4 is greater than the maximum of 2 - 'temperature'"! Could it be that, during parameter forwarding in the source code, the value of `max_tokens` is mistakenly being passed as `temperature`?
Hi, this is probably a bug in the easyinstruct package. We will contact the relevant teammates to fix it as soon as possible.
@GoooDte Thanks a lot. Here is my error log, which I hope helps with the fix; I will also double-check whether anything in my own setup is wrong:

```
Traceback (most recent call last):
  File "D:/pythonTry/Try2/InstructionKGC/GPT_KGC/run.py", line 45, in main
    result = ie_prompter.get_openai_result(temperature=0.2, max_tokens=3,
  File "C:\Users\hinelon\AppData\Roaming\Python\Python38\site-packages\easyinstruct\prompts\ie_prompt.py", line 276, in get_openai_result
    openai_result = super().get_openai_result(engine, temperature, max_tokens, top_p, frequency_penalty, presence_penalty)
  File "C:\Users\hinelon\AppData\Roaming\Python\Python38\site-packages\easyinstruct\prompts\base_prompt.py", line 31, in get_openai_result
    response = openai.Completion.create(
  File "C:\Users\hinelon\AppData\Roaming\Python\Python38\site-packages\openai\api_resources\completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "C:\Users\hinelon\AppData\Roaming\Python\Python38\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "C:\Users\hinelon\AppData\Roaming\Python\Python38\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "C:\Users\hinelon\AppData\Roaming\Python\Python38\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "C:\Users\hinelon\AppData\Roaming\Python\Python38\site-packages\openai\api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: 3 is greater than the maximum of 2 - 'temperature'
```
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
Everyone, please check whether the easyinstruct you have installed is the latest version.
@GoooDte Hi, my easyinstruct is version 0.0.2, and I just reinstalled the latest one (the latest shown is still 0.0.2), but I still get the error above.
- easyinstruct 0.0.2
- openai 0.27.4
- hydra-core 1.3.2
For now you can clone the easyinstruct code locally and install it with `python setup.py install`. https://github.com/zjunlp/EasyInstruct contains the latest code; the pip package may not be up to date, and we will update it today.
Hi, I found the problem in the source code. At easyinstruct\prompts\ie_prompt.py, line 276, the call `openai_result = super().get_openai_result(engine, temperature, max_tokens, top_p, frequency_penalty, presence_penalty)` forwards its arguments positionally. However, the `get_openai_result` defined at easyinstruct\prompts\base_prompt.py, line 31, has the signature `(self, engine = "gpt-3.5-turbo", system_message: Optional[str] = "You are a helpful assistant.", temperature: Optional[float] = 0, max_tokens: Optional[int] = 64, top_p: Optional[float] = 1.0, n: Optional[int] = 1, frequency_penalty: Optional[float] = 0.0, presence_penalty: Optional[float] = 0.0)`. Because of the extra `system_message` parameter, the positional arguments from ie_prompt.py line 276 land in the wrong parameter slots of the base_prompt.py function.
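To illustrate the shift, here is a minimal, self-contained sketch with simplified stand-in classes (not the actual easyinstruct code): the parent's extra `system_message` parameter pushes every positionally forwarded argument one slot to the right.

```python
from typing import Optional

class BasePrompt:
    # Simplified stand-in for base_prompt.py's get_openai_result signature.
    def get_openai_result(
        self,
        engine: str = "gpt-3.5-turbo",
        system_message: Optional[str] = "You are a helpful assistant.",
        temperature: Optional[float] = 0,
        max_tokens: Optional[int] = 64,
        top_p: Optional[float] = 1.0,
        n: Optional[int] = 1,
        frequency_penalty: Optional[float] = 0.0,
        presence_penalty: Optional[float] = 0.0,
    ):
        # Return what each parameter actually received, to expose the shift.
        return {"system_message": system_message,
                "temperature": temperature,
                "max_tokens": max_tokens}

class IEPrompt(BasePrompt):
    def get_openai_result(self, engine="gpt-3.5-turbo", temperature=0,
                          max_tokens=64, top_p=1.0,
                          frequency_penalty=0.0, presence_penalty=0.0):
        # Buggy positional forwarding, as in ie_prompt.py line 276:
        # temperature fills system_message, max_tokens fills temperature, etc.
        return super().get_openai_result(engine, temperature, max_tokens,
                                         top_p, frequency_penalty,
                                         presence_penalty)

result = IEPrompt().get_openai_result(temperature=0.2, max_tokens=3)
# result["temperature"] is now 3 (the max_tokens value), which is exactly
# why the API rejects it with "3 is greater than the maximum of 2 - 'temperature'".
```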
The fix is simply to pass the arguments by keyword in the call at easyinstruct\prompts\ie_prompt.py, line 276: `openai_result = super().get_openai_result(engine=engine, temperature=temperature, max_tokens=max_tokens, top_p=top_p, frequency_penalty=frequency_penalty, presence_penalty=presence_penalty)`
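A sketch of the keyword-argument fix, again using simplified stand-in classes rather than the real easyinstruct code: forwarding by keyword makes each value reach the intended parameter regardless of position, and `system_message` keeps its default.

```python
from typing import Optional

class BasePrompt:
    # Simplified stand-in for base_prompt.py's get_openai_result signature.
    def get_openai_result(
        self,
        engine: str = "gpt-3.5-turbo",
        system_message: Optional[str] = "You are a helpful assistant.",
        temperature: Optional[float] = 0,
        max_tokens: Optional[int] = 64,
        top_p: Optional[float] = 1.0,
        n: Optional[int] = 1,
        frequency_penalty: Optional[float] = 0.0,
        presence_penalty: Optional[float] = 0.0,
    ):
        return {"system_message": system_message,
                "temperature": temperature,
                "max_tokens": max_tokens}

class IEPrompt(BasePrompt):
    def get_openai_result(self, engine="gpt-3.5-turbo", temperature=0,
                          max_tokens=64, top_p=1.0,
                          frequency_penalty=0.0, presence_penalty=0.0):
        # Fixed: forward by keyword so no positional slot can shift.
        return super().get_openai_result(
            engine=engine,
            temperature=temperature,
            max_tokens=max_tokens,
            top_p=top_p,
            frequency_penalty=frequency_penalty,
            presence_penalty=presence_penalty,
        )

result = IEPrompt().get_openai_result(temperature=0.2, max_tokens=3)
# temperature and max_tokens now arrive where the caller intended.
```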
Thanks for the careful investigation. We had already found this bug, and it is fixed in versions after 0.0.2; for now you can install the latest easyinstruct locally from source.
@GoooDte OK, thanks for the debugging; I will install the new version later.
The new version of easyinstruct has been released and can now be installed directly via pip.