The bug
Using "gpt-3.5-turbo" won't raise error but not streaming result #244
Using "gpt-3.5-turbo-0613" raise this error:
Traceback (most recent call last):
  File "d:\test\.venv\lib\site-packages\guidance\_program_executor.py", line 111, in run
    await self.visit(self.parse_tree, VariableStack([self.program._variables], self))
  File "d:\test\.venv\lib\site-packages\guidance\_program_executor.py", line 535, in visit
    visited_children.append(await self.visit(child, variable_stack, inner_next_node, inner_next_next_node, inner_prev_node, node, parent_node))
  File "d:\test\.venv\lib\site-packages\guidance\_program_executor.py", line 500, in visit
    command_output = await command_function(*positional_args, **named_args)
  File "d:\test\.venv\lib\site-packages\guidance\library\_assistant.py", line 13, in assistant
    return await role(name="assistant", hidden=hidden, _parser_context=_parser_context)
  File "d:\test\.venv\lib\site-packages\guidance\library\_role.py", line 17, in role
    new_content += await parser.visit(
  File "d:\test\.venv\lib\site-packages\guidance\_program_executor.py", line 535, in visit
    visited_children.append(await self.visit(child, variable_stack, inner_next_node, inner_next_next_node, inner_prev_node, node, parent_node))
  File "d:\test\.venv\lib\site-packages\guidance\_program_executor.py", line 535, in visit
    visited_children.append(await self.visit(child, variable_stack, inner_next_node, inner_next_next_node, inner_prev_node, node, parent_node))
  File "d:\test\.venv\lib\site-packages\guidance\_program_executor.py", line 256, in visit
    visited_children = [await self.visit(child, variable_stack, next_node, next_next_node, prev_node, node, parent_node) for child in node]
  File "d:\test\.venv\lib\site-packages\guidance\_program_executor.py", line 256, in <listcomp>
    visited_children = [await self.visit(child, variable_stack, next_node, next_next_node, prev_node, node, parent_node) for child in node]
  File "d:\test\.venv\lib\site-packages\guidance\_program_executor.py", line 353, in visit
    command_output = await command_function(*positional_args, **named_args)
  File "d:\test\.venv\lib\site-packages\guidance\library\_gen.py", line 163, in gen
    async for resp in gen_obj:
  File "d:\test\.venv\lib\site-packages\guidance\llms\_openai.py", line 186, in stream_then_save
    async for curr_out in gen:
  File "d:\test\.venv\lib\site-packages\guidance\llms\_openai.py", line 45, in add_text_to_chat_mode_generator
    c['text'] = c['delta']['content']
  File "d:\test\.venv\lib\site-packages\openai\openai_object.py", line 71, in __setitem__
    raise ValueError(
ValueError: You cannot set text to an empty string. We interpret empty strings as None in requests.You may set {
  "delta": {
    "content": "",
    "role": "assistant"
  },
  "finish_reason": null,
  "index": 0
}.text = None to delete the property

Error in program: You cannot set text to an empty string. We interpret empty strings as None in requests.You may set {
  "delta": {
    "content": "",
    "role": "assistant"
  },
  "finish_reason": null,
  "index": 0
}.text = None to delete the property
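The root cause is visible in the last two frames: the first chunk of a streamed chat completion carries an empty delta ({"content": "", "role": "assistant"}), and add_text_to_chat_mode_generator copies that empty string into the chunk's "text" field, which OpenAIObject.__setitem__ refuses. A minimal sketch that reproduces the rejection in isolation (this assumes the pre-1.0 openai package, which ships openai.openai_object):

from openai.openai_object import OpenAIObject

# Simulate the first chunk of a chat stream: an empty content delta.
chunk = OpenAIObject()
chunk["delta"] = {"content": "", "role": "assistant"}

# The same assignment guidance makes at guidance/llms/_openai.py line 45:
chunk["text"] = chunk["delta"]["content"]  # ValueError: cannot set text to an empty string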
To Reproduce
Give a full working code snippet that can be pasted into a notebook cell or python file. Make sure to include the LLM load step so we know which model you are using.
import guidance

guidance.llm = guidance.llms.OpenAI("gpt-3.5-turbo-0613")

program = guidance('''
{{#system~}}
You are a helpful assistant.
{{~/system}}
{{#user}}
Write me a long poem.
{{/user}}
{{#assistant~}}
{{gen "answer" save_prompt="prompt" max_tokens=300}}
{{~/assistant}}
''')  # type: ignore

# Stream the program and print each new piece of the answer as it arrives.
pos = 0
for p in program(stream=True, silent=True, caching=False):
    val = p.get("answer", "")
    print(val[pos:], end="", flush=True)
    pos = len(val)
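Until this is fixed upstream, the crash can be avoided by guarding the copy so empty deltas are skipped. The sketch below is a suggestion, not the actual upstream function: the name, the failing assignment, and the chunk shape come from the traceback, while the surrounding loop structure is an assumption and the emptiness guard is the proposed change.

# A possible guarded rewrite for guidance/llms/_openai.py::add_text_to_chat_mode_generator.
# The async loop structure here is assumed from the traceback, not copied from the source.
async def add_text_to_chat_mode_generator(chat_mode):
    async for resp in chat_mode:
        for c in resp.get("choices", []):
            content = c["delta"].get("content")
            if content:  # skip the empty first chunk; OpenAIObject rejects "" values
                c["text"] = content
        yield resp

With a guard like this, the role-announcement chunk (content == "") passes through without a "text" field and streaming proceeds; chunks carrying real content are copied as before.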
System info (please complete the following information):
The bug Using "gpt-3.5-turbo" won't raise error but not streaming result #244
Using "gpt-3.5-turbo-0613" raise this error:
To Reproduce Give a full working code snippet that can be pasted into a notebook cell or python file. Make sure to include the LLM load step so we know which model you are using.
System info (please complete the following information):
Guidance Version (guidance.__version__):