Closed jackmpcollins closed 1 year ago
@bitsnaps A few questions to help debug:

- Which model are you using (GPT-3.5-turbo)?
- Are you setting it via the `MAGENTIC_OPENAI_MODEL` environment variable or via the `model` parameter in the `prompt` decorator?

Can you share a small amount of code that reproduces the issue please, so I can run it myself?

This error should only be possible for functions with a union return type that doesn't include `str` (e.g. `list[str] | bool`), or that have `functions` provided. So it should only happen with `generate_verification_questions` if the return type has been changed from `list[str]`.
Also it's worth restarting your notebook and running it top-to-bottom to make sure this is not due to the notebook using a cached version of these functions.
Here is the simplest reproducible code in Colab. The only thing I did differently was uninstall the pre-installed TensorFlow before installing magentic, just to avoid potential conflicts with peer dependencies (of course you'll need to provide the OpenAI key...)
@bitsnaps The Colab notebook works for me! I've restarted and re-run it several times with no issues.
The original error message `String was returned by model but not expected.`, when it comes from a function like
@prompt("Create a Superhero named {name}.")
def create_superhero(name: str) -> Superhero:
...
indicates that the model being used does not support function calling, because when there is a single structured output type magentic forces function calling to return this.
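To illustrate what forcing function calling means here (the actual `Superhero` definition isn't shown in this thread; the field names below are inferred from the GPT-4 output later on, and a stdlib dataclass stands in for the Pydantic model magentic would normally use): with a single structured return type, the model's reply arrives as function-call JSON arguments rather than plain text, and is parsed into that type:

```python
import json
from dataclasses import dataclass

@dataclass
class Superhero:  # stand-in for the real (presumably Pydantic) model
    name: str
    age: int
    power: str
    enemies: list[str]

# A model with function-calling support returns the output as JSON
# arguments, which are then parsed into the return type. A model
# without it replies with plain text, triggering the error above.
raw_arguments = (
    '{"name": "Garden Man", "age": 30, "power": "Control over plants",'
    ' "enemies": ["Pollution Man", "Concrete Beast"]}'
)
hero = Superhero(**json.loads(raw_arguments))
```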
I notice you have openai.api_base
commented out. If you set this, for example to use Azure OpenAI Service, make sure you are using a model that supports function calling.
Maybe it's worth you trying the OpenAI weather function calling example using the openai python package directly to ensure that function calling is working at that level.
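A minimal sketch of that check, with the request body built as a plain dict based on OpenAI's weather example (the model name is an assumption, and the request isn't actually sent here):

```python
# Function schema from OpenAI's function-calling weather example.
weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city, e.g. San Francisco",
            },
        },
        "required": ["location"],
    },
}

# Request body for the chat completions endpoint. If the model supports
# function calling, the response contains a function_call with JSON
# arguments instead of plain text content.
request_payload = {
    "model": "gpt-3.5-turbo",  # assumed; use whichever model you configured
    "messages": [{"role": "user", "content": "What's the weather in Boston?"}],
    "functions": [weather_function],
    "function_call": "auto",
}
```

If this request fails, or the model replies with plain text instead of a function call, the problem is at the API/model level rather than in magentic.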
I tried the same notebook using GPT-4 and it worked on the first try. Here is the output:
Superhero(name='Garden Man', age=30, power='Control over plants', enemies=['Pollution Man', 'Concrete Beast'])
You can close the issue.
@bitsnaps
https://github.com/jackmpcollins/magentic/issues/31#issuecomment-1737785037