jackmpcollins / magentic

Seamlessly integrate LLMs as Python functions
https://magentic.dev/
MIT License
2k stars 96 forks

ValueError raised for generate_verification_questions in Chain of Verification example notebook #33

Closed jackmpcollins closed 1 year ago

jackmpcollins commented 1 year ago

@bitsnaps

https://github.com/jackmpcollins/magentic/issues/31#issuecomment-1737785037

I didn't want to file a new issue for this error:

ValueError: String was returned by model but not expected. You may need to update your prompt to encourage the model to return a specific type.

at this line:

verification_questions = await generate_verification_questions(query, baseline_response)

Is this issue related to that one? P.S. Here is the output of the previous notebook cell:

Sure, here are a few politicians born in New York, New York:
1. Hillary Clinton
2. Donald Trump
3. Franklin D. Roosevelt
4. Rudy Giuliani
5. Theodore Roosevelt

jackmpcollins commented 1 year ago

@bitsnaps A few questions to help debug

  1. Have you made any changes to the example notebook or are you running it as-is?
  2. Was this a one-off error, happens some times but not others, or happens every time?
  3. Have you changed the model by setting the MAGENTIC_OPENAI_MODEL environment variable or setting the model parameter in the prompt decorator?
bitsnaps commented 1 year ago
jackmpcollins commented 1 year ago

Can you please share a small amount of code that reproduces the issue so I can run it myself?

This error should only be possible for functions with a union return type that doesn't include str, e.g. list[str] | bool, or for functions that have functions provided. So it should only happen with generate_verification_questions if the return type has been changed from list[str].

Also, it's worth restarting your notebook and running it top-to-bottom to make sure this is not due to the notebook using cached versions of these functions.

bitsnaps commented 1 year ago

> Can you share a small amount of code that reproduces the issue please so I can run it myself.
>
> This error should only be possible for functions with a union return type that doesn't include str e.g. list[str] | bool or that have functions provided. So it should only happen with generate_verification_questions if the return type has been changed from list[str].
>
> Also it's worth restarting your notebook and running it top-to-bottom to make sure this is not due to the notebook using a cached version of these functions.

Here is the simplest reproducible code in Colab. The only thing I did differently was to uninstall the pre-installed TensorFlow before installing magentic, just to avoid potential conflicts with some peer dependencies (of course, you'll need to provide the OpenAI key...)

jackmpcollins commented 1 year ago

@bitsnaps The Colab notebook works for me! I've restarted and re-run it several times with no issues.

The original error message String was returned by model but not expected., when it comes from the function

@prompt("Create a Superhero named {name}.")
def create_superhero(name: str) -> Superhero:
    ...

indicates that the model being used does not support function calling, because when there is a single structured output type, magentic forces function calling in order to return it.
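
(For reference, Superhero in the snippet above is a structured output type; in magentic's examples it is a pydantic BaseModel. A rough dataclass stand-in, with field names inferred from the Superhero(...) output posted later in this thread, would look like:)

```python
from dataclasses import dataclass, field

# Rough stand-in for the Superhero model from the magentic examples
# (the real one is a pydantic BaseModel); fields inferred from the
# Superhero(...) repr shown in this thread.
@dataclass
class Superhero:
    name: str
    age: int
    power: str
    enemies: list[str] = field(default_factory=list)

hero = Superhero(
    name="Garden Man",
    age=30,
    power="Control over plants",
    enemies=["Pollution Man", "Concrete Beast"],
)
```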

I notice you have openai.api_base commented out. If you set this, for example to use Azure OpenAI Service, make sure you are using a model that supports function calling.

Maybe it's worth trying the OpenAI weather function calling example using the openai Python package directly, to ensure that function calling works at that level.
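
As a rough sketch of that sanity check, the request body for the classic get_current_weather example looks like the following (the field names follow the OpenAI function-calling API, but treat the exact values as illustrative). If the endpoint rejects the functions field, the model does not support function calling, which would also explain the magentic error above.

```python
# Illustrative request body for the OpenAI weather function-calling example;
# a model/endpoint that supports function calling should accept "functions"
# and "function_call" and may reply with a function_call instead of text.
weather_function = {
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "City and state, e.g. San Francisco, CA",
            },
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}

request_body = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "What's the weather in Boston?"}],
    "functions": [weather_function],
    "function_call": "auto",
}
```

Sending this (e.g. via openai.ChatCompletion.create(**request_body) in the openai 0.x package) against a non-function-calling model or proxy is a quick way to isolate the problem from magentic.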

bitsnaps commented 1 year ago

I tried the same notebook, now using GPT-4, and it worked on the first shot. Here is the output:

Superhero(name='Garden Man', age=30, power='Control over plants', enemies=['Pollution Man', 'Concrete Beast'])

You can close the issue.