Closed davidtan-tw closed 11 months ago
Managed to fix the error by importing the right function (`semantic_similarity`) from the right package (`prompttools.utils`). See fix:
Would it be right to say that the OpenAIChatExperiment.ipynb demo was importing the wrong function (`prompttools.utils.similarity.evaluate`)?
And as an aside, why does the semantic similarity between "George Washington" and "George Washinton" range between 0.14 and 0.35? I would have expected something like 0.99, or even 1.0.
Alright, so it turns out that was because I should have passed in a `List` of expected results (`expected=["George Washington"] * 4`) instead of a string:
I think it would be better if the function threw an error due to the type mismatch, telling the user that it expected `List[str]` instead of `str`, rather than failing silently with random semantic similarity scores. What do you think?
(As an aside, the reference code in the repo (https://github.com/hegelai/prompttools/blob/main/examples/notebooks/OpenAIChatExperiment.ipynb) works as expected but the Colab notebook is outdated and will throw an error)
Hey David, thanks for trying prompttools and mentioning these issues. I believe everything's up to date for that example on the HEAD of main
https://github.com/hegelai/prompttools/blob/main/examples/notebooks/OpenAIChatExperiment.ipynb
We'll make the error handling better.
Hi folks, thanks for creating this tool.
I'm trying out `prompttools` and was following the introductory example (OpenAIChatExperiment.ipynb) listed on the quickstart page and encountered this error. I can reproduce the error locally and in the provided Colab notebook.

🐛 Describe the bug
This is the line that raises an error:
And this is the error:
`TypeError: evaluate() missing 2 required positional arguments: 'response' and 'metadata'`
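For context, this is the generic Python behavior behind that message; the function below is illustrative only, mirroring the three-argument `evaluate(prompt, response, metadata)` signature implied by the error:

```python
# Minimal reproduction of the error class (illustrative names only): a
# row-level evaluator declares three positional parameters, but the
# caller supplies just one, so Python raises the TypeError seen above.
def evaluate(prompt, response, metadata):
    return response == metadata.get("expected")

try:
    evaluate("Who was the first US president?")
except TypeError as err:
    print(err)  # evaluate() missing 2 required positional arguments: 'response' and 'metadata'
```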