hegelai / prompttools

Open-source tools for prompt testing and experimentation, with support for both LLMs (e.g. OpenAI, LLaMA) and vector databases (e.g. Chroma, Weaviate, LanceDB).
http://prompttools.readthedocs.io
Apache License 2.0

AzureOpenAIServiceExperiment notebook: TypeError: Missing required arguments; Expected either ('model' and 'prompt') or ('model', 'prompt' and 'stream') arguments to be given #116

Closed: baswenneker closed this issue 9 months ago

baswenneker commented 9 months ago

🐛 Describe the bug

I'm trying to get the AzureOpenAIServiceExperiment notebook running (with my own Azure credentials), but it gives me the following error when running:

experiment.run()
experiment.visualize()

The error is as follows, but the script keeps running, so I have to interrupt it:

Exception in thread Thread-4 (_process_queue):
Traceback (most recent call last):
  File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "/Users/bas/Development/HeadingFWD/prompttools/.venv/lib/python3.11/site-packages/sentry_sdk/integrations/threading.py", line 72, in run
    reraise(*_capture_exception())
  File "/Users/bas/Development/HeadingFWD/prompttools/.venv/lib/python3.11/site-packages/sentry_sdk/_compat.py", line 115, in reraise
    raise value
  File "/Users/bas/Development/HeadingFWD/prompttools/.venv/lib/python3.11/site-packages/sentry_sdk/integrations/threading.py", line 70, in run
    return old_run_func(self, *a, **kw)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/bas/Development/HeadingFWD/prompttools/.venv/lib/python3.11/site-packages/prompttools/requests/request_queue.py", line 37, in _process_queue
    self._do_task(fn, args)
  File "/Users/bas/Development/HeadingFWD/prompttools/.venv/lib/python3.11/site-packages/prompttools/requests/request_queue.py", line 48, in _do_task
    res = self._run(fn, args)
          ^^^^^^^^^^^^^^^^^^^
  File "/Users/bas/Development/HeadingFWD/prompttools/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/bas/Development/HeadingFWD/prompttools/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/bas/Development/HeadingFWD/prompttools/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
           ^^^^^^^^^^^^
  File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Cellar/python@3.11/3.11.3/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/Users/bas/Development/HeadingFWD/prompttools/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/Users/bas/Development/HeadingFWD/prompttools/.venv/lib/python3.11/site-packages/prompttools/requests/request_queue.py", line 59, in _run
    result = fn(**args)
             ^^^^^^^^^^
  File "/Users/bas/Development/HeadingFWD/prompttools/.venv/lib/python3.11/site-packages/openai/_utils/_utils.py", line 269, in wrapper
    raise TypeError(msg)
TypeError: Missing required arguments; Expected either ('model' and 'prompt') or ('model', 'prompt' and 'stream') arguments to be given

I've got version 0.0.43 installed (from pip). I started a clean virtualenv and installed only prompttools.

Any ideas?

NivekT commented 9 months ago

Hi @baswenneker, thanks for opening this issue. I will have a look today.

NivekT commented 9 months ago

The issue is due to a recent change to the Azure API within the openai library. Specifically, it used to use the keyword argument "engine" to specify the Azure deployment, but that has now been unified to "model", just like the regular OpenAI models.
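
For context, a minimal sketch of that rename in the openai Python client (the endpoint, API version, key, and deployment name below are placeholders, not values from this issue):

# Old style (openai < 1.0): the Azure deployment was passed as "engine"
import openai

openai.api_type = "azure"
openai.api_base = "https://<your-resource>.openai.azure.com/"
openai.api_version = "2023-05-15"
openai.api_key = "<your-azure-key>"
response = openai.Completion.create(engine="<your-deployment>", prompt="Hello")

# New style (openai >= 1.0): the deployment name goes into "model"
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_version="2023-05-15",
    api_key="<your-azure-key>",
)
response = client.completions.create(model="<your-deployment>", prompt="Hello")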

I have updated the source code to align our repo with the latest API. I updated the notebook and tested the change as well. Please re-install our library from source (GitHub) and make sure your openai library is at version >1.0.
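
For anyone landing here later: re-installing from source is typically something like pip install git+https://github.com/hegelai/prompttools.git (adjust to your setup). A minimal sanity check of the openai client version afterwards:

import openai

print(openai.__version__)  # should report 1.x or later, where the unified "model" keyword is used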

Let me know if we can help you with anything else, or re-open this issue. We always welcome feedback and suggestions!

quorak commented 7 months ago

You likely forgot to also update: https://github.com/hegelai/prompttools/blob/5a807328435d269d7ed17b53f86283e116e08244/prompttools/experiment/experiments/openai_completion_experiment.py#L156

It is referenced in the example notebook: https://github.com/hegelai/prompttools/blob/5a807328435d269d7ed17b53f86283e116e08244/examples/notebooks/AzureOpenAIServiceExperiment.ipynb
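
For illustration only, since the actual code at that line may look different: the change needed there is the same engine-to-model rename described above. A hypothetical sketch of such a fix, remapping a leftover old-style argument before the request is built:

# Hypothetical compatibility shim, not the library's actual code: map an
# old-style "engine" kwarg onto the "model" kwarg expected by openai >= 1.0.
def _normalize_azure_kwargs(kwargs: dict) -> dict:
    if "engine" in kwargs and "model" not in kwargs:
        kwargs = dict(kwargs)  # copy so the caller's dict is left untouched
        kwargs["model"] = kwargs.pop("engine")
    return kwargs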