MadcowD / ell

A language model programming library.
http://docs.ell.so/
MIT License

AttributeError: 'Groq' object has no attribute 'beta' when using ELL with Groq client and a Complex Response Format #323

Open idvorkin opened 1 month ago

idvorkin commented 1 month ago

The Groq client works with a simple response format, but it fails when you pass a complex (structured) response format. If this isn't supported, could we include a better error message?

E.g.

@ell.complex(
    model="llama-3.2-90b-vision-preview",
    response_format=ImageRecognitionResult,
)
def prompt_recognize(image):
    ...

Results in:

  File "/Users/idvorkin/gits/nlp/.venv/lib/python3.12/site-packages/ell/lmp/_track.py", line 118, in tracked_func
    else func_to_track(*fn_args, _invocation_origin=invocation_id, **fn_kwargs, )
         │              │                           │                └ {}
         │              │                           └ 'invocation-3c24dedfd205432dd20590e9e132ad34'
         │              └ (<PIL.WebPImagePlugin.WebPImageFile image mode=RGB size=2000x1093 at 0x127FC60C0>,)
         └ <function prompt_recognize at 0x127fc89a0>
  File "/Users/idvorkin/gits/nlp/.venv/lib/python3.12/site-packages/ell/lmp/complex.py", line 68, in model_call
    (result, final_api_params, metadata) = provider.call(ell_call, origin_id=_invocation_origin, logger=_logger if should_log else None)
                                           │        │    │                   │                          │          └ False
                                           │        │    │                   │                          └ <function model_usage_logger_post_intermediate.<locals>.log_stream_chunk at 0x1330fdd00>
                                           │        │    │                   └ 'invocation-3c24dedfd205432dd20590e9e132ad34'
                                           │        │    └ EllCallParams(model='llama-3.2-90b-vision-preview', messages=[Message(role='system', content=[ContentBlock(text=
                                           │        │          You are ...
                                           │        └ <function Provider.call at 0x124971da0>
                                           └ <ell.providers.groq.GroqProvider object at 0x124c4ede0>
  File "/Users/idvorkin/gits/nlp/.venv/lib/python3.12/site-packages/ell/provider.py", line 121, in call
    call = self.provider_call_function(ell_call.client, final_api_call_params)
           │    │                      │        │       └ {'response_format': <class '__main__.ImageRecognitionResult'>, 'model': 'llama-3.2-90b-vision-preview', 'messages': [{'role':...
           │    │                      │        └ <groq.Groq object at 0x127f581a0>
           │    │                      └ EllCallParams(model='llama-3.2-90b-vision-preview', messages=[Message(role='system', content=[ContentBlock(text=
           │    │                            You are ...
           │    └ <function OpenAIProvider.provider_call_function at 0x124c47e20>
           └ <ell.providers.groq.GroqProvider object at 0x124c4ede0>
  File "/Users/idvorkin/gits/nlp/.venv/lib/python3.12/site-packages/ell/providers/openai.py", line 25, in provider_call_function
    return client.beta.chat.completions.parse
           └ <groq.Groq object at 0x127f581a0>

AttributeError: 'Groq' object has no attribute 'beta'

Similar to #253
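The traceback shows the root cause: ell's OpenAI provider routes structured-output calls to `client.beta.chat.completions.parse`, but `groq.Groq` exposes no `beta` namespace. A minimal sketch of that dispatch, using stand-in classes instead of the real `openai`/`groq` clients (the real ones are not needed to reproduce the attribute error):

```python
class FakeOpenAIClient:
    """Stand-in: the OpenAI client exposes client.beta.chat.completions.parse."""
    class beta:
        class chat:
            class completions:
                @staticmethod
                def parse(**kwargs):
                    return "parsed"


class FakeGroqClient:
    """Stand-in: groq.Groq has chat.completions.create but no `beta` attribute."""
    class chat:
        class completions:
            @staticmethod
            def create(**kwargs):
                return "created"


def provider_call_function(client, api_params):
    # Mirrors the behavior at ell/providers/openai.py line 25: structured-output
    # calls are routed to client.beta.chat.completions.parse unconditionally.
    if api_params.get("response_format"):
        return client.beta.chat.completions.parse  # AttributeError on Groq
    return client.chat.completions.create


# Simple format works with either client:
provider_call_function(FakeGroqClient(), {})
# Complex format only works when the client has a `beta` namespace:
provider_call_function(FakeOpenAIClient(), {"response_format": dict})
try:
    provider_call_function(FakeGroqClient(), {"response_format": dict})
except AttributeError as e:
    # 'FakeGroqClient' object has no attribute 'beta'
    print(e)
```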

alex-dixon commented 4 weeks ago

My guess is that we expect `beta` to be the namespace we call specifically for structured outputs, just like the OpenAI client.

If Groq supports structured output by some other means, then we need to update the Groq provider's `provider_call_function` method to return the appropriate function for structured outputs.

If Groq does not support structured outputs (or does, but only for certain models), this same method could be updated to detect that the call involves a structured output (i.e. `response_format` is not null), optionally check for a supported model, and throw a more informative error ("groq does not support structured outputs" / "only for these models but got ").

The trade-off here is maintenance as Groq adds new features. If there's an easy way to defer the source of truth about structured-output support to the groq package itself, we'd definitely prefer that.