haesleinhuepf / bia-bob

BIA Bob is a Jupyter+LLM-based assistant for interacting with image data and for working on Bio-image Analysis tasks.
BSD 3-Clause "New" or "Revised" License

ask_llm does not initialize LLM #190

Closed · haesleinhuepf closed 3 months ago

haesleinhuepf commented 3 months ago

When ask_llm is called first (before any other bia-bob function), the underlying infrastructure is not initialized:

from bia_bob import ask_llm
ask_llm("Give...

Error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[2], line 1
----> 1 code = ask_llm("Give me a prolog example program involving birds, humans and lizards. Respond with the prolog code only.")
      2 code

File C:\structure\code\bia-bob\src\bia_bob\_utilities.py:7, in ask_llm(prompt, image, chat_history)
      5 """Ask the language model a simple question and return the response."""
      6 from ._machinery import Context
----> 7 return generate_response(chat_history=chat_history,
      8                   image=image,
      9                   model=Context.model,
     10                   system_prompt="",
     11                   user_prompt=prompt,
     12                   vision_system_prompt="")

File C:\structure\code\bia-bob\src\bia_bob\_utilities.py:83, in generate_response(chat_history, image, model, system_prompt, user_prompt, vision_system_prompt)
     78 if Context.endpoint is not None:
     79     full_response = generate_response_from_openai(model, system_prompt, user_prompt, chat_history, image,
     80                                                   base_url=Context.endpoint, api_key=Context.api_key,
     81                                                   vision_model=Context.vision_model,
     82                                                   vision_system_prompt=vision_system_prompt)
---> 83 elif "gpt-" in model:
     84     full_response = generate_response_from_openai(model, system_prompt, user_prompt, chat_history, image,
     85                                                   vision_model=Context.vision_model,
     86                                                   vision_system_prompt=vision_system_prompt)
     87 elif model == "gemini" or model == "gemini-pro" or model == "gemini-pro-vision":

TypeError: argument of type 'NoneType' is not iterable
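
The TypeError comes from the membership test "gpt-" in model: at this point Context.model is still None, and the in operator on None raises exactly this error. A minimal standalone reproduction, independent of bia-bob:

model = None       # what Context.model is before any initialization
"gpt-" in model    # TypeError: argument of type 'NoneType' is not iterable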
haesleinhuepf commented 3 months ago

git-bob think about this

github-actions[bot] commented 3 months ago

This message was generated by git-bob (version: 0.2.3, model: claude-3-5-sonnet-20240620), an experimental AI-based assistant. It can make mistakes and has limitations. Check its messages carefully.

The issue appears to be caused by Context.model being None when ask_llm is called. This happens because the LLM infrastructure is not initialized before the first call to ask_llm.

To fix this, we should modify the ask_llm function in _utilities.py to initialize the assistant if it hasn't been done already. Here's the proposed change:

def ask_llm(prompt, image=None, chat_history=[]):
    """Ask the language model a simple question and return the response."""
    from ._machinery import Context, init_assistant
    if Context.model is None:
        init_assistant()
    return generate_response(chat_history=chat_history,
                             image=image,
                             model=Context.model,
                             system_prompt="",
                             user_prompt=prompt,
                             vision_system_prompt="")

This modification ensures that the LLM infrastructure is initialized before attempting to generate a response, resolving the NoneType error.
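
With this change, a first call such as the one from the issue should work without any prior setup (a sketch; it assumes init_assistant picks a usable default model, e.g. from an available API key):

from bia_bob import ask_llm

# On the first call Context.model is None, so init_assistant() runs
# before generate_response is invoked:
code = ask_llm("Give me a prolog example program involving birds, humans and lizards. Respond with the prolog code only.")
print(code)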

haesleinhuepf commented 3 months ago

Awesome! git-bob implement exactly this!
