haesleinhuepf closed this issue 3 months ago
git-bob think about this
This message was generated by git-bob (version: 0.2.3, model: claude-3-5-sonnet-20240620), an experimental AI-based assistant. It can make mistakes and has limitations. Check its messages carefully.
The issue appears to be caused by Context.model being None when ask_llm is called. This happens because the LLM infrastructure is not initialized before the first call to ask_llm.
To fix this, we should modify the ask_llm function in _utilities.py to initialize the assistant if it hasn't been done already. Here's the proposed change:
def ask_llm(prompt, image=None, chat_history=[]):
    """Ask the language model a simple question and return the response."""
    from ._machinery import Context, init_assistant

    # Lazily initialize the LLM infrastructure the first time ask_llm is called.
    if Context.model is None:
        init_assistant()

    return generate_response(chat_history=chat_history,
                             image=image,
                             model=Context.model,
                             system_prompt="",
                             user_prompt=prompt,
                             vision_system_prompt="")
This modification ensures that the LLM infrastructure is initialized before attempting to generate a response, resolving the NoneType error.
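As a quick sanity check (a minimal sketch, assuming the package is importable as git_bob and that Context and ask_llm live in _machinery and _utilities as shown above; the prompt is arbitrary), the very first call to ask_llm should now trigger the lazy initialization instead of failing:

from git_bob._machinery import Context
from git_bob._utilities import ask_llm

assert Context.model is None           # nothing has been initialized yet
answer = ask_llm("Say hello in one short sentence.")
assert Context.model is not None       # init_assistant() ran lazily inside ask_llm
print(answer)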
Awesome! git-bob implement exactly this!
When calling ask_llm first, the underlying infrastructure is not initialized:
Error:
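A minimal way to reproduce the failure (a sketch, assuming the package is importable as git_bob; the prompt is arbitrary) is to call ask_llm before anything else has set up Context.model:

from git_bob._utilities import ask_llm

# Without a prior init_assistant() call, Context.model is still None,
# so generate_response() is handed model=None and fails with a NoneType error.
answer = ask_llm("What is git-bob?")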