Closed sigren closed 2 months ago
Same here.
Hi @sigren and @renatocaliari! Can you please share your Assistant config?
```python
from phi.assistant import Assistant
from phi.llm.groq import Groq
from phi.tools.newspaper4k import Newspaper4k

assistant = Assistant(
    llm=Groq(model="llama3-groq-8b-8192-tool-use-preview", api_key=groq_api_key),
    description="[anything]",
    output_model=Conversation,  # Conversation is a user-defined Pydantic model
    tools=[Newspaper4k()],
    debug_mode=True,
    show_tool_calls=True,
)
assistant.print_response("any_url", markdown=True)
```
error:

```
groq.BadRequestError: Error code: 400 - {'error': {'message': "'response_format' json_object cannot be combined with tool/function calling", 'type': 'invalid_request_error'}}
```
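In other words, the request ends up containing both `tools` and `response_format={"type": "json_object"}`, which Groq's API rejects with a 400. A minimal client-side guard (a hypothetical helper, not part of phidata) that catches this combination before the request is sent could look like:

```python
def validate_groq_params(params: dict) -> dict:
    """Raise early if a request combines tool calling with JSON mode,
    which Groq's API rejects with a 400 invalid_request_error."""
    response_format = params.get("response_format") or {}
    if params.get("tools") and response_format.get("type") == "json_object":
        raise ValueError(
            "response_format json_object cannot be combined with tool/function calling"
        )
    return params
```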
Hey @renatocaliari and @sigren! I tested out the llama3-groq-8b-8192-tool-use-preview model, and it worked fine with the following code snippets:
```python
from phi.assistant import Assistant
from phi.llm.groq import Groq
from phi.tools.yfinance import YFinanceTools

assistant = Assistant(
    llm=Groq(model="llama3-groq-8b-8192-tool-use-preview"),
    tools=[YFinanceTools(stock_price=True, analyst_recommendations=True, company_info=True, company_news=True)],
    debug_mode=True,
    show_tool_calls=True,
    markdown=True,
)
assistant.print_response("What is the stock price of NVDA")
assistant.print_response("Write a comparison between NVDA and AMD, use all tools available.")
```
and
```python
from phi.assistant import Assistant
from phi.llm.groq import Groq
from phi.tools.newspaper4k import Newspaper4k

assistant = Assistant(
    llm=Groq(model="llama3-groq-8b-8192-tool-use-preview"),
    description="You can summarize newsletters",
    tools=[Newspaper4k()],
    debug_mode=True,
    show_tool_calls=True,
)
assistant.print_response("Can you say what tools you have access to and summarize this url? https://console.groq.com/docs/tool-use. Your response format must be json", markdown=True)
```
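Note that the second snippet asks for JSON in the prompt instead of setting `output_model`. If you still need structured output while tools are in use, one workaround (a sketch, not a phidata API) is to parse the JSON out of the model's text reply yourself:

```python
import json
import re


def extract_json(text: str):
    """Pull the first JSON object out of a model reply that may wrap it
    in prose or a markdown fence. Returns None if nothing parses."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
```

This trades the validation guarantees of `output_model` for compatibility with tool calling, so you may still want to validate the parsed dict against your Pydantic model afterwards.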
This only fails when you specify an `output_model`, which sets the `response_format` to `json_object` here: https://github.com/phidatahq/phidata/blob/7d694204c432e6d1ac89d9e5afdeea02e7a989d7/phi/assistant/assistant.py#L281-L282

Groq specifies in the response that "response_format: json_object cannot be combined with tool/function calling". To fix this I have made PR #1070, which ignores the `response_format` and sets it to `text` when using tools with Groq.
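The idea behind the fix can be sketched roughly like this (an illustration of the approach, not the actual patch in #1070):

```python
def build_request_kwargs(tools, response_format):
    """Build Groq request kwargs, falling back to plain text output when
    tools are present, since json_object mode cannot be combined with
    tool/function calling on Groq."""
    if tools and (response_format or {}).get("type") == "json_object":
        response_format = {"type": "text"}
    kwargs = {}
    if tools:
        kwargs["tools"] = tools
    if response_format:
        kwargs["response_format"] = response_format
    return kwargs
```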
I'm getting the error `ERROR 2 validation errors for save_to_file_and_run` multiple times, with different temperatures, using Groq Llama-3 (different variations) and the IMDB example:
```python
from phi.llm.openai import OpenAIChat
from phi.assistant.python import PythonAssistant
from phi.file.local.csv import CsvFile

python_assistant = PythonAssistant(
    llm=OpenAIChat(model="llama-3.1-70b-versatile", max_tokens=4000, temperature=0.0),
    files=[
        CsvFile(
            path="https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            description="Contains information about movies from IMDB.",
        )
    ],
    pip_install=True,
    show_tool_calls=True,
)
python_assistant.print_response("What is the average rating of movies?", markdown=True)
```
P.S. The same code was working in some previous versions on Colab; downloading the file manually didn't make any difference.

```
<class 'groq.BadRequestError'> Error code: 400 - {'error': {'message': "'response_format' json_object cannot be combined with tool/function calling", 'type': 'invalid_request_error'}}
```