phidatahq / phidata

Build AI Assistants with memory, knowledge and tools.
https://docs.phidata.com
Mozilla Public License 2.0
11.23k stars 1.67k forks

how to use web search in ollama? #214

Open win4r opened 5 months ago

win4r commented 5 months ago

how to use web search in ollama?

ysolanky commented 5 months ago

@win4r you can use web search in Ollama using our DuckDuckGo tool. Check out an example here:

from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo
from phi.llm.ollama import OllamaTools

assistant = Assistant(
    llm=OllamaTools(model="llama3"),
    tools=[DuckDuckGo()],
    show_tool_calls=True,
)

assistant.print_response("Whats happening in the US?", markdown=True)
win4r commented 5 months ago

[screenshot of the assistant's output]

After I modified the Python code you provided to use OpenAILike, the output looked like this. Does this indicate that it failed to connect to the search engine?

win4r commented 5 months ago

here is my code:

[screenshot of the code]

ysolanky commented 5 months ago

Can you add debug_mode=True as a param to the Assistant and share debug logs?
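
Something along these lines, reusing the example above (this is just the earlier snippet with the debug_mode flag added):

from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo
from phi.llm.ollama import OllamaTools

# Same assistant as before; debug_mode=True prints the full system prompt,
# tool calls and LLM responses to the console.
assistant = Assistant(
    llm=OllamaTools(model="llama3"),
    tools=[DuckDuckGo()],
    show_tool_calls=True,
    debug_mode=True,
)

assistant.print_response("Whats happening in the US?", markdown=True)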

ashpreetbedi commented 4 months ago

@win4r Ollama doesn't support tool calling directly, so using OpenAILike will not work with tools. Please use the OllamaTools LLM :)
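
For reference, the pattern that will not work is an OpenAILike LLM pointed at Ollama's OpenAI-compatible endpoint; tool calls are never executed that way. A rough sketch of that setup (the base_url and api_key values below are hypothetical placeholders for a local Ollama server):

from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo
from phi.llm.openai.like import OpenAILike

# Sketch of the OpenAILike setup that does NOT do tool calling with Ollama.
# base_url and api_key are placeholder values for a local Ollama server.
assistant = Assistant(
    llm=OpenAILike(
        model="llama3",
        base_url="http://localhost:11434/v1",
        api_key="ollama",
    ),
    tools=[DuckDuckGo()],  # these tools are never invoked with OpenAILike
    show_tool_calls=True,
)

Use OllamaTools (as in the example above) to get the tool-calling prompt format that works with local Ollama models.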

vdsasi commented 4 months ago

[screenshot of the error]

Getting this error while running with Ollama. I think this error is specific to the DuckDuckGo search tool.

vawterdada commented 4 months ago

@win4r you can use web search in Ollama using our DuckDuckGo tool. Check out an example here:

from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo
from phi.llm.ollama import OllamaTools

assistant = Assistant(
    llm=OllamaTools(model="llama3"),
    tools=[DuckDuckGo()],
    show_tool_calls=True,
)

assistant.print_response("Whats happening in the US?", markdown=True)

~/ai/phidata$ python assistant.py
DEBUG Debug logs enabled
DEBUG Assistant Run Start: 376c572a-899a-490e-836b-c3ca6b479138
DEBUG Functions from duckduckgo added to LLM.
DEBUG ---------- OllamaTools Response Start ----------
DEBUG ============== system ==============
DEBUG You are a function calling AI model with self-recursion.
You are provided with function signatures within XML tags.
You may use agentic frameworks for reasoning and planning to help with user query.
Please call a function and wait for function results to be provided to you in the next iteration.
Don't make assumptions about what values to plug into functions.
When you call a function, don't add any additional notes, explanations or white space.
Once you have called a function, results will be provided to you within XML tags.
Do not make assumptions about tool results if XML tags are not present since the function is not yet executed.
Analyze the results once you get them and call another function if needed.
Your final response should directly answer the user query with an analysis or summary of the results of function calls.

Here are the available tools:
<tools>
{
  "name": "duckduckgo_search",
  "description": "Use this function to search DuckDuckGo for a query.\n\nArgs:\n    query(str): The query to search for.\n    max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n    The result from DuckDuckGo.",
  "arguments": {
    "query": {
      "type": "string"
    },
    "max_results": {
      "type": "number"
    }
  },
  "returns": "str"
}
{
  "name": "duckduckgo_news",
  "description": "Use this function to get the latest news from DuckDuckGo.\n\nArgs:\n    query(str): The query to search for.\n    max_results (optional, default=5): The maximum number of results to return.\n\nReturns:\n    The latest news from DuckDuckGo.",
  "arguments": {
    "query": {
      "type": "string"
    },
    "max_results": {
      "type": "number"
    }
  },
  "returns": "str"
}
</tools>

Use the following pydantic model json schema for each tool call you will make: {'title': 'FunctionCall', 'type': 'object', 'properties': {'arguments': {'title': 'Arguments', 'type': 'object'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['arguments', 'name']}
For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call>

You must follow these instructions carefully:
<instructions>
1. At the very first turn you don't have <tool_results> so you shouldn't not make up the results.
2. To respond to the users message, you can use only one tool at a time.
3. When using a tool, only respond with the tool call. Nothing else. Do not add any additional notes, explanations or white space.
4. Do not stop calling functions until the task has been accomplished or you've reached max iteration of 10.
5. Use markdown to format your answers.
</instructions>

DEBUG ============== user ==============
DEBUG Whats happening in the US?
DEBUG Time to generate response: 2.4369s
DEBUG ============== assistant ==============
DEBUG What's going on in Europe, Asia and Africa?
These are just some of the questions that I hope to answer with this blog.
I will be writing about the latest events around the world. From political developments to social issues, I want to provide an objective
account of what is happening and why it matters.
So if you're looking for up-to-date information on current affairs, be sure to check out my blog!
DEBUG ---------- OllamaTools Response End ----------
DEBUG Assistant Run End: 376c572a-899a-490e-836b-c3ca6b479138
╭──────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Message  │ Whats happening in the US?                                                                                                 │
├──────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Response │ What's going on in Europe, Asia and Africa? These are just some of the questions that I hope to answer with this blog. I  │
│ (2.5s)   │ will be writing about the latest events around the world. From political developments to social issues, I want to provide │
│          │ an objective account of what is happening and why it matters. So if you're looking for up-to-date information on current  │
│          │ affairs, be sure to check out my blog!                                                                                     │
╰──────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

No matter how I configure the search function, it doesn't work. The same is true for Tavily: the search is never performed, and I only get the large model's own output.
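
The Tavily setup being described is roughly the following (a sketch, assuming the phi.tools.tavily TavilyTools toolkit and a TAVILY_API_KEY exported in the environment):

from phi.assistant import Assistant
from phi.tools.tavily import TavilyTools
from phi.llm.ollama import OllamaTools

# Sketch of the Tavily-based setup; requires TAVILY_API_KEY in the environment.
assistant = Assistant(
    llm=OllamaTools(model="llama3"),
    tools=[TavilyTools()],
    show_tool_calls=True,
)

assistant.print_response("Whats happening in the US?", markdown=True)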

slashtechno commented 4 months ago

EDIT: Created #998 as I think this is caused by something different.

It seems that when using DuckDuckGo with Ollama or OllamaTools, an error similar to the one below may occur:

 😎 User : Weather in New York City
WARNING  Could not run function duckduckgo_search()
ERROR    1 validation error for duckduckgo_search
         query
           Missing required argument [type=missing_argument, input_value=ArgsKwargs(()), input_type=ArgsKwargs]
             For further information visit https://errors.pydantic.dev/2.7/v/missing_argument
         Traceback (most recent call last):
           File "...\phidata-testing\.venv\Lib\site-packages\phi\tools\function.py", line 150, in execute
             self.result = self.function.entrypoint(**self.arguments)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
           File "...\phidata-testing\.venv\Lib\site-packages\pydantic\validate_call_decorator.py", line  
         59, in wrapper_function
             return validate_call_wrapper(*args, **kwargs)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
           File "...\phidata-testing\.venv\Lib\site-packages\pydantic\_internal\_validate_call.py", line 
         81, in __call__
             res = self.__pydantic_validator__.validate_python(pydantic_core.ArgsKwargs(args, kwargs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
         pydantic_core._pydantic_core.ValidationError: 1 validation error for duckduckgo_search
         query
           Missing required argument [type=missing_argument, input_value=ArgsKwargs(()), input_type=ArgsKwargs]
             For further information visit https://errors.pydantic.dev/2.7/v/missing_argument
╭──────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ Message  │ Weather in New York City                                                                                                   │
├──────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
│ Response │ I apologize for not being able to provide the weather information in my previous response. It seems that there was an      │
│ (8.9s)   │ error with the "duckduckgo_search" function.                                                                               │
│          │                                                                                                                            │
│          │ Let me try again! Here's a new attempt:                                                                                    │
│          │                                                                                                                            │
│          │ I hope this one is successful!Here's a summary of the weather forecast for New York City:                                  │
│          │                                                                                                                            │
│          │ The current weather conditions in New York City are fair with a temperature of 70°F (21°C) and humidity of 47%. The wind   │
│          │ speed is moderate, coming from the northeast at 3 mph. There is no active alert.                                           │
│          │                                                                                                                            │
│          │ For the next few days, there is a chance of showers this afternoon, followed by showers likely tonight. Tomorrow will be   │
│          │ mostly cloudy with a high temperature of 72°F (22°C) and a low of 61°F (16°C).                                             │
│          │                                                                                                                            │
│          │ Additionally, you can check the hourly weather forecast or the extended forecast for more detailed information.            │
│          │                                                                                                                            │
│          │ Source: National Weather Service and AccuWeather.                                                                          │
│          │                                                                                                                            │
│          │ I hope this helps!                                                                                                         │
╰──────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
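
The validation error above suggests the model returned a <tool_call> with an empty arguments object, so the required query parameter never reaches the tool. A minimal standalone sketch of that failure mode, using pydantic's validate_call directly with a stand-in function rather than phidata's internals:

from pydantic import validate_call

@validate_call
def duckduckgo_search(query: str, max_results: int = 5) -> str:
    # Stand-in for the real tool entrypoint, which is wrapped the same way.
    return f"searching for {query!r}"

arguments = {}  # what the tool effectively receives when the model omits `query`
duckduckgo_search(**arguments)
# Raises pydantic ValidationError: 1 validation error for duckduckgo_search
#   query: Missing required argument [type=missing_argument, ...]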

Here's the code I'm using:

from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo
from phi.llm.ollama import OllamaTools

model = "llama3"

assistant = Assistant(
    llm=OllamaTools(model=model),
    tools=[DuckDuckGo(search=True, news=True)],
    show_tool_calls=False,
    read_chat_history=True,
    read_tool_call_history=True,
)
assistant.cli_app(markdown=False)