assafelovic / gpt-researcher

LLM-based autonomous agent that does comprehensive online research on any given topic
https://gptr.dev
Apache License 2.0
14.25k stars 1.86k forks

Adding support for Ollama (both LLM and embeddings) #527

Closed gschmutz closed 4 months ago

gschmutz commented 4 months ago

Hi

This pull request implements the OllamaProvider class, following the coding patterns defined in OpenAIProvider.
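
A rough sketch of the idea (the class shape, parameter names, and the ChatOllama usage here are illustrative assumptions rather than the exact merged code):

import os

from langchain_community.chat_models import ChatOllama


class OllamaProvider:
    """Hypothetical sketch of a provider mirroring the OpenAIProvider pattern."""

    def __init__(self, model: str, temperature: float, max_tokens: int):
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens  # kept for interface parity; not passed to ChatOllama in this sketch
        # The Ollama endpoint is taken from the environment, e.g. http://localhost:11434
        self.base_url = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
        self.llm = ChatOllama(
            model=self.model,
            temperature=self.temperature,
            base_url=self.base_url,
        )

    async def get_chat_response(self, messages, stream=False, websocket=None):
        # Non-streaming path only; the real provider also needs to handle streaming.
        output = await self.llm.ainvoke(messages)
        return output.content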

I have tested the changes using the local client and the examples/sample_report.py program with simple queries, and it works with the Llama3 model.

Thanks, Guido

gschmutz commented 4 months ago

I have changed the prompt in the same way as in this PR: https://github.com/assafelovic/gpt-researcher/pull/526

assafelovic commented 4 months ago

I went ahead and tested it, and verified it all works. Merging and releasing tomorrow! @gschmutz

arsaboo commented 4 months ago

@assafelovic Can you confirm your config?

I tried with the following config (note that OPENAI_API_KEY is not optional):

LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://192.168.2.162:11434 #also tried with HTTP://host.docker.internal:11434
OPENAI_API_KEY=OLLAMA
EMBEDDING_PROVIDER=ollama
FAST_LLM_MODEL=llama3:8b-instruct-q5_0
SMART_LLM_MODEL=llama3-chatqa:70b
OLLAMA_EMBEDDING_MODEL=snowflake-arctic-embed:l

But I get the following error:

ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 240, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 151, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 373, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 96, in app
    await wrap_app_handling_exceptions(app, session)(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 94, in app
    await func(session)
  File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 348, in app
    await dependant.call(**values)
  File "/usr/src/app/backend/server.py", line 53, in websocket_endpoint
    report = await manager.start_streaming(task, report_type, report_source, websocket)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/backend/websocket_manager.py", line 57, in start_streaming
    report = await run_agent(task, report_type, report_source, websocket)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/backend/websocket_manager.py", line 75, in run_agent
    report = await researcher.run()
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/backend/report_type/basic_report/basic_report.py", line 18, in run
    await researcher.conduct_research()
  File "/usr/src/app/gpt_researcher/master/agent.py", line 94, in conduct_research
    context = await self.get_context_by_search(self.query)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/gpt_researcher/master/agent.py", line 161, in get_context_by_search
    sub_queries = await get_sub_queries(query, self.role, self.cfg, self.parent_query, self.report_type)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/gpt_researcher/master/functions.py", line 95, in get_sub_queries
    response = await create_chat_completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/gpt_researcher/utils/llm.py", line 76, in create_chat_completion
    response = await provider.get_chat_response(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/gpt_researcher/llm_provider/openai/openai.py", line 61, in get_chat_response
    output = await self.llm.ainvoke(messages)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 191, in ainvoke
    llm_result = await self.agenerate_prompt(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 609, in agenerate_prompt
    return await self.agenerate(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 569, in agenerate
    raise exceptions[0]
  File "/usr/local/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 754, in _agenerate_with_cache
    result = await self._agenerate(
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 623, in _agenerate
    response = await self.async_client.create(messages=message_dicts, **params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1181, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1790, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1493, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/openai/_base_client.py", line 1584, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: OLLAMA. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

gschmutz commented 4 months ago

Thanks a lot for resolving the conflicts and merging it :-)

gschmutz commented 4 months ago

@arsaboo I will test it with your settings today.

gschmutz commented 4 months ago
  File "/usr/src/app/gpt_researcher/llm_provider/openai/openai.py", line 61, in get_chat_response
    output = await self.llm.ainvoke(messages)

Hi @arsaboo, I have tested with the latest code base and it works for me. In your setup it does not look like you are using the Ollama implementation, as you can see from the following line in the stack trace (it is using openai.py and not ollama.py):

  File "/usr/src/app/gpt_researcher/llm_provider/openai/openai.py", line 61, in get_chat_response
    output = await self.llm.ainvoke(messages)

Do you run in a container or directly on your machine? Windows, Mac or Linux?

I have tested it locally on my Mac with Ollama also installed directly on the Mac. These are the environment variables (there is no need to set OPENAI_API_KEY):

export LLM_PROVIDER=ollama
export EMBEDDING_PROVIDER=ollama
export OLLAMA_BASE_URL=http://192.168.1.102:11434

export FAST_LLM_MODEL=llama3
export SMART_LLM_MODEL=llama3

export OLLAMA_EMBEDDING_MODEL=mxbai-embed-large

export TAVILY_API_KEY=tvly-XXXX

Run it with the app:

uvicorn main:app --reload

or with the sample_report.py example:

python examples/sample_report.py

gschmutz commented 4 months ago

@assafelovic I think I could have done better by providing meaningful default settings for the Ollama-specific environment variables when they are not specified. I will do another PR for that.
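
A minimal sketch of what such defaults might look like (the concrete values here, e.g. the standard local Ollama endpoint, are assumptions and not necessarily what the follow-up PR will use):

import os

# Hypothetical defaults for the Ollama-related settings; the follow-up PR may
# choose different values or resolve them inside the config class instead.
OLLAMA_BASE_URL = os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434")
FAST_LLM_MODEL = os.environ.get("FAST_LLM_MODEL", "llama3")
SMART_LLM_MODEL = os.environ.get("SMART_LLM_MODEL", "llama3")
OLLAMA_EMBEDDING_MODEL = os.environ.get("OLLAMA_EMBEDDING_MODEL", "mxbai-embed-large")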

arsaboo commented 4 months ago

I'm using the Docker container on macOS. Are any changes required to the Docker Compose file? I'm using the default one in the repo.

arsaboo commented 4 months ago

Here's the Docker Compose file:

version: '3'
services:
  gpt-researcher:
    image: kramer1346/gpt-researcher
    build: ./
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      TAVILY_API_KEY: ${TAVILY_API_KEY}
      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
      OPENAI_BASE_URL: ${OPENAI_BASE_URL}
    ports:
      - 8001:8000
    restart: always
    extra_hosts:
      - "host.docker.internal:host-gateway"

Here's my .env file:

TAVILY_API_KEY=REDACTED
LANGCHAIN_API_KEY=REDACTED

# Ollama Config
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://host.docker.internal:11434/
OPENAI_API_KEY=OLLAMA
EMBEDDING_PROVIDER=ollama
FAST_LLM_MODEL=llama3:8b-instruct-q5_0
SMART_LLM_MODEL=llama3-chatqa:70b
OLLAMA_EMBEDDING_MODEL=snowflake-arctic-embed:l

DOC_PATH=./my-docs

arsaboo commented 4 months ago

@gschmutz I updated my docker compose file to include the additional env variables:

services:
  gpt-researcher:
    image: kramer1346/gpt-researcher
    build: ./
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      TAVILY_API_KEY: ${TAVILY_API_KEY}
      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
      OPENAI_BASE_URL: ${OPENAI_BASE_URL}
      LLM_PROVIDER: ${LLM_PROVIDER}
      OLLAMA_BASE_URL: ${OLLAMA_BASE_URL}
      EMBEDDING_PROVIDER: ${EMBEDDING_PROVIDER}
      FAST_LLM_MODEL: ${FAST_LLM_MODEL}
      SMART_LLM_MODEL: ${SMART_LLM_MODEL}
      OLLAMA_EMBEDDING_MODEL: ${OLLAMA_EMBEDDING_MODEL}
    ports:
      - 8001:8000
    restart: always
    extra_hosts:
      - "host.docker.internal:host-gateway"

And now I am seeing the following error (the models are available on my Ollama instance):

/usr/local/lib/python3.11/site-packages/langchain/_api/module_import.py:120: LangChainDeprecationWarning: Importing OllamaEmbeddings from langchain.embeddings is deprecated. Please replace deprecated imports:
>> from langchain.embeddings import OllamaEmbeddings
with new imports of:
>> from langchain_community.embeddings import OllamaEmbeddings
  warn_deprecated(
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 240, in run_asgi
    result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 151, in __call__
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 373, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 96, in app
    await wrap_app_handling_exceptions(app, session)(scope, receive, send)
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 94, in app
    await func(session)
  File "/usr/local/lib/python3.11/site-packages/fastapi/routing.py", line 348, in app
    await dependant.call(**values)
  File "/usr/src/app/backend/server.py", line 53, in websocket_endpoint
    report = await manager.start_streaming(task, report_type, report_source, websocket)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/backend/websocket_manager.py", line 57, in start_streaming
    report = await run_agent(task, report_type, report_source, websocket)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/backend/websocket_manager.py", line 75, in run_agent
    report = await researcher.run()
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/backend/report_type/basic_report/basic_report.py", line 18, in run
    await researcher.conduct_research()
  File "/usr/src/app/gpt_researcher/master/agent.py", line 94, in conduct_research
    context = await self.get_context_by_search(self.query)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/gpt_researcher/master/agent.py", line 161, in get_context_by_search
    sub_queries = await get_sub_queries(query, self.role, self.cfg, self.parent_query, self.report_type)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/gpt_researcher/master/functions.py", line 95, in get_sub_queries
    response = await create_chat_completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/gpt_researcher/utils/llm.py", line 67, in create_chat_completion
    ProviderClass = get_provider(llm_provider)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/app/gpt_researcher/utils/llm.py", line 32, in get_provider
    raise Exception("LLM provider not found.")
Exception: LLM provider not found.

gschmutz commented 4 months ago

@arsaboo The Docker image in the repo does not yet contain the changes for Ollama; the PR was only merged yesterday. That's why you get the "LLM provider not found" error. So if you want to run it containerized, you have to build the image from the latest source code first.
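
For context, the provider lookup in gpt_researcher/utils/llm.py that raises this error presumably behaves roughly like the sketch below (placeholder classes, not the project's actual source):

# Simplified sketch inferred from the tracebacks in this thread; the placeholder
# classes stand in for the real provider implementations.
class OpenAIProvider: ...
class OllamaProvider: ...  # only present in builds that include this PR


def get_provider(llm_provider: str):
    providers = {
        "openai": OpenAIProvider,
        "ollama": OllamaProvider,
    }
    if llm_provider not in providers:
        # An image built before the Ollama support was merged has no "ollama"
        # entry here, so LLM_PROVIDER=ollama ends up raising this exception.
        raise Exception("LLM provider not found.")
    return providers[llm_provider]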

arsaboo commented 4 months ago

Yes, I'm building the images myself using the docker compose up command. That's how I've used it in the past.

gschmutz commented 4 months ago

@arsaboo OK, last week I also had it running with my own Docker image (I did not know about the "official" one). Let me retest it this evening.

arsaboo commented 4 months ago

@gschmutz Just so you know, I have Ollama running on the Mac as a regular app and GPT Researcher on the same Mac in a docker container.

arsaboo commented 4 months ago

@gschmutz any luck?