SciPhi-AI / R2R

The all-in-one solution for RAG. Build, scale, and deploy state of the art Retrieval-Augmented Generation applications
https://r2r-docs.sciphi.ai/
MIT License

ValueError: Embedding provider ollama not supported #1525

Closed: giacomo-zema closed this issue 38 minutes ago

giacomo-zema commented 1 week ago

Describe the bug

I am running the local LLM config with Ollama and the r2r container keeps restarting. The terminal window from which I launched the application hangs waiting for all services to become healthy. The containers can be seen restarting repeatedly with docker ps, and docker logs container_id shows that ollama is not recognized as an embedding provider.
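The check itself is just the two standard Docker commands (the container id is a placeholder):

```bash
# Confirm the r2r container is in a restart loop, then read its startup log.
docker ps                    # STATUS flips between "Up ..." and "Restarting (...)"
docker logs <container_id>   # prints the startup error shown under "Expected behavior" below
```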

To Reproduce

  1. install ollama + pull llama-3.1 and mxbai-embed-large + ollama serve
  2. pip install r2r
  3. add /home/user/.local/bin to path
  4. r2r serve --docker --config-name=local_llm
  5. docker logs container_id
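The same steps as a copy-pasteable shell session (the Ollama model tags llama3.1 and mxbai-embed-large, the user-local bin path, and the container id are assumptions based on the steps above; adjust for your environment):

```bash
# 1. Pull the models and start the Ollama server (Ollama itself installed per ollama.com)
ollama pull llama3.1
ollama pull mxbai-embed-large
ollama serve                  # stays in the foreground; run in a separate terminal

# 2-3. Install the r2r CLI and make sure its entry point is on PATH
pip install r2r
export PATH="$PATH:$HOME/.local/bin"

# 4. Launch the Docker stack with the local LLM config
r2r serve --docker --config-name=local_llm

# 5. Inspect the restarting container
docker logs <container_id>
```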

Expected behavior

ERROR:    Application startup failed. Exiting.
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_CONFIG_NAME: local_llm
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_CONFIG_PATH:
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_PROJECT_NAME: r2r_default
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_POSTGRES_HOST: postgres
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_POSTGRES_DBNAME: postgres
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_POSTGRES_PORT: 5432
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_POSTGRES_PASSWORD: postgres
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_PROJECT_NAME: None
INFO:     Started server process [7]
INFO:     Waiting for application startup.
2024-10-29 22:19:55,508 - ERROR - root - Error creating providers, pipes, or pipelines: Embedding provider ollama not supported
ERROR:    Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan
    async with self.lifespan_context(app) as maybe_state:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/core/main/app_entry.py", line 22, in lifespan
    r2r_app = await create_r2r_app(
              ^^^^^^^^^^^^^^^^^^^^^
  File "/app/core/main/app_entry.py", line 62, in create_r2r_app
    return await builder.build()
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/core/main/assembly/builder.py", line 196, in build
    providers = await self._create_providers(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/core/main/assembly/builder.py", line 144, in _create_providers
    return await factory.create_providers(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/core/main/assembly/factory.py", line 236, in create_providers
    or self.create_embedding_provider(
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/core/main/assembly/factory.py", line 188, in create_embedding_provider
    raise ValueError(
ValueError: Embedding provider ollama not supported

ERROR:    Application startup failed. Exiting.
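For completeness, the image tag behind the failing container can be captured with a generic Docker listing (no R2R-specific names assumed):

```bash
# Record which image/tag the restarting container is running, plus its status.
docker ps -a --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'
```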


Additional context

The same error occurs if I feed it the example config from the local LLM guide on the R2R website here.

romefort commented 1 week ago

Same issue here on macOS.

emrgnt-cmplxty commented 38 minutes ago

This was resolved in a recent release, thanks for flagging!
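For anyone landing on this issue later, a minimal sketch of picking up the fix, assuming the standard pip upgrade path and the same launch command used in the report (the exact release containing the fix is not named in this thread):

```bash
# Upgrade the r2r CLI to a release that includes the fix, then relaunch the stack.
pip install --upgrade r2r
# If the old containers are still restarting, stop them first (docker ps / docker stop <id>).
r2r serve --docker --config-name=local_llm
```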