Describe the bug
I am running the local LLM config with Ollama, and the r2r container keeps restarting. The terminal window from which I launched the application is stuck waiting for all services to become healthy. The containers can be seen restarting repeatedly with `docker ps`, and `docker logs container_id` reveals that R2R does not recognize ollama as an embedding provider.
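Roughly how I observed this (the status filter and `-f` are just conveniences; `container_id` is a placeholder for the actual ID on your machine):

```shell
# List containers currently stuck in a restart loop
docker ps --filter "status=restarting"

# Tail the failing container's logs to see the startup error
docker logs -f container_id
```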
To Reproduce
1. Install Ollama, pull `llama-3.1` and `mxbai-embed-large`, and start `ollama serve` (verified reachable as shown below)
2. `pip install r2r`
3. Add `/home/user/.local/bin` to `PATH`
4. `r2r serve --docker --config-name=local_llm`
5. `docker logs container_id`
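For anyone reproducing this, a quick way to confirm Ollama is actually up and the models are pulled before step 4 (`/api/tags` and port 11434 are Ollama's defaults; adjust if you changed them):

```shell
# Confirm the Ollama server is reachable on its default port
curl http://localhost:11434/api/tags

# Confirm both models are present locally
ollama list
```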
Expected behavior
All services come up healthy and the server starts. Instead, `docker logs container_id` shows:
```
ERROR: Application startup failed. Exiting.
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_CONFIG_NAME: local_llm
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_CONFIG_PATH:
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_PROJECT_NAME: r2r_default
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_POSTGRES_HOST: postgres
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_POSTGRES_DBNAME: postgres
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_POSTGRES_PORT: 5432
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_POSTGRES_PASSWORD: postgres
2024-10-29 22:19:55,505 - INFO - root - Environment R2R_PROJECT_NAME: None
INFO: Started server process [7]
INFO: Waiting for application startup.
2024-10-29 22:19:55,508 - ERROR - root - Error creating providers, pipes, or pipelines: Embedding provider ollama not supported
ERROR: Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 693, in lifespan
    async with self.lifespan_context(app) as maybe_state:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/core/main/app_entry.py", line 22, in lifespan
    r2r_app = await create_r2r_app(
              ^^^^^^^^^^^^^^^^^^^^^
  File "/app/core/main/app_entry.py", line 62, in create_r2r_app
    return await builder.build()
           ^^^^^^^^^^^^^^^^^^^^^
  File "/app/core/main/assembly/builder.py", line 196, in build
    providers = await self._create_providers(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/core/main/assembly/builder.py", line 144, in _create_providers
    return await factory.create_providers(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/core/main/assembly/factory.py", line 236, in create_providers
    or self.create_embedding_provider(
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/core/main/assembly/factory.py", line 188, in create_embedding_provider
    raise ValueError(
ValueError: Embedding provider ollama not supported
ERROR: Application startup failed. Exiting.
```
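For what it's worth, the traceback suggests the provider check in factory.py rejects anything outside a fixed set. A minimal sketch of what that branch presumably looks like (illustrative names only, not the actual R2R source):

```python
# Hypothetical reconstruction of the failing check in factory.py; the set
# below is a guess -- the point is only that "ollama" isn't in it.
SUPPORTED_EMBEDDING_PROVIDERS = {"litellm", "openai"}

def create_embedding_provider(provider: str):
    if provider not in SUPPORTED_EMBEDDING_PROVIDERS:
        # This is the branch the container hits with the local_llm config
        raise ValueError(f"Embedding provider {provider} not supported")
    ...  # build and return the actual provider object

try:
    create_embedding_provider("ollama")
except ValueError as e:
    print(e)  # Embedding provider ollama not supported
```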
Desktop:
OS: Ubuntu 22.04
Additional context
It fails in exactly the same way if I feed it the example config from the local LLM guide on the R2R website (here).
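From memory, the relevant section of that example config looks roughly like the sketch below; the key names are reproduced from memory of the guide and may not match the published example exactly:

```toml
# Approximate [embedding] section from the local LLM guide (from memory;
# key names and values may differ from the published example)
[embedding]
provider = "ollama"
base_model = "mxbai-embed-large"
base_dimension = 1024
```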