SciPhi-AI / R2R

The Elasticsearch for RAG. Build, scale, and deploy state of the art Retrieval-Augmented Generation applications
https://r2r-docs.sciphi.ai/
MIT License
3.25k stars 238 forks

ollama | Name or service not known #749

Open NajiAboo opened 1 month ago

NajiAboo commented 1 month ago

Describe the bug
I tried OpenAI first and R2R works fine, but when I switch to Ollama it does not return an answer. I have added the detailed errors below.

To Reproduce Steps to reproduce the behavior:

  1. Cloned and installed the project:
     git clone https://github.com/SciPhi-AI/R2R.git
     cd R2R
     pip install .
     docker-compose up -d

  2. Modified compose.ollama.yaml a little to get it to run (yaml attached):
     sudo docker-compose -f compose.ollama.yaml up

  3. Pulled the models:
     docker exec -it r2r-ollama-1 ollama pull llama3
     docker exec -it r2r-ollama-1 ollama pull mxbai-embed-large

  4. To test Ollama, ran the following command, which gives a proper answer:
     curl http://localhost:11434/api/chat -d '{ "model": "llama3", "messages": [ { "role": "user", "content": "why is the sky blue?" } ] }'

  5. Ran the following code to test ingestion:
     from r2r.main import R2RClient
     client = R2RClient("http://localhost:8000/")
     client.ingest_files(["test.txt"])

This returns the following error:

{'results': {'processed_documents': [], 'failed_documents': ["Document 'test.txt': Embedding generation failed for one or more embeddings."], 'skipped_documents': []}}

In the console, I can see the following error:

2024-07-23 05:51:11,549 - ERROR - r2r.providers.embeddings.ollama - Error getting embedding: [Errno -2] Name or service not known
2024-07-23 05:51:11,557 - ERROR - r2r.base.pipeline.base_pipeline - Pipeline failed with error: [Errno -2] Name or service not known
2024-07-23 05:51:11,558 - ERROR - r2r.base.pipeline.base_pipeline - Pipeline failed with error: [Errno -2] Name or service not known
2024-07-23 05:51:11,558 - ERROR - r2r.main.services.retrieval_service - Pipeline error: [Errno -2] Name or service not known
INFO: 172.28.0.1:44808 - "POST /v1/rag HTTP/1.1" 500 Internal Server Error
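For what it is worth, "[Errno -2] Name or service not known" is a DNS resolution failure: the process running R2R cannot resolve the hostname configured for the Ollama API. A quick sanity check (a minimal sketch, not part of R2R) is to test whether the hostname resolves from inside the R2R container:

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True if `host` resolves via DNS or the hosts file, else False."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        # This is the "[Errno -2] Name or service not known" failure above
        return False

# "localhost" should resolve anywhere; swap in the host from your Ollama
# base URL (e.g. "host.docker.internal") when running inside the container.
print(can_resolve("localhost"))
```

If this prints False for the Ollama host when run inside the R2R container, the problem is the container's network configuration rather than the embedding code itself.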

Expected behavior
As per the documentation, it should work properly.

Screenshots
When I switch the environment to CONFIG_NAME = default, it works fine; with CONFIG_NAME = local_ollama I get the above error.

Desktop (please complete the following information):

compose.ollama.zip


emrgnt-cmplxty commented 1 month ago

Thanks for sharing. Could you try using the CLI to deploy your Docker system?

r2r --config-name=local_neo4j_kg serve --docker --docker-ext-neo4j --docker-ext-ollama

NajiAboo commented 1 month ago

I tried with the command below:

sudo r2r --config-name=local_ollama serve --docker --docker-ext-ollama

and I set the following env values:

export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=password
export POSTGRES_HOST=localhost
export POSTGRES_PORT=5432
export POSTGRES_DBNAME=postgres
export POSTGRES_VECS_COLLECTION=o1ama1

I am getting the error below:

[SQL: SELECT pg_advisory_unlock(123456789)] (Background on this error at: https://sqlalche.me/e/20/2j85)
2024-07-23 14:36:19,878 - INFO - r2r.main.app_entry - Environment CONFIG_NAME: local_ollama
2024-07-23 14:36:19,878 - INFO - r2r.main.app_entry - Environment CONFIG_PATH:
2024-07-23 14:36:19,878 - INFO - r2r.main.app_entry - Environment CLIENT_MODE: False
2024-07-23 14:36:19,878 - INFO - r2r.main.app_entry - Environment BASE_URL: None
2024-07-23 14:36:19,878 - INFO - r2r.main.app_entry - Environment PIPELINE_TYPE: qna
2024-07-23 14:36:19,879 - INFO - r2r.base.providers.prompt - Initializing PromptProvider with config extra_fields={} provider='r2r' default_system_name='default_system' default_task_name='default_rag' file_path=None.
2024-07-23 14:36:19,879 - INFO - r2r.base.providers.embedding - Initializing EmbeddingProvider with config extra_fields={'text_splitter': {'type': 'recursive_character', 'chunk_size': 512, 'chunk_overlap': 20}} provider='ollama' base_model='mxbai-embed-large' base_dimension=1024 rerank_model=None rerank_dimension=None rerank_transformer_type=None batch_size=32 prefixes=None add_title_as_prefix=True.
2024-07-23 14:36:19,879 - INFO - r2r.providers.embeddings.ollama - Using Ollama API base URL: http://host.docker.internal:11434
2024-07-23 14:36:19,914 - INFO - r2r.base.providers.llm - Initializing LLM provider with config: extra_fields={} provider='litellm' generation_config=GenerationConfig(model='ollama/llama3', temperature=0.1, top_p=1.0, top_k=100, max_tokens_to_sample=1024, stream=False, functions=None, skip_special_tokens=False, stop_token=None, num_beams=1, do_sample=True, generate_with_chat=False, add_generation_kwargs={}, api_base=None) concurrency_limit=1 max_retries=2 initial_backoff=1 max_backoff=60
2024-07-23 14:36:19,981 - INFO - r2r.base.providers.database - Initializing DatabaseProvider with config extra_fields={} provider='postgres'.
2024-07-23 14:36:19,981 - INFO - r2r.providers.database.postgres - Using TCP connection
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
    cursor.execute(statement, parameters)
psycopg2.errors.SyntaxError: syntax error at or near "{"
LINE 2: CREATE OR REPLACE FUNCTION hybridsearch${CONFIG_NA...
                                               ^

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/r2r/providers/database/postgres.py", line 198, in _create_hybrid_search_function
    sess.execute(text(hybrid_search_function))
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2351, in execute
    return self._execute_internal(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2245, in _execute_internal
    result = conn.execute(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1418, in execute
    return meth(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 515, in _execute_on_connection
    return connection._execute_clauseelement(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1640, in _execute_clauseelement
    ret = self._execute_context(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
    return self._exec_single_context(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1986, in _exec_single_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2353, in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at or near "{"
LINE 2: CREATE OR REPLACE FUNCTION hybridsearch${CONFIG_NA...
                                               ^

[SQL: CREATE OR REPLACE FUNCTION hybridsearch${CONFIG_NAME:-vecs}(
    query_text TEXT,
    query_embedding VECTOR(512),
    match_limit INT,
    full_text_weight FLOAT = 1,
    semantic_weight FLOAT = 1,
    rrf_k INT = 50,
    filter_condition JSONB = NULL
) RETURNS SETOF vecs."${CONFIG_NAME:-vecs}"
LANGUAGE sql
AS $$
WITH full_text AS (
    SELECT id, ROW_NUMBER() OVER (ORDER BY ts_rank(to_tsvector('english', metadata->>'text'), websearch_to_tsquery(query_text)) DESC) AS rank_ix
    FROM vecs."${CONFIG_NAME:-vecs}"
    WHERE to_tsvector('english', metadata->>'text') @@ websearch_to_tsquery(query_text)
      AND (filter_condition IS NULL OR (metadata @> filter_condition))
    ORDER BY rank_ix
    LIMIT LEAST(match_limit, 30) * 2
),
semantic AS (
    SELECT id, ROW_NUMBER() OVER (ORDER BY vec <#> query_embedding) AS rank_ix
    FROM vecs."${CONFIG_NAME:-vecs}"
    WHERE filter_condition IS NULL OR (metadata @> filter_condition)
    ORDER BY rank_ix
    LIMIT LEAST(match_limit, 30) * 2
)
SELECT vecs."${CONFIG_NAME:-vecs}".*
FROM full_text
FULL OUTER JOIN semantic ON full_text.id = semantic.id
JOIN vecs."${CONFIG_NAME:-vecs}" ON vecs."${CONFIG_NAME:-vecs}".id = COALESCE(full_text.id, semantic.id)
ORDER BY COALESCE(1.0 / (rrf_k + full_text.rank_ix), 0.0) * full_text_weight + COALESCE(1.0 / (rrf_k + semantic.rank_ix), 0.0) * semantic_weight DESC
LIMIT LEAST(match_limit, 30);
$$;]
(Background on this error at: https://sqlalche.me/e/20/f405)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
    cursor.execute(statement, parameters)
psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/bin/uvicorn", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 418, in main
    run(
  File "/usr/local/lib/python3.10/site-packages/uvicorn/main.py", line 587, in run
    server.run()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 62, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/local/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/local/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/server.py", line 69, in serve
    config.load()
  File "/usr/local/lib/python3.10/site-packages/uvicorn/config.py", line 458, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/local/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/local/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/app/r2r/main/app_entry.py", line 78, in <module>
    app = r2r_app(
  File "/app/r2r/main/app_entry.py", line 50, in r2r_app
    wrapper = R2RExecutionWrapper(
  File "/app/r2r/main/execution.py", line 56, in __init__
    self.app = R2R(config=config)
  File "/app/r2r/main/r2r.py", line 36, in __init__
    built = builder.build()
  File "/app/r2r/main/assembly/builder.py", line 214, in build
    providers = provider_factory(self.config).create_providers(
  File "/app/r2r/main/assembly/factory.py", line 259, in create_providers
    or self.create_database_provider(
  File "/app/r2r/main/assembly/factory.py", line 98, in create_database_provider
    database_provider = PostgresDBProvider(
  File "/app/r2r/providers/database/postgres.py", line 1126, in __init__
    super().__init__(config)
  File "/app/r2r/base/providers/database.py", line 198, in __init__
    self.vector: VectorDatabaseProvider = self._initialize_vector_db()
  File "/app/r2r/providers/database/postgres.py", line 1129, in _initialize_vector_db
    return PostgresVectorDBProvider(
  File "/app/r2r/providers/database/postgres.py", line 132, in __init__
    self._create_hybrid_search_function()
  File "/app/r2r/providers/database/postgres.py", line 202, in _create_hybrid_search_function
    sess.execute(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2351, in execute
    return self._execute_internal(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2245, in _execute_internal
    result = conn.execute(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1418, in execute
    return meth(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/sql/elements.py", line 515, in _execute_on_connection
    return connection._execute_clauseelement(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1640, in _execute_clauseelement
    ret = self._execute_context(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1846, in _execute_context
    return self._exec_single_context(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1986, in _exec_single_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2353, in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1967, in _exec_single_context
    self.dialect.do_execute(
  File "/usr/local/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 924, in do_execute
    cursor.execute(statement, parameters)
sqlalchemy.exc.InternalError: (psycopg2.errors.InFailedSqlTransaction) current transaction is aborted, commands ignored until end of transaction block

[SQL: SELECT pg_advisory_unlock(123456789)] (Background on this error at: https://sqlalche.me/e/20/2j85)
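For context on the syntax error: the literal string ${CONFIG_NAME:-vecs} is reaching Postgres unexpanded. That is shell-style parameter expansion with a default, which Postgres does not understand, so it appears the SQL template was executed before the placeholder was substituted. A minimal sketch of the kind of substitution that would be needed (the helper name expand_defaults is hypothetical, not R2R's actual code):

```python
import os
import re

def expand_defaults(template: str) -> str:
    """Expand shell-style ${VAR:-default} placeholders from the environment."""
    pattern = re.compile(r"\$\{(\w+):-([^}]*)\}")
    # Fall back to the default when the variable is unset or empty,
    # matching the shell's ":-" semantics.
    return pattern.sub(
        lambda m: os.environ.get(m.group(1)) or m.group(2), template
    )

os.environ.pop("CONFIG_NAME", None)
print(expand_defaults("CREATE OR REPLACE FUNCTION hybridsearch_${CONFIG_NAME:-vecs}"))
# → CREATE OR REPLACE FUNCTION hybridsearch_vecs
```

With CONFIG_NAME unset, the placeholder collapses to the default "vecs"; when it is set, the environment value is used instead.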

emrgnt-cmplxty commented 1 month ago

Hi,

You do NOT need to set the POSTGRES environment variables. The Docker setup is configured to handle these automatically, so this may be the source of your issue. If you wish to connect to a local Postgres instance running outside of Docker, then the approach you are attempting above is probably incorrect, as you would need to point Docker outside of its internal network.

Let us know if this resolves things. If not, could you try cleaning your Docker images and containers with r2r docker-down, and perhaps even docker system prune? Afterwards, you might have more luck.
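One more detail worth checking: the logs above show R2R using http://host.docker.internal:11434 as the Ollama base URL, and on Linux that hostname is not defined inside containers by default, which would produce exactly a "Name or service not known" error. A sketch of the Compose override that commonly fixes this on Linux (the service name r2r is an assumption here; match it to your actual compose file):

```yaml
services:
  r2r:
    extra_hosts:
      # Map host.docker.internal to the Docker host's gateway IP on Linux
      - "host.docker.internal:host-gateway"
```

Docker Desktop on macOS and Windows provides host.docker.internal automatically, so this entry is only needed on Linux hosts.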