Canner / WrenAI

🚀 Open-source SQL AI Agent for Text-to-SQL. Make Text2SQL Easy! 🙌
https://www.getwren.ai/oss
GNU Affero General Public License v3.0
1.67k stars · 146 forks

After successfully importing data and forming relationships in Wren AI, I encounter two errors: "failed to create asking task" and "failed to deploy changes". These errors prevent me from proceeding with using the imported data. #516

Open khyatimorparia opened 1 month ago

khyatimorparia commented 1 month ago

After successfully importing data and forming relationships in Wren AI, I encounter two errors: "failed to create asking task" and "failed to deploy changes". These errors prevent me from proceeding with using the imported data.

To Reproduce
Steps to reproduce the behavior:
1. Import data into Wren AI (successfully completed)
2. Form relationships between tables (successfully completed)
3. Attempt to deploy changes
4. See error "failed to deploy changes"
5. Attempt to create an asking task
6. See error "failed to create asking task"

Expected behavior
After importing data and forming relationships, I expect to be able to deploy changes successfully and create asking tasks without errors.

Screenshots
[Screenshot 2024-07-14 at 8.41.11 PM]

Container Logs
Forcing deployment: {'errors': [{'locations': [{'line': 1, 'column': 36}], 'path': ['deploy'], 'message': 'No project found', 'extensions': {'code': 'INTERNAL_SERVER_ERROR', 'message': 'No project found', 'shortMessage': 'Internal server error'}}], 'data': None}

OS: macOS
Browser: Chrome

Wren AI Information
- Version: 0.7.2
- LLM_PROVIDER: azure_openai_llm
- GENERATION_MODEL: gpt-4-32k

Additional context
The data import process and relationship formation were successful. The system is able to initialize and connect to Azure OpenAI services. The error "No project found" appears in the logs, which might be related to the deployment failure. The system is using DuckDB for data storage and querying. The Parquet files are successfully loaded into the system.

cyyeh commented 1 month ago

@khyatimorparia could you share the log files with us?

khyatimorparia commented 1 month ago

Hi! I am getting the error logs below. I tried upgrading my certificate, but I have been getting the same error for two days. It would be great if you could help me. Thanks.

~ % docker-compose -f ~/.wrenai/docker-compose.yaml up -d

WARN[0000] /Users/kmorparia/.wrenai/docker-compose.yaml: version is obsolete

[+] Running 1/1
 ✔ wren-ai-service Pulled                       0.9s
[+] Running 8/8
 ✔ Network wren_default Created                 0.0s
 ✔ Network wren_wren Created                    0.0s
 ✔ Container wren-bootstrap-1 Started           0.2s
 ✔ Container wren-qdrant-1 Started              0.2s
 ✔ Container wren-ibis-server-1 Started         0.2s
 ✔ Container wren-wren-engine-1 Started         0.3s
 ✔ Container wren-wren-ai-service-1 Started     0.4s
 ✔ Container wren-wren-ui-1 Started             0.4s

~ % docker logs wren-wren-ai-service-1

Waiting for wren-ai-service to start...

INFO: Started server process [7]

INFO: Waiting for application startup.

2024-07-15 21:50:32,718 - wren-ai-service - INFO - Initializing providers... (utils.py:64)

2024-07-15 21:50:33,758 - wren-ai-service - INFO - Registering provider: qdrant (loader.py:64)

2024-07-15 21:50:34,097 - wren-ai-service - INFO - Registering provider: azure_openai_embedder (loader.py:64)

2024-07-15 21:50:34,102 - wren-ai-service - INFO - Registering provider: ollama_embedder (loader.py:64)

2024-07-15 21:50:34,103 - wren-ai-service - INFO - Registering provider: openai_embedder (loader.py:64)

2024-07-15 21:50:34,104 - wren-ai-service - INFO - Registering provider: wren_ui (loader.py:64)

2024-07-15 21:50:34,105 - wren-ai-service - INFO - Registering provider: wren_ibis (loader.py:64)

2024-07-15 21:50:34,115 - wren-ai-service - INFO - Registering provider: azure_openai_llm (loader.py:64)

2024-07-15 21:50:34,120 - wren-ai-service - INFO - Registering provider: ollama_llm (loader.py:64)

2024-07-15 21:50:34,121 - wren-ai-service - INFO - Registering provider: openai_llm (loader.py:64)

2024-07-15 21:50:34,121 - wren-ai-service - INFO - Using AzureOpenAI LLM: gpt-4-32k (azure_openai.py:131)

2024-07-15 21:50:34,121 - wren-ai-service - INFO - Using AzureOpenAI LLM with API base: https://isi-oai-gen5-east-us2-sbx.openai.azure.com (azure_openai.py:132)

2024-07-15 21:50:34,121 - wren-ai-service - INFO - Using AzureOpenAI LLM with API version: 2023-03-15-preview (azure_openai.py:133)

2024-07-15 21:50:34,121 - wren-ai-service - INFO - Using Azure OpenAI Embedding Model: text-embedding-ada-002 (azure_openai.py:211)

2024-07-15 21:50:34,121 - wren-ai-service - INFO - Using Azure OpenAI Embedding API Base: https://isi-oai-gen5-east-us2-sbx.openai.azure.com (azure_openai.py:212)

2024-07-15 21:50:34,121 - wren-ai-service - INFO - Using Azure OpenAI Embedding API Version: 2023-03-15-preview (azure_openai.py:215)

2024-07-15 21:50:34,122 - wren-ai-service - INFO - Using Qdrant Document Store with Embedding Model Dimension: 1536 (qdrant.py:215)

2024-07-15 21:50:34,237 - wren-ai-service - INFO - Using Qdrant Document Store with Embedding Model Dimension: 1536 (qdrant.py:215)

2024-07-15 21:50:34,382 - wren-ai-service - INFO - Using Qdrant Document Store with Embedding Model Dimension: 1536 (qdrant.py:215)

2024-07-15 21:50:34,438 - wren-ai-service - INFO - Creating Azure OpenAI generator with model kwargs: {'temperature': 0, 'n': 1, 'max_tokens': 1000, 'response_format': {'type': 'json_object'}} (azure_openai.py:146)

2024-07-15 21:50:34,486 - wren-ai-service - INFO - Using Qdrant Document Store with Embedding Model Dimension: 1536 (qdrant.py:215)

2024-07-15 21:50:34,533 - wren-ai-service - INFO - Using Qdrant Document Store with Embedding Model Dimension: 1536 (qdrant.py:215)

2024-07-15 21:50:34,727 - wren-ai-service - INFO - Using Qdrant Document Store with Embedding Model Dimension: 1536 (qdrant.py:215)

2024-07-15 21:50:34,815 - wren-ai-service - INFO - Using Qdrant Document Store with Embedding Model Dimension: 1536 (qdrant.py:215)

2024-07-15 21:50:34,863 - wren-ai-service - INFO - Creating Azure OpenAI generator with model kwargs: {'temperature': 0, 'n': 1, 'max_tokens': 1000, 'response_format': {'type': 'json_object'}} (azure_openai.py:146)

2024-07-15 21:50:34,913 - wren-ai-service - INFO - Creating Azure OpenAI generator with model kwargs: {'temperature': 0, 'n': 1, 'max_tokens': 1000, 'response_format': {'type': 'json_object'}} (azure_openai.py:146)

2024-07-15 21:50:34,961 - wren-ai-service - INFO - Creating Azure OpenAI generator with model kwargs: {'temperature': 0, 'n': 1, 'max_tokens': 1000, 'response_format': {'type': 'json_object'}} (azure_openai.py:146)

2024-07-15 21:50:35,009 - wren-ai-service - INFO - Creating Azure OpenAI generator with model kwargs: {'temperature': 0, 'n': 1, 'max_tokens': 1000, 'response_format': {'type': 'json_object'}} (azure_openai.py:146)

Langfuse client is disabled. No observability data will be sent.

2024-07-15 21:50:35,077 - wren-ai-service - INFO - LANGFUSE_ENABLE: false (utils.py:138)

2024-07-15 21:50:35,077 - wren-ai-service - INFO - LANGFUSE_HOST: https://cloud.langfuse.com (utils.py:139)

INFO: Application startup complete.

INFO: Uvicorn running on http://0.0.0.0:5556 (Press CTRL+C to quit)

wren-ai-service has started.

INFO: 192.168.96.6:55172 - "POST /v1/semantics-preparations HTTP/1.1" 200 OK

2024-07-15 21:50:36,668 - wren-ai-service - INFO - MDL: {"schema":"public","catalog":"wrenai","models":[{"name":"characters","columns":[{"name":"description","type":"VARCHAR","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"description"}},{"name":"eye_color","type":"VARCHAR","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"eye_color"}},{"name":"gender","type":"VARCHAR","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"gender"}},{"name":"hair_color","type":"VARCHAR","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"hair_color"}},{"name":"height","type":"DOUBLE","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"height"}},{"name":"homeworld","type":"VARCHAR","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"homeworld"}},{"name":"id","type":"BIGINT","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"id"}},{"name":"name","type":"VARCHAR","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"name"}},{"name":"skin_color","type":"VARCHAR","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"skin_color"}},{"name":"species","type":"VARCHAR","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"species"}},{"name":"weight","type":"DOUBLE","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"weight"}},{"name":"year_born","type":"DOUBLE","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"year_born"}},{"name":"year_died","type":"DOUBLE","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"year_died"}},{"name":"events","type":"events","properties":null,"relationship":"CharactersEye_colorEventsDescription","isCalculated":false,"notNull":false},{"name":"events_description","type":"events","properties":null,"relationship":"EventsDescriptionCharactersHair_color","isCalculated":false,"notNull":false}],"tableReference":{"catalog":"memory","schema":"main","table":"characters"},"refSql":null,"cached":0,"refreshTime":null,"properties":{"displayName":"characters"},"primaryKey":""},{"name":"events","columns":[{"name":"date","type":"VARCHAR","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"date"}},{"name":"description","type":"VARCHAR","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"description"}},{"name":"event_name","type":"VARCHAR","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"event_name"}},{"name":"id","type":"BIGINT","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"id"}},{"name":"location","type":"VARCHAR","isCalculated":0,"notNull":0,"expression":"","properties":{"displayName":"location"}},{"name":"characters","type":"characters","properties":null,"relationship":"CharactersEye_colorEventsDescription","isCalculated":false,"notNull":false},{"name":"characters_hair_color","type":"characters","properties":null,"relationship":"EventsDescriptionCharactersHair_color","isCalculated":false,"notNull":false}],"tableReference":{"catalog":"memory","schema":"main","table":"events"},"refSql":null,"cached":0,"refreshTime":null,"properties":{"displayName":"events"},"primaryKey":""}],"relationships":[{"name":"CharactersEye_colorEventsDescription","models":["characters","events"],"joinType":"MANY_TO_ONE","condition":"\"characters\".eye_color = \"events\".description","properties":{}},{"name":"EventsDescriptionCharactersHair_color","models":["events","characters"],"joinType":"ONE_TO_MANY","condition":"\"events\".description = \"characters\".hair_color","properties":{}}],"views":[]} (ask.py:123)

2024-07-15 21:50:36,669 - wren-ai-service - INFO - Ask Indexing pipeline is running... (indexing.py:441)

2024-07-15 21:50:36,671 - wren-ai-service - INFO - Ask Indexing pipeline is clearing old documents... (indexing.py:44)

indexing view into the historical view question store: 0it [00:00, ?it/s]

2024-07-15 21:50:36,683 - wren-ai-service - INFO - Ask Indexing pipeline is writing new documents... (indexing.py:123)

indexing ddl commands into the ddl store: 100%|██████████| 2/2 [00:00<00:00, 28826.83it/s]

INFO: 192.168.96.6:55182 - "GET /v1/semantics-preparations/6f050ecc18842c6ab26e997ef050c2793439141a/status HTTP/1.1" 200 OK

2024-07-15 21:50:36,685 - wren-ai-service - INFO - Running Async OpenAI document embedder with documents: [] (azure_openai.py:174)

Calculating embeddings: 0it [00:00, ?it/s]

2024-07-15 21:50:36,686 - wren-ai-service - INFO - Running Async OpenAI document embedder with documents: [Document(id=0, content: '
/* {"alias":"characters"} */
CREATE TABLE characters (
-- {"alias":"description"}
description V...', meta: {'id': '0'}), Document(id=1, content: '
/* {"alias":"events"} */
CREATE TABLE events (
-- {"alias":"date"}
date VARCHAR,
-- {"alias":...', meta: {'id': '1'})] (azure_openai.py:174)

Calculating embeddings:   0%|          | 0/1 [00:00<?, ?it/s]
Calling QdrantDocumentStore.write_documents() with empty list

INFO: 192.168.96.6:55190 - "GET /v1/semantics-preparations/6f050ecc18842c6ab26e997ef050c2793439141a/status HTTP/1.1" 200 OK

INFO: 192.168.96.6:50404 - "GET /v1/semantics-preparations/6f050ecc18842c6ab26e997ef050c2793439141a/status HTTP/1.1" 200 OK

Calculating embeddings: 0%| | 0/1 [00:03<?, ?it/s]


Oh no an error! Need help with Hamilton?

Join our slack and ask for help! https://join.slack.com/t/hamilton-opensource/shared_invite/zt-1bjs72asx-wcUTgH7q7QX1igiQ5bbdcg


2024-07-15 21:50:40,356 - wren-ai-service - ERROR - ask pipeline - Failed to prepare semantics: Connection error. (ask.py:132)

Traceback (most recent call last):

File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1522, in _request

response = await self._client.send(

           ^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1661, in send

response = await self._send_handling_auth(

           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1689, in _send_handling_auth

response = await self._send_handling_redirects(

           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1726, in _send_handling_redirects

response = await self._send_single_request(request)

           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1763, in _send_single_request

response = await transport.handle_async_request(request)

           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 373, in handle_async_request

resp = await self._pool.handle_async_request(req)

       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request

raise exc from None

File "/app/.venv/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request

response = await connection.handle_async_request(

           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/httpcore/_async/connection.py", line 99, in handle_async_request

raise exc

File "/app/.venv/lib/python3.12/site-packages/httpcore/_async/connection.py", line 76, in handle_async_request

stream = await self._connect(request)

         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/httpcore/_async/connection.py", line 154, in _connect

stream = await stream.start_tls(**kwargs)

         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/httpcore/_backends/anyio.py", line 80, in start_tls

raise exc

File "/app/.venv/lib/python3.12/site-packages/httpcore/_backends/anyio.py", line 71, in start_tls

ssl_stream = await anyio.streams.tls.TLSStream.wrap(

             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/anyio/streams/tls.py", line 132, in wrap

await wrapper._call_sslobject_method(ssl_object.do_handshake)

File "/app/.venv/lib/python3.12/site-packages/anyio/streams/tls.py", line 140, in _call_sslobject_method

result = func(*args)

         ^^^^^^^^^^^

File "/usr/local/lib/python3.12/ssl.py", line 917, in do_handshake

self._sslobj.do_handshake()

ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):

File "/src/web/v1/services/ask.py", line 124, in prepare_semantics

await self._pipelines["indexing"].run(prepare_semantics_request.mdl)

File "/src/utils.py", line 118, in wrapper_timer

return await process(func, *args, **kwargs)

       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/src/utils.py", line 102, in process

return await func(*args, **kwargs)

       ^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 182, in async_wrapper

self._handle_exception(observation, e)

File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 422, in _handle_exception

raise e

File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 180, in async_wrapper

result = await func(*args, **kwargs)

         ^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/src/pipelines/indexing/indexing.py", line 442, in run

return await self._pipe.execute(

       ^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/hamilton/experimental/h_async.py", line 173, in execute

raise e

File "/app/.venv/lib/python3.12/site-packages/hamilton/experimental/h_async.py", line 164, in execute

outputs = await self.raw_execute(final_vars, overrides, display_graph, inputs=inputs)

          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/hamilton/experimental/h_async.py", line 136, in raw_execute

return await await_dict_of_tasks(task_dict)

       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/hamilton/experimental/h_async.py", line 20, in await_dict_of_tasks

coroutines_gathered = await asyncio.gather(*coroutines)

                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/hamilton/experimental/h_async.py", line 33, in process_value

return await val

       ^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/hamilton/experimental/h_async.py", line 68, in new_fn

fn_kwargs = await await_dict_of_tasks(task_dict)

            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/hamilton/experimental/h_async.py", line 20, in await_dict_of_tasks

coroutines_gathered = await asyncio.gather(*coroutines)

                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/hamilton/experimental/h_async.py", line 33, in process_value

return await val

       ^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/hamilton/experimental/h_async.py", line 70, in new_fn

return await fn(**fn_kwargs)

       ^^^^^^^^^^^^^^^^^^^^^

File "/src/utils.py", line 118, in wrapper_timer

return await process(func, *args, **kwargs)

       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/src/utils.py", line 102, in process

return await func(*args, **kwargs)

       ^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 182, in async_wrapper

self._handle_exception(observation, e)

File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 422, in _handle_exception

raise e

File "/app/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 180, in async_wrapper

result = await func(*args, **kwargs)

         ^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/src/pipelines/indexing/indexing.py", line 348, in embed_ddl

return await ddl_embedder.run(documents=convert_to_ddl["documents"])

       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/backoff/_async.py", line 151, in retry

ret = await target(*args, **kwargs)

      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/src/providers/embedder/azure_openai.py", line 180, in run

embeddings, meta = await self._embed_batch(

                   ^^^^^^^^^^^^^^^^^^^^^^^^

File "/src/providers/embedder/azure_openai.py", line 141, in _embed_batch

response = await self.client.embeddings.create(

           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/openai/resources/embeddings.py", line 215, in create

return await self._post(

       ^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1790, in post

return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)

       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1493, in request

return await self._request(

       ^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1546, in _request

return await self._retry_request(

       ^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1615, in _retry_request

return await self._request(

       ^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1546, in _request

return await self._retry_request(

       ^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1615, in _retry_request

return await self._request(

       ^^^^^^^^^^^^^^^^^^^^

File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1556, in _request

raise APIConnectionError(request=request) from err

openai.APIConnectionError: Connection error.

INFO: 192.168.96.6:50414 - "GET /v1/semantics-preparations/6f050ecc18842c6ab26e997ef050c2793439141a/status HTTP/1.1" 200 OK

Forcing deployment: {'data': {'deploy': {'status': 'FAILED', 'error': 'Wren AI Error: deployment hash:6f050ecc18842c6ab26e997ef050c2793439141a, Failed to prepare semantics: Connection error.'}}}
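
The failure in the traceback is a local certificate-verification error (CERTIFICATE_VERIFY_FAILED: unable to get local issuer certificate) when the service calls the Azure OpenAI endpoint. A minimal sketch for checking this outside Wren AI, assuming the endpoint shown in the logs and the same network path (for example, a corporate proxy that re-signs TLS traffic):

# Show the issuer of the certificate actually presented by the endpoint; an
# unexpected corporate/proxy issuer points to a CA missing from the container.
openssl s_client -connect isi-oai-gen5-east-us2-sbx.openai.azure.com:443 \
  -servername isi-oai-gen5-east-us2-sbx.openai.azure.com </dev/null 2>/dev/null \
  | openssl x509 -noout -issuer -subject

# Reproduce the verification step itself; "curl: (60) SSL certificate problem"
# corresponds to the CERTIFICATE_VERIFY_FAILED error in the traceback above.
curl -v https://isi-oai-gen5-east-us2-sbx.openai.azure.com -o /dev/null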


cyyeh commented 1 month ago

Hi @khyatimorparia, you should run Wren AI by executing our launcher. It's much simpler. Is there any reason preventing you from using it?

https://github.com/Canner/WrenAI/releases/tag/0.7.1

khyatimorparia commented 1 month ago

Hi! I want to use it with DuckDB Parquet files. I have an Azure OpenAI API key, so I followed the steps for a custom model. Could you please help me with how to go about this?
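
For context, the custom-model setup keeps the AI settings in ~/.wrenai/.env.ai (the file passed with --env-file in the custom-LLM compose command later in this thread). A rough sketch of what it could look like for this setup; only LLM_PROVIDER, GENERATION_MODEL, the azure_openai_embedder provider name, and the API version come from this thread, and every other variable name is a placeholder to check against the custom-LLM guide:

# Sketch only: names other than LLM_PROVIDER and GENERATION_MODEL are
# placeholders, not confirmed Wren AI settings.
LLM_PROVIDER=azure_openai_llm
GENERATION_MODEL=gpt-4-32k
# placeholder name; the provider id azure_openai_embedder appears in the service logs
EMBEDDER_PROVIDER=azure_openai_embedder
# placeholder names for the Azure credentials, endpoint, and API version
LLM_AZURE_OPENAI_API_KEY=<your-azure-openai-key>
LLM_AZURE_OPENAI_API_BASE=https://<your-resource>.openai.azure.com
LLM_AZURE_OPENAI_VERSION=2023-03-15-preview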


cyyeh commented 1 month ago

@khyatimorparia would you mind joining our Discord server? https://discord.gg/5DvshJqG8Z

Let's have a quick chat, and you can share your screen with me. It's much more efficient.

Basically, I found you started Wren AI using this command: docker-compose -f ~/.wrenai/docker-compose.yaml up -d

However, if you are using a custom LLM, you should use another command: docker-compose -f ~/.wrenai/docker-compose.yaml -f ~/.wrenai/docker-compose.llm.yaml --env-file ~/.wrenai/.env --env-file ~/.wrenai/.env.ai up -d

If you run Wren AI using the launcher, you don't need to memorize these! So I highly suggest you run Wren AI using the launcher, as the documentation also suggests.
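
One way to sanity-check that both compose files and both env files are being merged as intended, before starting the stack, is to render the resolved configuration. A sketch using standard docker-compose behaviour with the same paths as above:

# Render the merged configuration without starting anything; the
# wren-ai-service section should reflect the values from ~/.wrenai/.env.ai.
docker-compose \
  -f ~/.wrenai/docker-compose.yaml \
  -f ~/.wrenai/docker-compose.llm.yaml \
  --env-file ~/.wrenai/.env \
  --env-file ~/.wrenai/.env.ai \
  config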

khyatimorparia commented 1 month ago

[Screenshot 2024-07-16 at 8.52.05 AM.png] I'm not able to do that; I have been trying for too long.


cyyeh commented 1 month ago

@khyatimorparia how about this? https://discord.gg/xYVaBMas

khyatimorparia commented 1 month ago

It says Invalid Invite


cyyeh commented 1 month ago

@khyatimorparia would you mind joining our Discord server? https://discord.gg/5DvshJqG8Z

Let's have a quick chat, and you can share your screen with me. It's much more efficient.

Basically, I found you started Wren AI using this command: docker-compose -f ~/.wrenai/docker-compose.yaml up -d

However, if you are using a custom LLM, you should use another command: docker-compose -f ~/.wrenai/docker-compose.yaml -f ~/.wrenai/docker-compose.llm.yaml --env-file ~/.wrenai/.env --env-file ~/.wrenai/.env.ai up -d

If you run Wren AI using the launcher, you don't need to memorize these! So I highly suggest you run Wren AI using the launcher, as the documentation also suggests.

@khyatimorparia could you try the method I mentioned above? Thanks.

khyatimorparia commented 1 month ago

Sure, I tried both. Can you tell me when you have a spot available? I can share my screen with you.


cyyeh commented 1 month ago

@khyatimorparia hi, what's your timezone there?

khyatimorparia commented 1 month ago

It's PST/PDT.


cyyeh commented 1 month ago

@khyatimorparia sorry for the long wait. How about a date around 8-10 pm your time? I will send you a meeting invitation link.