Canner / WrenAI


Internal server error: returning * - table project has no column named configurations #387

Open · andre-scheinwald opened this issue 3 months ago

andre-scheinwald commented 3 months ago

Describe the bug I'm trying to run the tool from the GitHub repo based on these instructions under "How to start". I've created and edited my .env.local file to run off port 8080, since 3000 is occupied by the .exe launcher. I've ensured that the Docker container the .exe creates and runs is stopped. When I run docker-compose --env-file .env.local up, I navigate to localhost:8080 and can select the database type. On the next screen, for connection setup, I put in all my credentials and select Next. That is when I encounter the "Internal server error...returning * - table project has no column named configurations" error message.

I've tested this on two different postgres databases and get the same error message.

To Reproduce Steps to reproduce the behavior:

  1. Create .env.local with personal credentials.
  2. Change port to 8080.
  3. Run docker-compose --env-file .env.local up (see the sketch after this list).
  4. Go through the setup process on the "Connect the data source" screen.
  5. Select Next.
  6. "Internal server error...returning * - table project has no column named configurations" occurs.

Expected behavior The data source connects properly.

Screenshots (screenshot of the error message attached)

Additional Information I don't have a project table in my database. And considering this is an insert (returning *), I'm assuming this table is internal to WrenAI?
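For what it's worth, a quick way to confirm the table isn't in my own database (the connection string is a placeholder):

psql "postgresql://user:pass@host:5432/mydb" -c "SELECT table_name FROM information_schema.tables WHERE table_name = 'project';"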

wwwy3y3 commented 2 months ago

@andre-scheinwald

Hmm, I suspect you're using the volume created by the .exe launcher. Could you check whether you're using the same volume? If so, could you delete the containers and the volume altogether and try again, since you're going to use docker-compose yourself?
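Roughly, the cleanup could look like this (the "wren" name comes from COMPOSE_PROJECT_NAME in the launcher's .env below; the exact volume name varies, so inspect the list first):

# inspect the containers and volumes the launcher created
docker ps -a --filter "name=wren"
docker volume ls

# remove the launcher's containers, then its volume
docker rm -f $(docker ps -aq --filter "name=wren")
docker volume rm <volume name from the list above>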

andre-scheinwald commented 2 months ago

@wwwy3y3 Deleting the containers and volumes got me through the connection steps, thanks!

Basically, what I'm trying to do is have the Docker version connect to one database and the launcher connect to a different database, so that I can retain work in both.

Now that I have the connections and relationships working in the Docker and launcher versions, I've tried asking questions in both. The launcher version works fine; however, the Docker version returns a "Failed to create asking task" error. I haven't tried running them simultaneously; when I get this error message in the Docker version, I've ensured that the launcher container is stopped.

Do you have any thoughts on this?

wwwy3y3 commented 2 months ago

The launcher version works fine; however, the Docker version returns a "Failed to create asking task" error. I haven't tried running them simultaneously; when I get this error message in the Docker version, I've ensured that the launcher container is stopped.

I'm guessing you're putting an invalid OpenAI API key in the Docker .env file. You could copy the .env file the launcher's been using at ~/.wrenai/.env.

It would be something like the following:

COMPOSE_PROJECT_NAME=wren
PLATFORM=linux/amd64

# service port
WREN_ENGINE_PORT=8080
WREN_ENGINE_SQL_PORT=7432
WREN_AI_SERVICE_PORT=5555

# version
# CHANGE THIS TO THE LATEST VERSION
WREN_PRODUCT_VERSION=0.3.5
WREN_ENGINE_VERSION=0.4.5
WREN_AI_SERVICE_VERSION=0.4.0
WREN_UI_VERSION=0.5.7
WREN_BOOTSTRAP_VERSION=0.1.4

# keys
# CHANGE THIS TO YOUR OPENAI API KEY
OPENAI_API_KEY=<API KEY> # -> replace with your own key
OPENAI_GENERATION_MODEL=gpt-4o

# SQL Protocol
PG_USERNAME=wren-user
# PG_PASSWORD will be replaced with random string by launcher
PG_PASSWORD=...

# user id (uuid v4)
USER_UUID=f5bc7ee1-0a0b-4ebe-b02f-533d7bb5421a

# for other services
POSTHOG_API_KEY=...
POSTHOG_HOST=https://app.posthog.com
TELEMETRY_ENABLED=true

# the port exposes to the host
# OPTIONAL: change the port if you have a conflict
HOST_PORT=3001
AI_SERVICE_FORWARD_PORT=5556

Related discussion on discord: https://discord.com/channels/1227143286951514152/1243764365887537162
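If it helps, the copy itself is just (assuming your compose setup reads .env.local from the repo root, as you described):

grep OPENAI_API_KEY ~/.wrenai/.env    # grab the launcher's working key
cp ~/.wrenai/.env .env.local          # or copy the whole file, then re-apply your port changes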

andre-scheinwald commented 2 months ago

I wish it were the OpenAI API key! I found two different issues.

  1. There appears to be a limit on the number of tables + columns? I finally noticed that when asking a question, not only would I get the "Failed to create asking task" error message, but the terminal was also printing the following error:

Failed to prepare semantics: Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens, however you requested 10018 tokens (10018 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.", 'type': 'invalid_request_error', 'param': None, 'code': None}}

I'm only loading 16 tables, but the column count in a number of these tables is over 100. Decreasing the number of tables used resolves the token error. It also appears that I can't delete tables from the modeling tab and deploy the changes; I have to reset under Settings and start over.

  2. The next error was related to an invalid model name. I thought this had worked in the past, but maybe not. In my .env.local I had written "gpt-4.0-turbo"; I changed this to "gpt-4-turbo" and can now retrieve results (corrected line sketched below). Would it be possible to add the text for the model options to the .env.example file?
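For anyone hitting the same thing, the corrected line in .env.local (the model name must match OpenAI's published identifier exactly):

# .env.local
OPENAI_GENERATION_MODEL=gpt-4-turbo   # not "gpt-4.0-turbo"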
cyyeh commented 2 months ago

@andre-scheinwald

  1. Thanks for raising the issue. "It also appears that I can't delete tables from the modeling tab and deploy the changes; I have to reset under Settings and start over." -> Do you mean the deployment failed, so you couldn't make changes?

  2. Thanks for your suggestions!