Closed: matthieuganivet closed this issue 4 months ago
+1 Facing the same issue. For me the API http://localhost:3000/api/graphql is returning 400 with the following response:

{
  "errors": [
    {
      "locations": [{ "line": 2, "column": 3 }],
      "path": ["createAskingTask"],
      "message": "Request failed with status code 400",
      "extensions": {
        "code": "INTERNAL_SERVER_ERROR",
        "message": "Request failed with status code 400",
        "shortMessage": "Internal server error"
      }
    }
  ],
  "data": null
}
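For reference, the error payload above can be inspected programmatically rather than eyeballed; a minimal sketch (the payload is copied from the response above; the `summarize_graphql_errors` helper is purely illustrative, not part of WrenAI):

```python
import json

# GraphQL error payload as returned by the WrenAI UI backend (copied from above).
payload = """
{ "errors": [ { "locations": [ { "line": 2, "column": 3 } ],
  "path": [ "createAskingTask" ],
  "message": "Request failed with status code 400",
  "extensions": { "code": "INTERNAL_SERVER_ERROR",
    "message": "Request failed with status code 400",
    "shortMessage": "Internal server error" } } ],
  "data": null }
"""

def summarize_graphql_errors(body: str) -> list[str]:
    """Return one 'path: code - message' line per GraphQL error entry."""
    doc = json.loads(body)
    lines = []
    for err in doc.get("errors", []):
        path = ".".join(err.get("path", []))
        ext = err.get("extensions", {})
        lines.append(f"{path}: {ext.get('code')} - {err.get('message')}")
    return lines

print(summarize_graphql_errors(payload))
# prints ['createAskingTask: INTERNAL_SERVER_ERROR - Request failed with status code 400']
```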
@rohitranjan1991 @matthieuganivet could you share your OS version, Docker version, and WrenAI version with us? Thank you!
@cyyeh I'm able to reproduce this issue by using an OpenAI API key from a free-plan OpenAI account.
Errors displayed in the containers (notice the "You exceeded your current quota..." part):
Calculating embeddings: 0%| | 0/1 [00:00<?, ?it/s]
Calculating embeddings: 0%| | 0/1 [00:03<?, ?it/s]
2024-05-25 04:04:44 wren-ai-service-1 | 2024-05-24 20:04:44,179 - wren-ai-service - ERROR - ask pipeline - Failed to prepare semantics: Error code: 429 - {'error': {'message': 'You exceeded your current quota, please check your plan and billing details. For more information on this error, read the docs: https://platform.openai.com/docs/guides/error-codes/api-errors.', 'type': 'insufficient_quota', 'param': None, 'code': 'insufficient_quota'}} (ask.py:121)
If anyone sees the same error in the UI, let us know whether you also see these error messages in the logs. Also, @rohitranjan1991 @matthieuganivet, could you check whether your accounts are on the free plan?
@matthieuganivet @rohitranjan1991 Another possible cause: please make sure the model is deployed successfully by going to the modeling page; a "Synced" message should appear at the top right of the page. If it isn't showing "Synced", please click the Deploy button. After the model is deployed successfully, you can start asking questions.
Hello, for me, after deleting the containers and the Docker volume wrenai_data altogether and reinstalling everything, it's running fine.
@wwwy3y3 I have a similar issue. I think it's because we are both on unpaid OpenAI accounts.
@creativeson thanks for letting us know.
We're thinking of simply adding API key validation to our launcher, so users will know right at the start why their API key failed to work.
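A launcher-side check could look roughly like the sketch below; this is only an illustration, assuming the standard `https://api.openai.com/v1/models` endpoint (an invalid key returns HTTP 401; an exhausted free-plan key surfaces HTTP 429 with code `insufficient_quota`, matching the log above). The function names are hypothetical, not WrenAI code:

```python
import json
import urllib.error
import urllib.request

def classify_key_error(status: int, body: str) -> str:
    """Map an OpenAI API HTTP error to a user-facing hint for the launcher."""
    if status == 401:
        return "API key is invalid or revoked."
    if status == 429:
        try:
            code = json.loads(body).get("error", {}).get("code")
        except ValueError:
            code = None
        if code == "insufficient_quota":
            return "API key is valid but the account has no remaining quota (check plan/billing)."
        return "Rate limited; retry later."
    return f"Unexpected error (HTTP {status})."

def validate_openai_key(api_key: str) -> str:
    """Probe the models endpoint; return 'ok' or a human-readable hint."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10):
            return "ok"
    except urllib.error.HTTPError as e:
        return classify_key_error(e.code, e.read().decode("utf-8", "replace"))
```

Probing a cheap read-only endpoint at startup catches bad keys early, but note that quota exhaustion may only show up on actual completion/embedding calls, so a 200 from the probe is not a full guarantee.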
Fixed for me by clicking on deploy and syncing the table
+1
For me the http://localhost:3000/api/graphql API is returning status 200, with the following error:

{
  "errors": [
    {
      "locations": [{ "line": 2, "column": 3 }],
      "path": ["createAskingTask"],
      "message": "Cannot read properties of null (reading 'hash')",
      "extensions": {
        "code": "INTERNAL_SERVER_ERROR",
        "message": "Cannot read properties of null (reading 'hash')",
        "shortMessage": "Internal server error"
      }
    }
  ],
  "data": null
}
+1
For me the http://localhost:3000/api/graphql API is returning status 200, with the following error:

{
  "errors": [
    {
      "locations": [{ "line": 2, "column": 3 }],
      "path": ["createAskingTask"],
      "message": "Cannot read properties of null (reading 'hash')",
      "extensions": {
        "code": "INTERNAL_SERVER_ERROR",
        "message": "Cannot read properties of null (reading 'hash')",
        "shortMessage": "Internal server error"
      }
    }
  ],
  "data": null
}
same here, +1
@zfanswer @Ramana-vummenthala could you try re-deploying the model by going to the modeling page and clicking the Deploy button at the top right? If the model is deployed successfully, you will see "Synced" to the left of the Deploy button. Then you can try asking questions again.
Hi,
Just after installing, on first use / first prompt, I got this error: Failed to create asking task.
The OpenAI token is OK and was checked on another system.
I tried restarting and reinstalling twice; I don't really know what to do now.
The full logs are here: