PicoCreator / smol-dev-js

Smol personal AI, for smol incremental tasks in a JS project
MIT License
325 stars 53 forks

Request errors on initial use: Missing valid openai response #11

Open whoabuddy opened 1 year ago

whoabuddy commented 1 year ago

I am really excited about the concept here, but after the initial setup I ran into some errors on Linux Mint 20.3 (Ubuntu Focal).

Steps taken:

Output from terminal:

$ smol-dev-js run
--------------------
🐣 [ai]: hi its me, the ai dev ! you said you wanted
         here to help you with your project, which is a ....
--------------------
CityCoins are cryptocurrencies that allow you to support your favorite cities while earning Stacks and Bitcoin.
--------------------
🐣 [ai]: What would you like me to do? (PS: this is not a chat system, there is no chat memory prior to this point)
✔ [you]:  … Suggest something please
🐣 [ai]: (node:147273) ExperimentalWarning: The Fetch API is an experimental feature. This feature could change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":7905,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"}]}
## Recieved error ...
[invalid_request_error] undefined
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":7873,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"},{"role":"user","content":"Please update your answer, and respond with only a single JSON object, in the requested format. No apology is needed."}]}
## Recieved error ...
[invalid_request_error] undefined
Error: Missing valid openai response, please check warn logs for more details
    at getChatCompletion (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.promiseGenerator (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)
Last Completion null
Error: Missing valid openai response, please check warn logs for more details
    at getChatCompletion (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.promiseGenerator (/home/whoabuddy/.nvm/versions/node/v18.12.1/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)

Also noticed Recieved is misspelled in the error above :microscope:

How can I check the warn logs that were mentioned?
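As an aside, the "[object Object]" entries in the request payload above are a telltale sign that message objects were concatenated into a string instead of being serialized. A minimal sketch of that failure mode (illustrative only, not the actual smol-dev-js code):

```javascript
// An array of chat-message objects, as built up by a prompt pipeline.
const promptLines = [
  { role: "user", content: "first line" },
  { role: "user", content: "second line" },
];

// Broken: Array.prototype.join coerces each object via toString(),
// which yields "[object Object]" for plain objects.
const broken = promptLines.join("\n");
// broken === "[object Object]\n[object Object]"

// One possible fix: extract the content field before joining.
const fixed = promptLines.map((m) => m.content).join("\n");
// fixed === "first line\nsecond line"
```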

jojonoparat commented 1 year ago

Can you check whether your OpenAI account has access to the gpt-4 model from your API key? List all the models available to your account with curl:

curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"

and check whether the response contains the gpt-4 model.
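If you would rather check from Node, here is a small helper for the same test (the hasGpt4 name is mine, and the fetch usage assumes Node 18+ with the built-in Fetch API):

```javascript
// Given the JSON body of GET https://api.openai.com/v1/models,
// report whether any gpt-4 variant is visible to the API key.
function hasGpt4(modelsResponse) {
  return modelsResponse.data.some((m) => m.id.startsWith("gpt-4"));
}

// Usage sketch (Node 18+):
// const res = await fetch("https://api.openai.com/v1/models", {
//   headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
// });
// console.log(hasGpt4(await res.json()));
```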

rowntreerob commented 1 year ago

+1. I have an OpenAI subscription with an API key and access to a very long list of models, including "gpt-3.5" and "gpt-3.5-turbo".

However, no access to gpt-4.

The prerequisites "get an API key" and "set up a pay-as-you-go account with billing" are NOT enough for gpt-4 to be the configured model.

whoabuddy commented 1 year ago

Thanks @jojonoparat, looking at the list gpt-4 isn't on there. I assumed it would be, but I just signed up for the waitlist. Is that the problem? Could this work with 3.5?

jojonoparat commented 1 year ago

I have not tried it myself, but maybe you could hard-code it in this file by adding or changing a line like this: model = "gpt-3.5-turbo" //<-- your available model name
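A sketch of that kind of override, with a fallback for when the configured model isn't available to the account (function and variable names are hypothetical, not the actual ai-bridge internals):

```javascript
// Hypothetical helper: prefer the configured model, but fall back to
// gpt-3.5-turbo when the API key cannot access it.
function resolveModel(configuredModel, availableModelIds) {
  if (availableModelIds.includes(configuredModel)) {
    return configuredModel;
  }
  return "gpt-3.5-turbo"; // <-- your available model name
}
```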

PicoCreator commented 1 year ago

The current prompts were not designed for the gpt-3.5 context size, so they may produce very unexpected behaviours.
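For context: the varying max_tokens values in the logs above (7905, 7873, 5394) look like a remaining-context budget, i.e. the model's context window minus the prompt's token count. A rough illustration (window sizes as documented by OpenAI at the time; the function name is mine):

```javascript
// Approximate context window sizes (in tokens) for the models discussed.
const CONTEXT_LIMIT = {
  "gpt-4": 8192,
  "gpt-3.5-turbo": 4096,
  "gpt-3.5-turbo-16k-0613": 16384,
};

// Tokens left for the completion after the prompt is accounted for.
function remainingTokens(model, promptTokens) {
  return CONTEXT_LIMIT[model] - promptTokens;
}
// e.g. an 8192-token gpt-4 window with a 287-token prompt leaves 7905,
// matching the first max_tokens value in the logs.
```

This is why a prompt sized for gpt-4's 8k window can overflow a plain gpt-3.5-turbo 4k window.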

whoabuddy commented 1 year ago

thank you both for the quick replies

@jojonoparat that makes sense; looking there, you could also use a prompt (per L35) that would override the config :sunglasses:

@PicoCreator in that case I'll leave it as-is and hope for access to Claude or GPT-4 soon!

It would be helpful if there was:

rowntreerob commented 1 year ago

While waiting for 4.0, I tried the override (hard-coded to "gpt-3.5-turbo-16k-0613").

No change, except the literal value in the error message...

- ## Recieved error ...
- [invalid_request_error] undefined
- ## Unable to handle prompt for ...
- {"model":"gpt-3.5-turbo-16k-0613","temperature":0,"max_tokens":3923,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"},{"role":"user","content":"Please update your answer, and respond with only a single JSON object, in the requested
liam-k commented 1 year ago

I’m having the same problem. The weird thing is, the prompt works at first and GPT correctly lays out the plan. But once I actually confirm the plan and it tries to execute it, it breaks. So yeah, it seems to be ai-bridge.

🐣 [ai]: Working on the plan ...
🐣 [ai]: Studying 0 dependencies (in parallel)
🐣 [ai]: Performing any required modules install / file moves / deletion
🐣 [ai]: Studying 0 dependencies (awaiting in parallel)
🐣 [ai]: Preparing summaries for smol-er sub-operations ...
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":5394,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"user","content":"You are an AI developer...."}]}

## Recieved error ...
[tokens] undefined
Error: Missing valid openai response, please check warn logs for more details
    at getChatCompletion (/Users/liam/.asdf/installs/nodejs/19.2.0/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.promiseGenerator (/Users/liam/.asdf/installs/nodejs/19.2.0/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)