whoabuddy opened this issue 1 year ago
Can you check whether your OpenAI account has access to the GPT-4 model from your API key? List all the available models for your account with curl:
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
and check whether the response contains the gpt-4 model.
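If you'd rather check from Node (the same runtime smol-dev-js runs on), here's a minimal sketch assuming Node 18+ with the built-in fetch, saved as an ES module (e.g. `check-models.mjs`):

```js
// check-models.mjs - list the models visible to this API key and look for gpt-4
const res = await fetch("https://api.openai.com/v1/models", {
  headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
});
if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
const { data } = await res.json();
const ids = data.map((m) => m.id).sort();
console.log(ids.join("\n"));
console.log(ids.includes("gpt-4") ? "\ngpt-4: available" : "\ngpt-4: NOT available");
```

Run it with `OPENAI_API_KEY=sk-... node check-models.mjs`.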
+1. I have a subscription at OpenAI, an API key, and access to a very long list of models including gpt-3.5 and gpt-3.5-turbo.
However, no access to gpt-4.
The prereqs "get an API key" and "pay-as-you-go account with billing" are NOT enough for GPT-4 to be the configured model.
Thanks @jojonoparat, looking at the list gpt-4 isn't on there. I assumed it would be, but I've just signed up for the waitlist. Is that the problem? Could this work with 3.5?
I have not tried it myself, but maybe you could hard-code it in this file by adding or changing a line like this:
model = "gpt-3.5-turbo" // <-- your available model name
The current prompts were not designed for gpt-3.5's context size, so you may get very unexpected behaviour.
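For a sense of why, here is a rough budgeting sketch. The window sizes are the commonly cited 2023 values; the exact accounting inside ai-bridge is an assumption:

```js
// Rough rule: completion budget = context window - tokens already in the prompt.
const CONTEXT_WINDOW = { "gpt-3.5-turbo": 4096, "gpt-4": 8192 };

function completionBudget(model, promptTokens) {
  const window = CONTEXT_WINDOW[model] ?? 4096;
  return window - promptTokens; // negative => the prompt alone overflows
}

// A prompt sized for gpt-4 can exceed gpt-3.5-turbo's entire window:
console.log(completionBudget("gpt-4", 5000));         // 3192, still fits
console.log(completionBudget("gpt-3.5-turbo", 5000)); // -904, already too big
```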
thank you both for the quick replies
@jojonoparat that makes sense, and looking there you could set it per prompt (L35) too, which would override the config :sunglasses:
@PicoCreator in that case I'll leave it as-is and hope for access to Claude or GPT-4 soon!
It would be helpful if there were:
While waiting for 4.0, I tried the override (hard-coded to "gpt-3.5-turbo-16k-0613").
No change except the literal value in the error message...
## Recieved error ...
[invalid_request_error] undefined
## Unable to handle prompt for ...
{"model":"gpt-3.5-turbo-16k-0613","temperature":0,"max_tokens":3923,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"system","content":"You are an assistant, who can only reply in JSON object, reply with a yes (in a param named 'reply') if you understand"},{"role":"assistant","content":"{\"reply\":\"yes\"}"},{"role":"user","content":"[object Object]\n[object Object]\n[object Object]\n[object Object]\n[object Object]"},{"role":"user","content":"Please update your answer, and respond with only a single JSON object, in the requested
I'm having the same problem. The weird thing is, the prompt works at first and GPT correctly lays out the plan. But once I actually confirm the plan and it tries to execute it, it breaks. So yeah, seems to be ai-bridge.
🐣 [ai]: Working on the plan ...
🐣 [ai]: Studying 0 dependencies (in parallel)
🐣 [ai]: Performing any required modules install / file moves / deletion
🐣 [ai]: Studying 0 dependencies (awaiting in parallel)
🐣 [ai]: Preparing summaries for smol-er sub-operations ...
## Unable to handle prompt for ...
{"model":"gpt-4","temperature":0,"max_tokens":5394,"top_p":1,"frequency_penalty":0,"presence_penalty":0,"messages":[{"role":"user","content":"You are an AI developer...."}]}
## Recieved error ...
[tokens] undefined
Error: Missing valid openai response, please check warn logs for more details
at getChatCompletion (/Users/liam/.asdf/installs/nodejs/19.2.0/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/openai/getChatCompletion.js:290:8)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async Object.promiseGenerator (/Users/liam/.asdf/installs/nodejs/19.2.0/lib/node_modules/smol-dev-js/node_modules/ai-bridge/src/AiBridge.js:249:11)
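Since ai-bridge reports `[invalid_request_error] undefined` / `[tokens] undefined` and drops OpenAI's actual message, one way to see the real error text is to replay the logged payload yourself. A sketch, assuming Node 18+ (save as `replay.mjs` and paste the logged "Unable to handle prompt" JSON into `payload`):

```js
// replay.mjs - resend the payload smol-dev-js logged, to read the raw API error
const payload = { /* paste the logged "Unable to handle prompt" JSON here */ };
const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(payload),
});
console.log(res.status, await res.text()); // the full error body, not "undefined"
```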
I am really excited about the concept here, but after the initial setup I ran into some errors on Linux Mint 20.3 (Ubuntu Focal).
Steps taken:
npm i -g smol-dev-js
smol-dev-js setup
smol-dev-js run
Output from terminal:
Also noticed `Recieved` is misspelled in the error above :microscope:
How can I check the warn logs that were mentioned?