Closed TechWithTy closed 4 months ago
@TechWithTy You can get the error `Error Code 429 - Rate limit reached for requests`
with ChatGPT API calls when you hit your assigned rate limit for the OpenAI API or for the model that you're trying to call. This means that you have submitted too many token requests in a short period of time, OR have exceeded the number of requests allowed for your account, OR you don't have access to the model that you're trying to consume as part of the API call.
Refer to the OpenAI API error codes for details.
To resolve this error: check the model that you're calling. It's possible that you're using `gpt-4`, and that model is not provided as part of the OpenAI free tier; it requires a paid plan. You can change the model in NLUX to one that is included in your plan, as shown below:

```tsx
const adapter = useAdapter({
    model: 'gpt-3.5-turbo'
});
```
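Separately, when the 429 is caused by bursts of requests rather than a missing plan, a common workaround is to retry with exponential backoff. Below is a minimal sketch of that pattern; the `withBackoff` helper and the error shape (`err.status === 429`) are illustrative assumptions, not part of the NLUX or OpenAI client API.

```typescript
// Sketch: retry an async call with exponential backoff when it fails
// with HTTP 429. The error shape ({ status: 429 }) is an assumption.
async function withBackoff<T>(
    fn: () => Promise<T>,
    maxRetries = 3,
    baseDelayMs = 500,
): Promise<T> {
    for (let attempt = 0; ; attempt++) {
        try {
            return await fn();
        } catch (err: any) {
            // Give up if it's not a rate-limit error or we're out of retries
            if (err?.status !== 429 || attempt >= maxRetries) throw err;
            // Wait 500 ms, 1000 ms, 2000 ms, ... before the next attempt
            await new Promise((resolve) =>
                setTimeout(resolve, baseDelayMs * 2 ** attempt));
        }
    }
}
```

You would wrap the call that hits the OpenAI API in `withBackoff` so transient rate-limit errors are retried instead of surfacing immediately.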
I hope this helps.
Hi @salmenus, it is confirmed that I have a paid plan and access to gpt-4. I have not made any API calls this month.
Hi @TechWithTy, have you managed to get it working using Next.js?
No, I have not. We are looking to build this from scratch, unfortunately.
@jspm2013 @TechWithTy I'll try to create a fully working example on Next.js (and fix any errors that I may encounter).
I'll keep you posted via this thread
@TechWithTy @jspm2013 I was able to run NLUX with Next.js and OpenAI (free account).
I recorded the steps here: https://youtu.be/oqZ8pw2C7Yw
The latest version of `@nlux/react` on NPM is v0.10.5.
@TechWithTy @jspm2013 Can you give it a second try using the latest version of NLUX?
Hi, and thanks for getting in touch. I'll give it a try and come back to you.
Yes @salmenus, I was able to get it to work as well; the issue was that I did not have any API credits. However, there is another security issue: the NLUX OpenAI adapter connects to OpenAI directly from the browser, which is not safe because the API key would be exposed.
Glad to hear that you managed to get it working @TechWithTy
Yes. Good call about the OpenAI adapter. It is only recommended for testing and locally hosted web apps. We mention that in the docs: https://nlux.dev/learn/adapters/open-ai/overview
Users still can implement custom adapters to connect to whatever they want.
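One way to keep the key off the client is to route requests through a server-side endpoint (for example, a Next.js API route) and have the browser talk only to that endpoint. Here's a rough sketch of the server-side part; the function name, request shape, and model are illustrative assumptions, not the NLUX adapter API, and the `fetch` implementation is injected so the function is easy to test.

```typescript
// Hypothetical server-side proxy: the browser calls this function's
// endpoint instead of OpenAI directly, so the API key never leaves the
// server. In practice the key would come from a server-side env var.
type FetchLike = (
    url: string,
    init: Record<string, unknown>,
) => Promise<{ json(): Promise<any> }>;

export async function proxyChat(
    userMessage: string,
    apiKey: string, // read server-side, never shipped to the client
    fetchImpl: FetchLike,
): Promise<string> {
    const res = await fetchImpl("https://api.openai.com/v1/chat/completions", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            Authorization: `Bearer ${apiKey}`, // attached on the server only
        },
        body: JSON.stringify({
            model: "gpt-3.5-turbo",
            messages: [{ role: "user", content: userMessage }],
        }),
    });
    const data = await res.json();
    return data.choices[0].message.content;
}
```

A custom NLUX adapter on the client would then POST the user's message to this route and stream or return the reply, with no credentials in browser code.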
EDIT: I saw you created another issue for this OpenAI adapter. We will continue the discussion there. Much appreciated.
[Screenshots: AI wrapper, rendering modal, and a console error showing `Referrer Policy: strict-origin-when-cross-origin`]