Open othmanelhoufi opened 1 year ago
As of now, the words should be printed one by one as they come through the stream. If you have a fork, are you up to date with main?
I am up to date with the main branch. I think it may be because I changed the request/response function "a little bit" so that it works with the Azure OpenAI API, but I don't see how my changes could have affected the streaming functionality.
Are you sure it actually works?
I have observed the following behavior: when using the app with "yarn dev", the answer is streamed word by word as expected. However, if you start the app in production (e.g. "yarn install && yarn build && yarn start"), this no longer works. I haven't found out what the problem is yet.
OK, I have found the problem. It was due to my NGINX configuration. Sorry for the confusion. With the following proxy configuration, the app behaves as expected even in production.
location / {
    proxy_pass http://127.0.0.1:3012;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    # The next four directives are what make streaming work:
    proxy_http_version 1.1;             # chunked transfer requires HTTP/1.1 to the upstream
    proxy_set_header Connection "";     # keep the upstream connection open
    proxy_buffering off;                # forward chunks immediately instead of buffering the full response
    chunked_transfer_encoding on;
}
Thanks for your work!
Thanks for your input, but I actually made a fork to re-adapt the app for the Azure OpenAI API (it's not exactly the same as the OpenAI API). In doing so, I lost the word-by-word streaming of the answer, even though the Azure API supports it, which made me think I might have made a mistake.
Can you please give it a quick look?
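For reference, a rough sketch of how the request setup differs between the two APIs. The `buildChatRequest` helper and the resource/deployment/api-version values are placeholders of mine, not from this repo; check them against your Azure deployment. Streaming itself works the same way on both: set `"stream": true` in the request body and the deltas arrive as SSE chunks.

```typescript
interface ChatRequestConfig {
  url: string;
  headers: Record<string, string>;
}

// Hypothetical helper: build the endpoint URL and auth headers for either provider.
function buildChatRequest(
  provider: "openai" | "azure",
  apiKey: string,
  opts: { resource?: string; deployment?: string; apiVersion?: string } = {}
): ChatRequestConfig {
  if (provider === "azure") {
    // Azure routes requests per deployment and authenticates with an "api-key" header.
    const {
      resource = "my-resource",       // placeholder Azure resource name
      deployment = "my-deployment",   // placeholder deployment name
      apiVersion = "2023-05-15",      // assumption: pick the version your deployment supports
    } = opts;
    return {
      url: `https://${resource}.openai.azure.com/openai/deployments/${deployment}/chat/completions?api-version=${apiVersion}`,
      headers: { "Content-Type": "application/json", "api-key": apiKey },
    };
  }
  // OpenAI uses a single endpoint and a Bearer token.
  return {
    url: "https://api.openai.com/v1/chat/completions",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
  };
}
```

If the fork only swapped the URL but kept the `Authorization: Bearer` header (or vice versa), requests can still succeed in some setups while subtle differences break streaming, so this is worth double-checking first.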
Hi,
Thanks again for this wonderful tool. I noticed that you have two files, "OpenAI.ts" and "OpenAIProvider.tsx"; I suppose the second is there to keep track of the conversation. However, the interface is frozen until the response is complete, and then the entire message is printed at once. A better approach would be to print the words one by one as they come from the stream. I tried to edit this but can't seem to solve it.
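For anyone attempting this, here is a sketch of the idea (not from the repo; `parseSSEChunk` and `onToken` are names I made up): parse the OpenAI-style SSE wire format ("data: {json}\n\n" chunks, terminated by "data: [DONE]") and push each delta token to the UI as it arrives instead of waiting for the full response.

```typescript
// Extract the delta tokens from one decoded SSE chunk.
function parseSSEChunk(chunk: string): string[] {
  const tokens: string[] = [];
  for (const line of chunk.split("\n")) {
    const trimmed = line.trim();
    if (!trimmed.startsWith("data:")) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    try {
      const json = JSON.parse(payload);
      // Chat-completion deltas carry the token in choices[0].delta.content.
      const token = json.choices?.[0]?.delta?.content;
      if (token) tokens.push(token);
    } catch {
      // A JSON object split across chunk boundaries would need buffering;
      // omitted here for brevity.
    }
  }
  return tokens;
}

// Read the fetch response body and invoke onToken for every token,
// so the UI can append text incrementally (e.g. via a React state setter).
async function streamToUI(
  body: ReadableStream<Uint8Array>,
  onToken: (t: string) => void
): Promise<void> {
  const reader = body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    for (const token of parseSSEChunk(decoder.decode(value, { stream: true }))) {
      onToken(token);
    }
  }
}
```

The key change is that `onToken` fires per delta, so the component re-renders with partial text rather than staying frozen until the stream ends.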