Closed · david1542 closed this 5 months ago
Hey @david1542 👋 This is a really interesting request and I'll share it with the team, but I don't believe there are any immediate plans for streaming HTTP requests with the Web API.
FWIW I've also experimented with this and found multiple edits to work alright. I get rate limited often though, so I'll probably change this to send at most one chat.update call per second.
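The once-per-second approach mentioned above can be sketched as a small throttle around the message-update call. This is only a sketch: the `post_update` callable is a stand-in for a real call such as slack_sdk's `client.chat_update(channel=..., ts=..., text=...)`, injected here so the throttling logic works without network access.

```python
import time

class ThrottledStreamer:
    """Accumulate streamed text and emit at most one update per interval.

    `post_update` stands in for a real Slack call such as slack_sdk's
    client.chat_update(channel=..., ts=..., text=...) (an assumption of
    this sketch); it is injected so the logic is testable offline.
    """

    def __init__(self, post_update, interval=1.0, clock=time.monotonic):
        self.post_update = post_update
        self.interval = interval
        self.clock = clock          # injectable clock, eases testing
        self.buffer = ""
        self._last_sent = None

    def feed(self, chunk: str) -> None:
        """Append a streamed token/chunk; send only if the interval elapsed."""
        self.buffer += chunk
        now = self.clock()
        if self._last_sent is None or now - self._last_sent >= self.interval:
            self.post_update(self.buffer)
            self._last_sent = now

    def flush(self) -> None:
        """Always send the final, complete text once streaming ends."""
        self.post_update(self.buffer)
```

Feeding each LLM token into `feed()` and calling `flush()` at the end keeps the visible message fresh while capping the number of chat.update calls.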
Thanks for the quick reply, @zimeg! The edits solution seems reasonable for now; I think I'll give it a shot :) I'd love to see native streaming in Slack though, since I think it'd improve the UX of all the LLM bots these days.
Hey @david1542, my example does something similar to reflect chunked responses from OpenAI's API: https://github.com/seratch/ChatGPT-in-Slack Hope this helps.
It looks like this issue has been open for 30 days with no activity. We'll mark this as stale for now, and wait 10 days for an update or for further comment before closing this issue out. If you think this issue needs to be prioritized, please comment to get the thread going again! Maintainers also review issues marked as stale on a regular basis and comment or adjust status if the issue needs to be reprioritized.
As this issue has been inactive for more than one month, we will be closing it. Thank you to all the participants! If you would like to raise a related issue, please create a new issue which includes your specific details and references this issue number.
This feature would give a great user experience for LLM bots in Slack. Any future plans on this?
Is there any official streaming support in the Slack messaging API yet, or any plans for it?
@aswinselva-sf @zhongli1990 we're tracking this request internally but don't have any updates or plans to share at this time. If anything changes around this we'll be sure to share updates, but in the meantime the workaround above, or signaling progress with a :thinking: reaction, might help with the user experience for time-consuming computations!
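The :thinking: reaction idea above can be wrapped in a small context manager. This is a sketch under assumptions: `client` is assumed to expose `reactions_add`/`reactions_remove` with keyword arguments matching slack_sdk's `WebClient`, and `"thinking_face"` is assumed to be the emoji name; any object with those two methods works, which keeps the pattern checkable offline.

```python
from contextlib import contextmanager

@contextmanager
def thinking_reaction(client, channel, ts, name="thinking_face"):
    """Show a reaction on a message while a slow computation runs.

    `client` is assumed to expose reactions_add/reactions_remove with
    keyword args matching slack_sdk's WebClient (an assumption of this
    sketch); the reaction is removed even if the computation raises.
    """
    client.reactions_add(channel=channel, name=name, timestamp=ts)
    try:
        yield
    finally:
        client.reactions_remove(channel=channel, name=name, timestamp=ts)
```

Usage would look like `with thinking_reaction(client, channel, ts): answer = call_llm(prompt)`, so users see immediate feedback while the LLM works.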
To increase visibility around requests for HTTP streaming of responses, it'd be appreciated if you add a 👍 reaction to the initial comment!
Please provide a chat.update equivalent with a higher rate limit for this use case, thanks!
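Until a higher-limit endpoint exists, rate-limited calls can at least be retried politely. As an assumption of this sketch: on an HTTP 429 the real slack_sdk raises `SlackApiError` whose response carries a `Retry-After` header; here any exception exposing a `retry_after` attribute stands in for that, and `sleep` is injectable so the backoff logic runs without the SDK or network.

```python
import time

def call_with_backoff(fn, max_retries=3, sleep=time.sleep):
    """Retry a Slack Web API call when it is rate limited.

    `fn` wraps a call like client.chat_update(...). Any raised exception
    carrying a `retry_after` attribute (a stand-in for slack_sdk's 429
    handling, an assumption of this sketch) triggers a server-requested
    delay before retrying; other errors, or retry exhaustion, re-raise.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as err:
            retry_after = getattr(err, "retry_after", None)
            if retry_after is None or attempt == max_retries:
                raise
            sleep(retry_after)  # honor the server-requested delay
```

Combined with throttling edits to roughly one per second, this keeps a streaming bot inside the current limits most of the time.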
Hey everyone,
Currently, LLM APIs (like the OpenAI API) stream the LLM response token by token, since waiting for the entire response usually takes ~7-10 seconds.
Is there any intention to support streaming in the Slack platform? Namely, I'd like to build a chatbot in Slack and I want to stream its answers to the users. Currently there's no easy way of doing that, so I simply post the answer once the LLM has finished.
I was thinking of doing multiple "edits" to the original message, but I'm afraid it'd be rate limited due to the large number of API calls.
If you can shed some light on this issue, that would be great :)
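One way to bound the number of edits in the approach described above is to batch by token count rather than by time: emit one progressively longer message text per N streamed tokens. This is only an illustrative sketch; each yielded string would become the `text` of one chat.update call, and the batch size is a hypothetical tuning knob.

```python
def batched_edits(tokens, batch_size=20):
    """Yield progressively longer message texts, one per `batch_size` tokens.

    `tokens` is any iterable of streamed string chunks (e.g. from an
    LLM API). Each yielded value would be the text of one message edit;
    the final partial batch is always yielded so the finished answer
    is shown.
    """
    text, pending = "", 0
    for tok in tokens:
        text += tok
        pending += 1
        if pending == batch_size:
            yield text
            pending = 0
    if pending:
        yield text
```

With `batch_size=20`, a 400-token answer costs about 20 edits total, regardless of how fast the tokens arrive.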