theodo-group / LLPhant

LLPhant - A comprehensive PHP Generative AI Framework using OpenAI GPT 4. Inspired by Langchain
MIT License

How to use streaming to return messages as they are generated instead of waiting until everything has been processed #115

Open pslxx opened 4 months ago

pslxx commented 4 months ago

How can I use streaming to return messages as they are generated, instead of waiting until everything has been processed?

MaximeThoonsen commented 4 months ago

hey @pslxx, there are dedicated methods in the ChatInterface, like generateStreamOfText. Does that help you?
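
For reference, a minimal sketch of calling it (the OpenAIChat class name, its no-argument constructor, and the prompt are assumptions for illustration; the thread only confirms that the chat interface exposes generateStreamOfText and that it returns a PSR-7 StreamInterface):

        use LLPhant\Chat\OpenAIChat;

        $chat = new OpenAIChat();
        $stream = $chat->generateStreamOfText('Write a short poem about PHP.');

        // getContents() reads until the end of the stream, so this only
        // prints once the whole response has arrived, which is exactly
        // the limitation discussed below.
        echo $stream->getContents();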

messi89 commented 4 weeks ago

@MaximeThoonsen the generateStreamOfText function uses a Guzzle request (not an async request), so presumably the method waits for the whole response from the Ollama API.

Is there any way to read the streamed response from Ollama directly?

iztok commented 2 weeks ago

+1. I'm looking to iterate through each stream chunk, but the stream methods return a StreamInterface that doesn't allow this (https://github.com/theodo-group/LLPhant/issues/78#issuecomment-1939347314).

iztok commented 2 weeks ago

If anyone finds this helpful:

        use Psr\Http\Message\StreamInterface;

        // $stream is the StreamInterface returned by the generateStream* methods.
        $streamToIterator = function (StreamInterface $stream): Generator {
            while (!$stream->eof()) {
                yield $stream->read(32); // Adjust the chunk size as needed
            }
        };
        $iteratorStream = $streamToIterator($stream);

        foreach ($iteratorStream as $chunk) {
            // chunks are not token based anymore!
        }
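
A hypothetical way to wire this up end to end ($chat, $prompt, and the echo/flush approach are illustrative assumptions, not something confirmed in this thread):

        $stream = $chat->generateStreamOfText($prompt);
        foreach ($streamToIterator($stream) as $chunk) {
            echo $chunk; // forward each chunk as soon as it is read
            flush();     // assumes output buffering is disabled so chunks reach the client immediately
        }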

MaximeThoonsen commented 2 weeks ago

hello @ezimuel, how are you?

It seems there are a lot of questions around streaming. Can we still do streaming with StreamInterface and LLPhant? What would a clean, simple working example look like?

@iztok is the code you provided working for you to get a stream?

iztok commented 2 weeks ago

@iztok is the code you provided working for you to get a stream?

Yes, this returns an iterable stream I can use the same way I used the stream from the OpenAI library. One caveat: this stream's chunks are not tokens but 32-byte strings. I'm then broadcasting these chunks over a WebSocket to my chat clients.
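
If the fixed 32-byte slices are awkward for consumers, one option (a sketch, not something from this thread) is to re-buffer them on whitespace so clients receive word-like fragments instead of arbitrary byte slices:

        $rechunk = function (iterable $chunks): Generator {
            $buffer = '';
            foreach ($chunks as $chunk) {
                $buffer .= $chunk;
                // Emit everything up to the last space; keep the tail buffered
                // until the next chunk completes it.
                $pos = strrpos($buffer, ' ');
                if ($pos !== false) {
                    yield substr($buffer, 0, $pos + 1);
                    $buffer = substr($buffer, $pos + 1);
                }
            }
            if ($buffer !== '') {
                yield $buffer; // flush whatever is left at the end of the stream
            }
        };

        foreach ($rechunk($iteratorStream) as $piece) {
            // $piece now ends on a whitespace boundary
        }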

prykris commented 1 week ago

Yes, this returns an iterable stream I can use the same way I used the stream from the OpenAI library. One caveat: this stream's chunks are not tokens but 32-byte strings. I'm then broadcasting these chunks over a WebSocket to my chat clients.

@iztok I see how that is a caveat. Does it make any difference in your use case, or does it seriously impact the end-user experience?

I am trying to understand the pitfalls I might run into while trying to implement something similar.