ax-llm / ax

The unofficial DSPy framework. Build LLM-powered agents and "agentic workflows" based on the Stanford DSP paper.
http://axllm.dev
Apache License 2.0

Accessing Stream Chunks (Streamed generation) #36

Open backslash112 opened 4 months ago

backslash112 commented 4 months ago

Any guidance on resolving these issues would be greatly appreciated. Thank you!

dosco commented 4 months ago

Sorry, I was planning to fix this earlier but was in the middle of our big migration to a monorepo. Looking into this now.

dosco commented 4 months ago

Fixed in the latest release.

taieb-tk commented 3 months ago

I would like to reopen this issue. Regarding point two, I don't understand how to do this with .chat.

I can see that it is supposed to return a ReadableStream, and I have set stream to true, but I cannot get it to work.

Any examples or ideas, @dosco?

dosco commented 2 months ago

@taieb-tk have you looked at the streaming1.ts and streaming2.ts examples? stream: true enables streaming with the underlying LLM provider to speed things up; the final fields are not streamed out.

taieb-tk commented 2 months ago

@dosco Yes, I did, but I could not get it to work; probably a skill issue on my side. I tried to just use the following:

```ts
const ai = new ax.AxAIOpenAI({ apiKey: apiKey as string });

ai.setOptions({ debug: true })

const response = await ai.chat({
    chatPrompt: conversationHistory,

    config: {
        stream: true
    },
    ...(tools?.length && { functions: normalizeFunctions(tools) }),
});
```

Not sure what to do with the response in the next step... Could you possibly help me? :)

taieb-tk commented 1 month ago

Bump, any help would be appreciated :)

dosco commented 1 month ago

The bug is in the snippet below, which is wrong. TypeScript should catch this; it's even in the API docs: https://axllm.dev/apidocs/classes/axai/

```ts
config: {
    stream: true
},
```

It should be:

```ts
ai.setOptions({ debug: true })

const response = await ai.chat({
    chatPrompt: conversationHistory,
    ...(tools?.length && { functions: normalizeFunctions(tools) }),
}, {
   stream: true
});
```

taieb-tk commented 1 month ago

The response is supposed to be an AsyncGenerator? Any examples on how to consume that stream?

taieb-tk commented 1 month ago

Ahh ok! I read a bit too fast, I will try that! Thanks!! :)

dosco commented 1 month ago

Use an async for-loop like in the streaming examples.
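
For reference, a minimal sketch of consuming the streamed response, reusing the setup from the snippets above and assuming the streamed chat response is async-iterable; the exact chunk shape isn't shown in this thread, so check the streaming1.ts / streaming2.ts examples in the repo:

```ts
// Minimal sketch, reusing ax, apiKey, and conversationHistory from the earlier snippets.
// Assumption: with { stream: true } the response is async-iterable (AsyncGenerator-like).
const ai = new ax.AxAIOpenAI({ apiKey: apiKey as string });
ai.setOptions({ debug: true });

const response = await ai.chat(
  { chatPrompt: conversationHistory },
  { stream: true }
);

for await (const chunk of response) {
  // Each chunk should carry a partial chat response; log or accumulate it as it arrives.
  console.log(chunk);
}
```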

taieb-tk commented 1 month ago

Your answer above solved my problem! Really appreciate the help. Looking forward to continuing to test this 👍