ax-llm / ax

The unofficial DSPy framework. Build LLM-powered agents and agentic workflows based on the Stanford DSP paper.
http://axllm.dev
Apache License 2.0

Accessing Stream Chunks (Streamed generation) #36

Open · backslash112 opened 2 months ago

backslash112 commented 2 months ago

Any guidance on resolving these issues would be greatly appreciated. Thank you!

dosco commented 2 months ago

Sorry, I was planning to fix this earlier but was in the middle of our big migration to a monorepo. Looking into this now.

dosco commented 2 months ago

Fixed in the latest release.

taieb-tk commented 3 weeks ago

I would like to reopen this issue. On point two, I don't understand how to do it with `.chat`.

I can see that it is supposed to return a ReadableStream, and I have set `stream` to true, but I cannot get it to work.

Any examples or ideas, @dosco?

dosco commented 2 weeks ago

@taieb-tk Have you looked at the streaming1.ts and streaming2.ts examples? `stream: true` enables streaming with the underlying LLM provider to speed things up; the final fields are not streamed out.

taieb-tk commented 1 week ago

@dosco Yes, I did, but I could not get it to work; probably a skill issue on my side. I tried to just use the following:

```ts
const ai = new ax.AxAIOpenAI({ apiKey: apiKey as string });

ai.setOptions({ debug: true });

const response = await ai.chat({
  chatPrompt: conversationHistory,
  config: {
    stream: true,
  },
  ...(tools?.length && { functions: normalizeFunctions(tools) }),
});
```

I'm not sure what to do with the response in the next step... Could you possibly help me? :)
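
For anyone landing here: below is a minimal sketch of one way to consume the response, assuming the `.chat()` call with `stream: true` resolves to a standard web `ReadableStream`, as suggested earlier in this thread. The chunk type and the logging are illustrative assumptions, not the library's documented API; the repo's streaming1.ts and streaming2.ts examples are the authoritative reference.

```ts
// Sketch only: assumes `response` from the ai.chat() call above is a web
// ReadableStream when `config.stream` is true (as suggested earlier in this
// thread). The chunk type is unknown here, so it is treated as `unknown`.
const stream = response as ReadableStream<unknown>;
const reader = stream.getReader();

try {
  // Standard Web Streams consumption loop: read chunks until done.
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Inspect each streamed chunk; its exact shape depends on the provider.
    console.log(value);
  }
} finally {
  // Release the reader so the stream can be cancelled or read elsewhere.
  reader.releaseLock();
}
```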