langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License

streaming is not working with titan model (amazon.titan-text-express-v1) #3179

Closed: yadavPavan94 closed this issue 5 months ago

yadavPavan94 commented 11 months ago

```typescript
import { userHashedId } from "@/features/auth/helpers";
import { CosmosDBChatMessageHistory } from "@/features/langchain/memory/cosmosdb/cosmosdb";
import { ConversationChain } from "langchain/chains";
import { BufferWindowMemory } from "langchain/memory";
import { PromptTemplate } from "langchain/prompts";
import { Bedrock } from "langchain/llms/bedrock";
import { initAndGuardChatSession } from "../chat-services/chat-thread-service";
import { PromptGPTProps } from "../chat-services/models";
import { transformConversationStyleToTemperature } from "../chat-services/utils";
import { LangChainStream, StreamingTextResponse } from "ai";

// Prompt templates keyed by model alias: "gpt-4" maps to Titan, "gpt-3.5" to Claude.
const templates: { [key: string]: string } = {
  "gpt-3.5": `\n\nHuman:The following is a conversation between a human and an AI. The AI is responsible to answer questions related to cpg, retail, industrial, mathematical, bfsi, analytical, finance, engineering, general knowledge, logical, reasoning, network, security, safety, companies etc. The AI is a talkative, knowledgeable, helpful assistant and provides lots of details from all the domains. The AI should provide information about anything you want. If the AI does not know the answer to a question, it truthfully says it does not know.

Current conversation: {history} {input}\n\nAssistant:`,
  "gpt-4": `\n\nHuman:The following is a conversation between a human and an AI. The AI is a talkative, knowledgeable, helpful assistant and provides lots of details from all the domains. The AI should provide information about anything you want.

Current conversation: {history} {input} AI:\n\nAssistant:`,
};

export const ChatSimple = async (props: PromptGPTProps) => {
  const { stream, handlers } = LangChainStream();
  const { lastHumanMessage, id, chatThread, model } =
    await initAndGuardChatSession(props);

  console.log("model", model);

  const userId = await userHashedId();

  const llm = new Bedrock({
    model:
      model == "gpt-4"
        ? process.env.BEDROCK_MODEL_TITAN
        : process.env.BEDROCK_MODEL_CLAUDE,
    region: process.env.BEDROCK_REGION ?? "",
    credentials: {
      accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID ?? "",
      secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY ?? "",
    },
    temperature: transformConversationStyleToTemperature(
      chatThread.conversationstyle
    ),
    maxTokens: 800,
    streaming: true,
    // modelKwargs: getKwargs(model),
  });

  const memory = new BufferWindowMemory({
    k: 100,
    returnMessages: true,
    memoryKey: "history",
    chatHistory: new CosmosDBChatMessageHistory({
      sessionId: id,
      userId: userId,
    }),
  });

  const chatPrompt = new PromptTemplate({
    inputVariables: ["history", "input"],
    template: templates[model],
  });

  const chain = new ConversationChain({
    llm,
    memory,
    prompt: chatPrompt,
    verbose: false,
  });

  // Non-streaming variant kept for reference:
  // const result = await chain.call({ input: lastHumanMessage.content });
  // let answer = result["response"];
  // return new Response(answer);

  // Streaming: pass the LangChainStream handlers so tokens are piped into `stream`.
  chain.call({ input: lastHumanMessage.content }, [handlers]);

  return new StreamingTextResponse(stream);
};
```

dosubot[bot] commented 11 months ago

🤖

Hello @yadavPavan94,

I'm Dosu, a helpful bot here on the langchainjs repository. I'm designed to assist you with solving bugs, guiding you through the process of becoming a contributor, and answering any questions you might have about the project.

While we wait for a human maintainer to join us, I'm here to provide you with some preliminary assistance. I'm currently taking a look at your issue and will provide you with a more comprehensive answer shortly.

dosubot[bot] commented 11 months ago

🤖

Based on the context provided, it seems like streaming is not working with the Titan model (amazon.titan-text-express-v1) in the langchainjs framework because the code provided uses the OpenAI model, not the Titan model. The OpenAI model is initialized with streaming set to true, but the Titan model is never mentioned or used in the provided code.

To use the Titan model with streaming, you would need to import and initialize it similarly, ensuring that the streaming option is set to true. However, the exact implementation depends on the specifics of the Titan model's API and how it is integrated into the langchainjs framework.

In the context shared, the example code uses the OpenAI model with the `LLMChain` class in streaming mode. If you want to use the Titan model, you would need to replace the OpenAI model in the `new OpenAI({ temperature: 0.9, streaming: true });` line, importing and initializing the Titan model in a similar way. The `handleLLMNewToken` callback passed to the `chain.call` method handles the streamed tokens; if the Titan model supports streaming, it should work in a similar way.
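For illustration, a minimal sketch of that pattern pointed at Titan instead of OpenAI might look like the following. It assumes the `Bedrock` class from `langchain/llms/bedrock` (as used in the code above) with region and credentials resolved from the environment; the prompt text and the `us-east-1` fallback are placeholders:

```typescript
import { Bedrock } from "langchain/llms/bedrock";
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";

// Sketch: Titan via Bedrock with per-token streaming callbacks.
// Credentials are assumed to come from the default AWS provider chain.
const llm = new Bedrock({
  model: "amazon.titan-text-express-v1",
  region: process.env.BEDROCK_REGION ?? "us-east-1",
  streaming: true,
});

const chain = new LLMChain({
  llm,
  prompt: PromptTemplate.fromTemplate("Question: {input}\n\nAnswer:"),
});

await chain.call({ input: "What is response streaming?" }, [
  {
    // Fires once per streamed chunk. A model without chunked streaming
    // support may deliver the entire completion in a single call.
    handleLLMNewToken(token: string) {
      process.stdout.write(token);
    },
  },
]);
```

If Titan only returns a single chunk through the Bedrock integration, `handleLLMNewToken` will fire once with the whole completion, which matches the behavior reported in this issue.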

However, without the exact error message or behavior observed when trying to use streaming with the titan model, it's hard to provide a more specific solution. Could you please provide more details about the error you're encountering? This will help in diagnosing the issue more accurately.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

jacoblee93 commented 11 months ago

Not all models support streaming in multiple chunks unfortunately - I believe Titan is currently one of them. Will leave open for now but can't fix until Bedrock supports it.

ninjamaster1337 commented 11 months ago

Looks like, per their docs, Titan supports streaming through the InvokeModelWithResponseStream API:

https://docs.aws.amazon.com/bedrock/latest/APIReference/API_runtime_InvokeModelWithResponseStream.html
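To confirm that the endpoint itself streams in multiple chunks, here is a minimal sketch of calling it directly with the AWS SDK, bypassing LangChain. It assumes `@aws-sdk/client-bedrock-runtime` with default credential resolution; the region, prompt, and generation config values are placeholders, and the request/response field names follow the Titan text schema from the linked docs:

```typescript
import {
  BedrockRuntimeClient,
  InvokeModelWithResponseStreamCommand,
} from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "us-east-1" });

const response = await client.send(
  new InvokeModelWithResponseStreamCommand({
    modelId: "amazon.titan-text-express-v1",
    contentType: "application/json",
    accept: "application/json",
    body: JSON.stringify({
      inputText: "Explain response streaming in one paragraph.",
      textGenerationConfig: { maxTokenCount: 200, temperature: 0.5 },
    }),
  })
);

// The response body is an async iterable of events; each chunk
// carries raw JSON bytes containing a partial `outputText`.
const decoder = new TextDecoder();
for await (const event of response.body ?? []) {
  if (event.chunk?.bytes) {
    const parsed = JSON.parse(decoder.decode(event.chunk.bytes));
    process.stdout.write(parsed.outputText ?? "");
  }
}
```

If this prints the answer incrementally while the LangChain path above does not, the gap is in the integration layer rather than the Bedrock API.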

gkgourav45 commented 8 months ago

I am also facing the same issue. Streaming works fine with the Claude model but not with the Titan model. When I call Titan's Bedrock streaming API directly, streaming works fine.

dosubot[bot] commented 5 months ago

Hi, @yadavPavan94

I'm helping the langchainjs team manage their backlog and am marking this issue as stale. From what I understand, the issue you opened is about streaming not working with the Titan model (amazon.titan-text-express-v1). There have been responses from various contributors, including guidance on using the Titan model with streaming, notes on the current limits of Titan's chunked streaming support, and additional insights from other users.

Could you please confirm if this issue is still relevant to the latest version of the langchainjs repository? If it is, please let the langchainjs team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you!