langchain-ai / langchain-aws

Build LangChain Applications on AWS
MIT License

langchain_aws/llms/bedrock.py "stop_sequence" hardcoded for msg_type == "message_delta" - breaks with bedrock guardrails with langchain agents #214

Open shayanrayamzn opened 4 hours ago

shayanrayamzn commented 4 hours ago

Hello,

In langchain_aws/llms/bedrock.py, the "stop_sequence" key is hardcoded for msg_type == "message_delta". Shouldn't this key be resolved from the LLM provider type via self._get_provider()?

This breaks the integration between Bedrock guardrails and LangChain agents when a guardrail intervenes.


 elif msg_type == "message_delta":
            return AIMessageChunk(
                content="",
                response_metadata={
                    "stop_reason": stream_response["delta"]["stop_reason"],
                    "stop_sequence": stream_response["delta"]["stop_sequence"],  # <-- hardcoded key
                },
            )
shayanrayamzn commented 4 hours ago

If there are any workarounds for this, please let me know. Adding a provider_stop_sequence_key_name_map works for other issues, but not in this case, when a guardrail intervenes and execution enters the msg_type == "message_delta" block:

llm.provider_stop_sequence_key_name_map = {
    'anthropic': 'stop_sequences',
    'amazon': 'stopSequences',
    'ai21': 'stop_sequences',
    'cohere': 'stop_sequences',
    'mistral': 'stop',
}
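To see why the map workaround does not help here: the message_delta branch indexes the delta with the literal "stop_sequence" key, so the access itself fails before the map is ever consulted. A simulated example, with the payload shape assumed (a guardrail-intervened delta may omit the field):

```python
# Simulated message_delta payload when a guardrail intervenes; the field
# names and payload shape here are assumptions for illustration.
stream_response = {"delta": {"stop_reason": "guardrail_intervened"}}

try:
    # The hardcoded access from bedrock.py: raises if the key is absent.
    stop_sequence = stream_response["delta"]["stop_sequence"]
except KeyError:
    stop_sequence = None  # guardrail deltas may not carry this field

print(stop_sequence)
```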