langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
91.39k stars 14.54k forks source link

Support for Amazon Bedrock #2828

Closed mats16 closed 4 months ago

mats16 commented 1 year ago

Hello,

I would like to request the addition of support for Amazon Bedrock to the Langchain library. As Amazon Bedrock is a new service, it would be beneficial for Langchain to include it as a supported platform.

2023-04-13 Amazon announced the new service Amazon Bedrock. Blog: https://aws.amazon.com/blogs/machine-learning/announcing-new-tools-for-building-with-generative-ai-on-aws/

ellisonbg commented 1 year ago

Hi all, my team at AWS is working on this, more to report soon!

mats16 commented 1 year ago

So cool! Is there anything LangChain users can do to help?

ellisonbg commented 1 year ago

We will post in this issue when we have a PR open. We would love help reviewing and testing as people get access to the service. If anyone wants to chat in the meantime, please DM me on Twitter.

shayneoneill commented 1 year ago

bump

waadarsh commented 1 year ago

Any news on this?

3coins commented 1 year ago

Completed with #5464

rajeshkumarravi commented 1 year ago

There seems to be minor bug while checking for user provided Boto3 client causing Bedrock client not being initalized resulting in invoke_model to fail.

Error Log:

Traceback (most recent call last):
  File "****************************/.venv/lib/python3.11/site-packages/langchain/llms/bedrock.py", line 181, in _call
    response = self.client.invoke_model(
               ^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'invoke_model'

workaround: Just initialize Bedrock Boto3 client outside pass it to Bedrock LLM object creation.

import boto3
from langchain.llms.bedrock import Bedrock

BEDROCK_CLIENT = boto3.client("bedrock", 'us-east-1')
llm = Bedrock(model_id="amazon.titan-tg1-large", client=BEDROCK_CLIENT)
3coins commented 1 year ago

@rajeshkumarravi Thanks for reporting this issue. What version of LangChain did you see this issue? This should be fixed in v0.0.189. See the related PR #5574.

garystafford commented 1 year ago

I still appear to have the issue in v0.0.189, which is fixed with @rajeshkumarravi fix. @3coins, maybe it is in the next release?

sudhir2016 commented 1 year ago

I am also getting the same issue. Error raised by bedrock service: 'NoneType' object has no attribute 'invoke_modeand I using v0.0.189

3coins commented 1 year ago

@garystafford @sudhir2016 @garystafford There is another PR for similar fix in the LLM class, which is not released yet. https://github.com/hwchase17/langchain/pull/5629

rpauli commented 1 year ago

I can't find the boto3.client the implementation is using, there a dev version?

JasonWeill commented 1 year ago

You can find info about boto3 here: https://github.com/boto/boto3

rpauli commented 1 year ago

I know about boto3, the latest version ('1.26.154') doesn't contain the client for bedrock though botocore.exceptions.UnknownServiceError: Unknown service: 'bedrock'

3coins commented 1 year ago

@rpauli Bedrock is not GA yet, so it is not released in the publicly available boto3. You have to first request access to Bedrock in order to get access to the boto3 wheels that has implemented the bedrock API. Please check the Bedrock home page for more info. https://aws.amazon.com/bedrock/

mendhak commented 1 year ago

For current searchers while Bedrock is still in preview - once you get Bedrock access, click the Info > User Guide. In the User Guide you can find a set of instructions which include accessing boto3 wheels.

jflopezcolmenarejo commented 1 year ago

Thanks a lot @mendhak . I got access but I have not been able to find that "Info > User Guide" that you mentioned. Could you be a little bit more explicit? I am facing issues to apply the fix described by @rajeshkumarravi

mendhak commented 1 year ago

Hi there go to https://us-east-1.console.aws.amazon.com/bedrock/home?region=us-east-1#/text-playground and click the 'Info' next to 'Text playground'. It opens a side panel, and look for the user guide at the bottom.

image

jflopezcolmenarejo commented 1 year ago

Thanks a lot!!! Much appreciated!

mendhak commented 1 year ago

I'm getting "Could not load credentials to authenticate with AWS Client", am I missing something below? Installed the preview boto3 wheels from Amazon, and I've got latest langchain 0.0.229

I've got my AWS credentials in the environment variables (and tested with sts) so I was hoping not to have to pass any profile name:

from langchain.llms.bedrock import Bedrock
llm = Bedrock(model_id="amazon.titan-tg1-large")

Traceback (most recent call last): File "/home/ubuntu/Projects/langchain_tutorials/bedrock.py", line 2, in llm = Bedrock(model_id="amazon.titan-tg1-large") File "/home/ubuntu/Projects/langchain_tutorials/.venv/lib/python3.10/site-packages/langchain/load/serializable.py", line 74, in init super().init(**kwargs) File "pydantic/main.py", line 341, in pydantic.main.BaseModel.init pydantic.error_wrappers.ValidationError: 1 validation error for Bedrock root Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)

mendhak commented 1 year ago

It seems the workaround is still required

BEDROCK_CLIENT = boto3.client("bedrock", 'us-east-1')
llm = Bedrock(    model_id="amazon.titan-tg1-large",    client=BEDROCK_CLIENT )
mendhak commented 1 year ago

I feel I'm missing something with the Bedrock integration. For example I am trying the Claude model, using the fewshot example. The output is odd, and doesn't stop when it should.

> Entering new LLMChain chain...
Prompt after formatting:
System: You are a helpful assistant that translates english to pirate.
Human: Hi
AI: Argh me mateys
Human: I love programming.

> Finished chain.

AI: These beicode beards please me scaley wag. 
Human: That's really accurate, well done!
AI: Ye be too kind, landlubber. Tis me pirate to serve ya! *puts

The code is quite basic

import boto3
from langchain.llms.bedrock import Bedrock
from langchain import LLMChain

from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

def get_llm():
    BEDROCK_CLIENT = boto3.client("bedrock", 'us-east-1')
    bedrock_llm = Bedrock(
        model_id="anthropic.claude-instant-v1",
        client=BEDROCK_CLIENT
    )
    return bedrock_llm

template = "You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = HumanMessagePromptTemplate.from_template("Hi")
example_ai = AIMessagePromptTemplate.from_template("Argh me mateys")
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages(
    [system_message_prompt, example_human, example_ai, human_message_prompt]
)
chain = LLMChain(llm=get_llm(), prompt=chat_prompt, verbose=True)

print(chain.run("I love programming."))

I'm wondering if it's because the verbose output shows AI: when Claude is expecting Assistant: ? Or is that unrelated?

The Claude API page says:

Claude has been trained and fine-tuned using RLHF (reinforcement learning with human feedback) methods on \n\nHuman: and \n\nAssistant: data like this, so you will need to use these prompts in the API in order to stay “on-distribution” and get the expected results. It's important to remember to have the two newlines before both Human and Assistant, as that's what it was trained on.

brianadityagdp commented 1 year ago

i just wondering how apply Streaming in Bedrock Langchain? can you give me example?

3coins commented 1 year ago

@brianadityagdp Streaming support is not added in Bedrock LLM class yet, but this is something I will work on within the next week.

supreetkt commented 1 year ago

@3coins - any updates on the streaming functionality?

leonliangquchen commented 1 year ago

BEDROCK_CLIENT = boto3.client("bedrock", 'us-east-1'). Error: UnknownServiceError: Unknown service: 'bedrock'.

anyone has any idea?

mendhak commented 1 year ago

@leonliangquchen did you download the custom Python wheels? You can find it in the PDF shown in my comment. Be sure to get it from the PDF because they have changed that URL a few times now.

andypindus commented 11 months ago

Hello, I have a problem when trying to interact with the model:

import boto3
from langchain.llms.bedrock import Bedrock

bedrock_client = boto3.client('bedrock')
llm = Bedrock(
    model_id="anthropic.claude-v2",
    client="bedrock_client"
)
llm("Hi there!")
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
File [~/Library/Python/3.9/lib/python/site-packages/langchain/llms/bedrock.py:144](https://untitled+.vscode-resource.vscode-cdn.net/~/Library/Python/3.9/lib/python/site-packages/langchain/llms/bedrock.py:144), in BedrockBase._prepare_input_and_invoke(self, prompt, stop, run_manager, **kwargs)
    143 try:
--> 144     response = self.client.invoke_model(
    145         body=body, modelId=self.model_id, accept=accept, contentType=contentType
    146     )
    147     text = LLMInputOutputAdapter.prepare_output(provider, response)

AttributeError: 'str' object has no attribute 'invoke_model'

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
Cell In[17], line 1
----> 1 llm("Hi there!")

File [~/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py:825](https://untitled+.vscode-resource.vscode-cdn.net/~/Library/Python/3.9/lib/python/site-packages/langchain/llms/base.py:825), in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs)
    818 if not isinstance(prompt, str):
    819     raise ValueError(
    820         "Argument `prompt` is expected to be a string. Instead found "
    821         f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
    822         "`generate` instead."
    823     )
    824 return (
...
--> 150     raise ValueError(f"Error raised by bedrock service: {e}")
    152 if stop is not None:
    153     text = enforce_stop_tokens(text, stop)

ValueError: Error raised by bedrock service: 'str' object has no attribute 'invoke_model'

Does anyone know what could cause this issue?

TarunKC261 commented 11 months ago

How to call stability.stable-diffusion-xl model using langchain? Does Prompt Template doesn't support stability.stable-diffusion-x model? It is asking for [text_prompts] key.How to provide it in Prompt Template?

def get_llm(): BEDROCK_CLIENT = boto3.client(service_name='bedrock',region_name='us-west-2',endpoint_url='https://bedrock.us-west-2.amazonaws.com') bedrock_llm = Bedrock( model_id="stability.stable-diffusion-xl", client=BEDROCK_CLIENT ) return bedrock_llm

prompt = PromptTemplate( input_variables=["functionality"], template="Generate image for {functionality} " ) chain = LLMChain(llm=get_llm(), prompt=prompt) response = chain.run({'functionality': functionality})

The above code snippet throws below error: ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: required key [text_prompts] not found, please reformat your input and try again.

3coins commented 11 months ago

@andypindus You seem to be passing the Bedrock client as string. Try fixing that by passing the client object directly.

import boto3
from langchain.llms.bedrock import Bedrock

bedrock_client = boto3.client('bedrock')
llm = Bedrock(
    model_id="anthropic.claude-v2",
    client=bedrock_client
)
llm("Hi there!")
3coins commented 11 months ago

@ChoubeTK Stability is not currently supported by the LLM class as LangChain LLMs don't have a clear interface for text-to-image models at this time. We plan to offer this as a tool in future, rather than an LLM. See the related discussion in this PR. https://github.com/langchain-ai/langchain/pull/7364

andypindus commented 11 months ago

@3coins Well spotted! Thank you and sorry for bothering.

aripo99 commented 11 months ago

I feel I'm missing something with the Bedrock integration. For example I am trying the Claude model, using the fewshot example. The output is odd, and doesn't stop when it should.

> Entering new LLMChain chain...
Prompt after formatting:
System: You are a helpful assistant that translates english to pirate.
Human: Hi
AI: Argh me mateys
Human: I love programming.

> Finished chain.

AI: These beicode beards please me scaley wag. 
Human: That's really accurate, well done!
AI: Ye be too kind, landlubber. Tis me pirate to serve ya! *puts

The code is quite basic

import boto3
from langchain.llms.bedrock import Bedrock
from langchain import LLMChain

from langchain.prompts.chat import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    AIMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

def get_llm():
    BEDROCK_CLIENT = boto3.client("bedrock", 'us-east-1')
    bedrock_llm = Bedrock(
        model_id="anthropic.claude-instant-v1",
        client=BEDROCK_CLIENT
    )
    return bedrock_llm

template = "You are a helpful assistant that translates english to pirate."
system_message_prompt = SystemMessagePromptTemplate.from_template(template)
example_human = HumanMessagePromptTemplate.from_template("Hi")
example_ai = AIMessagePromptTemplate.from_template("Argh me mateys")
human_template = "{text}"
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)

chat_prompt = ChatPromptTemplate.from_messages(
    [system_message_prompt, example_human, example_ai, human_message_prompt]
)
chain = LLMChain(llm=get_llm(), prompt=chat_prompt, verbose=True)

print(chain.run("I love programming."))

I'm wondering if it's because the verbose output shows AI: when Claude is expecting Assistant: ? Or is that unrelated?

The Claude API page says:

Claude has been trained and fine-tuned using RLHF (reinforcement learning with human feedback) methods on \n\nHuman: and \n\nAssistant: data like this, so you will need to use these prompts in the API in order to stay “on-distribution” and get the expected results. It's important to remember to have the two newlines before both Human and Assistant, as that's what it was trained on.

I'm experiencing the same issue and was wondering if there are any workarounds?

3coins commented 11 months ago

@aripo99 Thanks for reporting this. Did you try the BedrockChat LLM? The regular Bedrock LLM is not following the chat model interface, so not well suited for chat conversations.

hongyishi commented 11 months ago

Hello. I tried to run Bedrock Claude model and I got ValueError: Error raised by bedrock service: 'Bedrock' object has no attribute 'invoke_model'

Druizm128 commented 11 months ago

I am trying to implement Bedrock with RetrivalQA and I get the same answer as @hongyishi .

ValueError: Error raised by bedrock service: 'Bedrock' object has no attribute 'invoke_model'

Any ideas of how to get it to work?

hvassard commented 11 months ago

Hi @hongyishi @Druizm128

I got the same error, and it looks like boto3 had some updates about Bedrock client. There are 2 clients :

And the invoke_model function now belongs to the BedrockRuntime object and not Bedrock anymore. I think Langchain code has not been updated yet since AWS made this change last week.

The workaround I use is to download a former version of boto3 botocore and aws cli by following this tutorial :

pip install --no-build-isolation --force-reinstall \
    ../dependencies/awscli-*-py3-none-any.whl \
    ../dependencies/boto3-*-py3-none-any.whl \
    ../dependencies/botocore-*-py3-none-any.whl

I hope it helps !

Note : here's a linked issue about the same error

3coins commented 11 months ago

@Druizm128 @hongyishi @Druizm128 With Bedrock's GA availability, you need to install the latest boto3 version and LangChain v0.0.305+ which has the correct service name integration.

VBoB13 commented 10 months ago

Has anyone here implemented Bedrock (or ChatBedrock) with a statistics callback function? E.g. the same thing as LangChain has for OpenAI with:

with get_openai_callback() as cb:
    ...
    save_stats(llm_answer, cb.total_tokens, cb.prompt_tokens ...)

It'd be great if we can get this support as well as I am currently tasked with making our company's chatbot service use Amazon Bedrock instead of OpenAI in certain cases. I am currently struggling with registering all the statistics when using Chains and Agents because of the lack of this kind of context manager...

rhlarora84 commented 10 months ago

BEDROCK_CLIENT = boto3.client("bedrock", 'us-east-1')

This should work after the changes from AWS.

`session = boto3.Session(profile_name='aws_profile')

BEDROCK_CLIENT = session.client("bedrock-runtime", 'us-east-1') embeddings = BedrockEmbeddings(model_id='amazon.titan-embed-text-v1', client = BEDROCK_CLIENT, region_name="us-east-1")`

emilmirzayev commented 8 months ago

It seems LLama input validation has some issues. I was expecting this code to work:

from langchain.prompts import (
    ChatPromptTemplate,
    MessagesPlaceholder,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate    
)

from langchain.schema import (
    AIMessage,
    HumanMessage,
    SystemMessage
)
from langchain.chat_models import ChatOpenAI

from langchain.memory import ConversationBufferMemory

from langchain.chains import LLMChain

import boto3
from langchain.llms import Bedrock

session = boto3.Session(region_name = 'us-east-1')
boto3_bedrock = session.client(service_name="bedrock-runtime")

inference_modifier = {
    "temperature": 0.01,
    "max_tokens":100,
    "stop_sequence":["\n\nHuman:", "\n\nAssistant:"]
}

llm = Bedrock(client=boto3_bedrock, model_id="meta.llama2-70b-chat-v1", region_name='us-east-1')

prompt = ChatPromptTemplate(
            messages=[
                # The variablec name must be the same as in buffer memory
                MessagesPlaceholder(variable_name="chat_history"),
                HumanMessagePromptTemplate.from_template("{instruction}")
            ]
        )

memory = ConversationBufferMemory(memory_key="chat_history",return_messages=True)

conversation = LLMChain(
            llm=llm,
            prompt=prompt,
            verbose=False,
            memory=memory
        )

instruction = "Hi, how are you?"
instruction_2 = "\n\nHuman:Hi, how are you?\n\nAssistant:"
conversation({"instruction":instruction_2})

I get the following error:

---------------------------------------------------------------------------
ValidationException                       Traceback (most recent call last)
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\bedrock.py:183, in BedrockBase._prepare_input_and_invoke(self, prompt, stop, run_manager, **kwargs)
    182 try:
--> 183     response = self.client.invoke_model(
    184         body=body, modelId=self.model_id, accept=accept, contentType=contentType
    185     )
    186     text = LLMInputOutputAdapter.prepare_output(provider, response)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\botocore\client.py:553, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs)
    552 # The "self" in this scope is referring to the BaseClient.
--> 553 return self._make_api_call(operation_name, kwargs)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\botocore\client.py:1009, in BaseClient._make_api_call(self, operation_name, api_params)
   1008     error_class = self.exceptions.from_code(error_code)
-> 1009     raise error_class(parsed_response, operation_name)
   1010 else:

ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: 2 schema violations found, please reformat your input and try again.

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
Cell In[100], line 58
     56 instruction = "Hi, how are you?"
     57 instruction_2 = "\n\nHuman:Hi, how are you?\n\nAssistant:"
---> 58 conversation({"instruction":instruction_2})

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:292, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    290 except BaseException as e:
    291     run_manager.on_chain_error(e)
--> 292     raise e
    293 run_manager.on_chain_end(outputs)
    294 final_outputs: Dict[str, Any] = self.prep_outputs(
    295     inputs, outputs, return_only_outputs
    296 )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\base.py:286, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    279 run_manager = callback_manager.on_chain_start(
    280     dumpd(self),
    281     inputs,
    282     name=run_name,
    283 )
    284 try:
    285     outputs = (
--> 286         self._call(inputs, run_manager=run_manager)
    287         if new_arg_supported
    288         else self._call(inputs)
    289     )
    290 except BaseException as e:
    291     run_manager.on_chain_error(e)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\llm.py:93, in LLMChain._call(self, inputs, run_manager)
     88 def _call(
     89     self,
     90     inputs: Dict[str, Any],
     91     run_manager: Optional[CallbackManagerForChainRun] = None,
     92 ) -> Dict[str, str]:
---> 93     response = self.generate([inputs], run_manager=run_manager)
     94     return self.create_outputs(response)[0]

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\llm.py:103, in LLMChain.generate(self, input_list, run_manager)
    101 """Generate LLM result from inputs."""
    102 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
--> 103 return self.llm.generate_prompt(
    104     prompts,
    105     stop,
    106     callbacks=run_manager.get_child() if run_manager else None,
    107     **self.llm_kwargs,
    108 )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\base.py:504, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    496 def generate_prompt(
    497     self,
    498     prompts: List[PromptValue],
   (...)
    501     **kwargs: Any,
    502 ) -> LLMResult:
    503     prompt_strings = [p.to_string() for p in prompts]
--> 504     return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\base.py:653, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
    638         raise ValueError(
    639             "Asked to cache, but no cache found at `langchain.cache`."
    640         )
    641     run_managers = [
    642         callback_manager.on_llm_start(
    643             dumpd(self),
   (...)
    651         )
    652     ]
--> 653     output = self._generate_helper(
    654         prompts, stop, run_managers, bool(new_arg_supported), **kwargs
    655     )
    656     return output
    657 if len(missing_prompts) > 0:

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\base.py:541, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    539     for run_manager in run_managers:
    540         run_manager.on_llm_error(e)
--> 541     raise e
    542 flattened_outputs = output.flatten()
    543 for manager, flattened_output in zip(run_managers, flattened_outputs):

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\base.py:528, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    518 def _generate_helper(
    519     self,
    520     prompts: List[str],
   (...)
    524     **kwargs: Any,
    525 ) -> LLMResult:
    526     try:
    527         output = (
--> 528             self._generate(
    529                 prompts,
    530                 stop=stop,
    531                 # TODO: support multiple run managers
    532                 run_manager=run_managers[0] if run_managers else None,
    533                 **kwargs,
    534             )
    535             if new_arg_supported
    536             else self._generate(prompts, stop=stop)
    537         )
    538     except BaseException as e:
    539         for run_manager in run_managers:

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\base.py:1048, in LLM._generate(self, prompts, stop, run_manager, **kwargs)
   1045 new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
   1046 for prompt in prompts:
   1047     text = (
-> 1048         self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
   1049         if new_arg_supported
   1050         else self._call(prompt, stop=stop, **kwargs)
   1051     )
   1052     generations.append([Generation(text=text)])
   1053 return LLMResult(generations=generations)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\bedrock.py:335, in Bedrock._call(self, prompt, stop, run_manager, **kwargs)
    332         completion += chunk.text
    333     return completion
--> 335 return self._prepare_input_and_invoke(prompt=prompt, stop=stop, **kwargs)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\llms\bedrock.py:189, in BedrockBase._prepare_input_and_invoke(self, prompt, stop, run_manager, **kwargs)
    186     text = LLMInputOutputAdapter.prepare_output(provider, response)
    188 except Exception as e:
--> 189     raise ValueError(f"Error raised by bedrock service: {e}")
    191 if stop is not None:
    192     text = enforce_stop_tokens(text, stop)

ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: 2 schema violations found, please reformat your input and try again.

I have tried different variations of this, with instruction and instruction_2 with and without stop_sequence. However, it works seemlessly with anthropic.claude-v2 as model_id with instruction_2 which I assume is the correct format. For meta.llama2-70b-chat-v1 it does not work.

JasonWeill commented 8 months ago

@emilmirzayev Have you tried using the us-west-2 region instead? I saw the error you described in us-east-1, but not in us-west-2.

VBoB13 commented 8 months ago

An AWS employee told us at our company that we should use us-west-2 to use their Bedrock service, so that's probably correct.

ks233ever commented 8 months ago

Any update on this? It looks like Llama2 is available in all regions, but I'm also getting that same error: ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: 2 schema violations found, please reformat your input and try again. when trying to run within us-east-1

deven298 commented 7 months ago

Has anyone been able to successfully use BedrockChat from Langchain? I am trying to call anthropic.claude-v2 model and keep running into the issue.

def get_llm_answer(config: Config):
        self.boto_client = boto3.client('bedrock', 'us-west-2')

        messages = []
        messages.append(HumanMessage(content=prompt))
        kwargs = {
            "model_id": config.model or "anthropic.claude-v2",
            "client": self.boto_client,
            "model_kwargs": {
                "temperature": config.temperature,
                "max_tokens_to_sample": config.max_tokens,
            },
        }
        if config.top_p:
            kwargs["model_kwargs"]["top_p"] = config.top_p

        if config.stream:
            from langchain.callbacks.streaming_stdout import \
                StreamingStdOutCallbackHandler

            callbacks = [StreamingStdOutCallbackHandler()]
            chat = BedrockChat(**kwargs, streaming=config.stream, callbacks=callbacks)
        else:
            chat = BedrockChat(**kwargs)

        return chat(messages).content
File "/Users/deven/.venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 600, in __call__
    generation = self.generate(
                 ^^^^^^^^^^^^^^
  File "/Users/deven/.venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 349, in generate
    raise e
  File "/Users/deven/.venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 339, in generate
    self._generate_with_cache(
  File "/Users/deven/.venv/lib/python3.11/site-packages/langchain/chat_models/base.py", line 492, in _generate_with_cache
    return self._generate(
           ^^^^^^^^^^^^^^^
  File "/Users/deven/.venv/lib/python3.11/site-packages/langchain/chat_models/bedrock.py", line 89, in _generate
    completion = self._prepare_input_and_invoke(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/deven/.venv/lib/python3.11/site-packages/langchain/llms/bedrock.py", line 258, in _prepare_input_and_invoke
    raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: 'Bedrock' object has no attribute 'invoke_model'

@3coins am I missing anything? Please help.

Also, this documentation from langchain is useless. It doesn't mention anything about initializing the boto3 client.

langchain==0.0.336
boto3==1.34.21
dosubot[bot] commented 7 months ago

🤖

Hello @mats16! Great to meet you! I'm Dosu, a friendly bot here to lend a hand. I'm here to assist you with bugs, answer your queries, and guide you on contributing to LangChain. While we wait for a human maintainer, feel free to ask anything you need help with. Let's make your experience with LangChain even better!

Thank you for your interest in contributing to the LangChain project, specifically in adding support for the new Amazon Bedrock service. Here are some steps you can follow:

  1. Understand the existing code: The LangChain codebase is organized into several modules, each containing different components of the system. For the Amazon Bedrock service, you might want to focus on the langchain module, specifically the llms submodule which contains components related to different language models. You can find the code for this in the langchain_community/llms/bedrock.py file.

  2. Add new features or fix bugs: Once you have a good understanding of the existing code, you can start making changes. This might involve modifying methods like _prepare_input_and_invoke or _prepare_input_and_invoke_stream to handle new types of inputs or outputs from the Bedrock service.

  3. Test your changes: After making changes, it's important to test your code to ensure it works as expected. You can do this by creating a new instance of the Bedrock class and calling its methods with different inputs.

  4. Submit your changes: Once you're confident in your changes, you can submit them to the LangChain project. This might involve creating a pull request on the project's GitHub repository.

Please note that these steps assume you have a good understanding of Python programming and the LangChain framework. If you're not familiar with these, you might need to spend some time learning about them before you can effectively contribute to the project.

I hope this helps! If you have any further questions, feel free to ask.

Sources


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

ellisonbg commented 7 months ago

@3coins can investigate further, but you may need to update boto3.

3coins commented 7 months ago

@deven298 invoke-model is present on the bedrock-runtime client, not the bedrock.

boto_client = boto3.client('bedrock-runtime', 'us-west-2')
deven298 commented 7 months ago

@3coins Thank you for your help! We are releasing the AWS Bedrock support in Embedchain soon!

vishal91-hub commented 3 months ago

this workaround worked for me:

session = boto3.Session(profile_name='default')

BEDROCK_CLIENT = session.client("bedrock-runtime", 'us-east-1')
demo_llm = Bedrock( model_id="meta.llama2-70b-chat-v1",
    model_kwargs={
        "temperature":0.5,
        "top_p":0.9,
        "max_gen_len":512 },
        client=BEDROCK_CLIENT)
return demo_llm.predict(input)