n8n-io / n8n

Free and source-available fair-code licensed workflow automation tool. Easily automate tasks across different services.
https://n8n.io

AWS bedrock (titan model) #8214

Closed: johnongit closed this issue 4 months ago

johnongit commented 10 months ago

Describe the bug: The Bedrock model node in n8n does not seem to submit user prompts properly when interfacing with the AWS Bedrock Chat model.

To Reproduce: Create a simple workflow with a basic LLM chain based on the AWS Bedrock Chat model:

  1. Use amazon.titan-text-express-v1 as the model.
  2. Submit a prompt in the chat node.

The Titan express model returns a malformed output: "bonjour, comment puis-je vous aider?\n\nHuman: comment puis-je trouver un travail\nAssistant: il existe plusieurs moyens de trouver un travail. Vous"

This issue arises because n8n uses "Human:" and "Assistant:" prefixes, which differ from the ones documented by AWS ("User:" and "Bot:"). Reference: Amazon Titan Text Prompt Engineering Guidelines (see page 5).
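
For illustration only, here is a minimal TypeScript sketch (the ChatMessage type and toTitanPrompt helper are hypothetical, not n8n or LangChain code) of the prefix format the Titan guidelines describe:

```typescript
// Hypothetical helper: serialize a chat history for amazon.titan-text-express-v1
// using the "User:" / "Bot:" prefixes from the AWS Titan prompt engineering
// guidelines, instead of the "Human:" / "Assistant:" prefixes seen above.
type ChatMessage = { role: 'user' | 'assistant'; content: string };

function toTitanPrompt(messages: ChatMessage[]): string {
  const turns = messages.map((m) =>
    m.role === 'user' ? `User: ${m.content}` : `Bot: ${m.content}`
  );
  // End with the bot prefix so the model knows which turn to complete.
  return `${turns.join('\n')}\nBot:`;
}

// Example:
// toTitanPrompt([{ role: 'user', content: 'comment puis-je trouver un travail' }])
// => "User: comment puis-je trouver un travail\nBot:"
```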

Expected behavior: n8n should use the prefix format documented by AWS to ensure compatibility and proper functioning.

Environment (please complete the following information):

Joffcom commented 10 months ago

Hey @johnongit,

Thanks for the report, we are going to take a look into this soon. It could be that something has changed in Langchain and we need to do a quick update to allow for it.

pmoralesp commented 10 months ago

Just upgraded a local n8n to the latest Langchainjs (v0.0.214) and it works fine with the AWS Bedrock Titan model.

johnongit commented 10 months ago

Hi @pmoralesp, I would love to see how you upgraded langchainjs.

pmoralesp commented 10 months ago

> Hi @pmoralesp, I would love to see how you upgraded langchainjs.

Hi @johnongit. These were the steps on my machine:

  1. Clone this repo
  2. Go to packages/@n8n/nodes-langchain/
  3. Run npm install langchain@0.0.214
  4. Go to root folder
  5. Run npm run build && npm run dev

I hope it is useful.
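
For reference, a shell sketch of those steps (the clone URL is an assumption; the commands themselves are as listed above):

```bash
# Sketch of the upgrade steps above; repo URL assumed to be the main n8n repository
git clone https://github.com/n8n-io/n8n.git
cd n8n/packages/@n8n/nodes-langchain
npm install langchain@0.0.214
# back to the repository root
cd ../../..
npm run build && npm run dev
```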

johnongit commented 10 months ago

Thank you @pmoralesp. I'm encountering several issues while trying to implement your steps in my setup.

Here's what I did, based on Docker (collected in the shell sketch after the steps):

  1. Clone this repo
  2. Go to packages/@n8n/nodes-langchain/
  3. Run npm install langchain@0.0.214
  4. Built the Docker image: docker build -t n8n-custom:patch-langchain -f docker/images/n8n-custom/Dockerfile .
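
As a shell sketch (the clone URL is an assumption; the docker build is presumably run from the repo root, matching the relative Dockerfile path):

```bash
# Sketch of the Docker-based attempt above; repo URL assumed, docker build run from the repo root
git clone https://github.com/n8n-io/n8n.git
cd n8n/packages/@n8n/nodes-langchain
npm install langchain@0.0.214
cd ../../..
docker build -t n8n-custom:patch-langchain -f docker/images/n8n-custom/Dockerfile .
```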

I'm not really sure that langchain@0.0.214 fixes the issue. I've tried a really simple workflow (screenshot: "Capture d’écran du 2024-01-07 10-45-40").

The Bedrock logs returned the following:

    {
      "modelId": "amazon.titan-text-express-v1",
      "input": {
        "inputContentType": "application/json",
        "inputBodyJson": {
          "inputText": "\n\nHuman: Repond en français.\nQ: Salut, tu connais bitcoin ?\nA: \n\nAssistant: ",
          "textGenerationConfig": {
            "maxTokenCount": 50,
            "temperature": 0
          }
        },
        "inputTokenCount": 32
      }
    }

The model returns an inconsistent answer due to the prefixes 'Human:' and 'Assistant:'.

When I try with the AWS playground, I get this: User: Salut, tu connais bitcoin ?\n\nBot:

Joffcom commented 7 months ago

We have recently updated Langchain again to a later release. Is this issue still occurring?