MasciocchiReply opened this issue 1 month ago
I noticed that llama3 support was added in 0.1.4 via the referenced PR #32. I have been experimenting with ReAct agents, and I get the error "Stop sequence key name for meta is not supported."
```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_aws import ChatBedrock

tools = [TavilySearchResults(max_results=1)]

# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/react")

# Choose the LLM to use
sonnet_id = "anthropic.claude-3-sonnet-20240229-v1:0"
llama3_70_id = "meta.llama3-70b-instruct-v1:0"
llm = ChatBedrock(model_id=llama3_70_id)

# Construct the ReAct agent
agent = create_react_agent(llm, tools, prompt)

# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "what is LangChain?"})
```
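(For context on where the error comes from: create_react_agent binds a stop word to the model, roughly `llm.bind(stop=["\nObservation"])`, and ChatBedrock then has to map that stop list onto a provider-specific request key. That lookup is what fails for the meta provider.)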
I can't say I'm an expert in Poetry, but I tried installing this PR with `poetry add git+https://github.com/MasciocchiReply/langchain-aws.git`. I got an error with that; otherwise I would have tested out the PR.
Hi @ToyVo, what kind of error did you get? Can you paste the error log?
The error I get upon attempting to add the git repository as a dependency is:

```
Unable to determine package info for path: /Users/CollinDie/FQ/artemis-app-demo/server/.venv/src/langchain-aws

Command ['/var/folders/c6/p9t0vp0x5119lkclkkg79pqc0000gq/T/tmpoupzvubl/.venv/bin/python', '-I', '-W', 'ignore', '-c', "import build\nimport build.env\nimport pyproject_hooks\n\nsource = '/Users/CollinDie/FQ/artemis-app-demo/server/.venv/src/langchain-aws'\ndest = '/var/folders/c6/p9t0vp0x5119lkclkkg79pqc0000gq/T/tmpoupzvubl/dist'\n\nwith build.env.DefaultIsolatedEnv() as env:\n builder = build.ProjectBuilder.from_isolated_env(\n env, source, runner=pyproject_hooks.quiet_subprocess_runner\n )\n env.install(builder.build_system_requires)\n env.install(builder.get_requires_for_build('wheel'))\n builder.metadata_path(dest)\n"] errored with the following return code 1

Error output:
Traceback (most recent call last):
  File "<string>", line 9, in <module>
  File "/var/folders/c6/p9t0vp0x5119lkclkkg79pqc0000gq/T/tmpoupzvubl/.venv/lib/python3.11/site-packages/build/__init__.py", line 199, in from_isolated_env
    return cls(
           ^^^^
  File "/var/folders/c6/p9t0vp0x5119lkclkkg79pqc0000gq/T/tmpoupzvubl/.venv/lib/python3.11/site-packages/build/__init__.py", line 174, in __init__
    _validate_source_directory(source_dir)
  File "/var/folders/c6/p9t0vp0x5119lkclkkg79pqc0000gq/T/tmpoupzvubl/.venv/lib/python3.11/site-packages/build/__init__.py", line 77, in _validate_source_directory
    raise BuildException(msg)
build._exceptions.BuildException: Source /Users/CollinDie/FQ/artemis-app-demo/server/.venv/src/langchain-aws does not appear to be a Python project: no pyproject.toml or setup.py

No fallback setup.py file was found to generate egg_info.
```
I have since looked at the GitHub Actions workflows and seen how a release is built; I have not yet tried building the branch that way: https://github.com/langchain-ai/langchain-aws/blob/main/.github/workflows/_release.yml#L33-L54
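In case someone else trips over the same build error: the traceback says there is no pyproject.toml at the repository root, and this repo keeps the actual package under the libs/aws subdirectory, so Poetry has to be pointed there. Assuming a Poetry version that supports the subdirectory fragment for git dependencies, something like `poetry add "git+https://github.com/MasciocchiReply/langchain-aws.git#subdirectory=libs/aws"` should let it find the package metadata.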
I have now built this branch and manually moved the files into my .venv, so it's mostly working. I'm not sure what's up with the output parser, but the turns work this time.
```python
{'input': 'what is LangChain?',
 'output': 'Agent stopped due to iteration limit or time limit.'}
```
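If it helps anyone, both symptoms can be tuned on the executor itself; a minimal sketch (the values are just examples, but both are standard AgentExecutor parameters):

```python
# Sketch: raise the turn budget and let the executor retry on parser failures.
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    max_iterations=25,           # default is 15; raise it if runs get cut off early
    handle_parsing_errors=True,  # feed parse errors back to the LLM instead of raising
)
```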
Hi @ToyVo, I had the same error, "Parsing LLM output produced both a final answer and a parse-able action". In my case it was due to the output format required by the prompt. I was using your same format (Action, Action Input, Observation, ...). It works fine with GPT-4 but not with Llama 3, because Llama 3 creates a loop (probably due to the repetition of the entire output format in the Final Answer). To resolve the error I changed the prompt: the new prompt says to use the "Action, Action Input, Observation, ..." format only when the LLM needs to use a tool, and otherwise to respond with just "Final Answer: ...".
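Roughly, the change looks like this; the wording below is a sketch of the idea, not my exact prompt:

```python
from langchain_core.prompts import PromptTemplate

# Sketch of a ReAct prompt that only demands the tool-calling format
# when a tool is actually needed; otherwise the model answers directly.
REACT_PROMPT = PromptTemplate.from_template("""Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format ONLY when you need to use a tool:

Thought: think about what to do next
Action: the action to take, one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action

If you do not need a tool, do NOT repeat the format above; just respond with:

Final Answer: the final answer to the original question

Begin!

Question: {input}
Thought: {agent_scratchpad}""")
```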
I was trying to run Llama 3 70B on Bedrock. I imported the @fedor-intercom fixes, but I ran into errors such as "ValueError: Stop sequence key name for meta is not supported." and "required key [prompt] not found#: extraneous key [stopSequences] is not permitted".
So I inspected the bedrock.py file and found that the provider "meta" was not handled correctly. I modified the handling of the "meta" provider, and with these changes everything seems to work fine.
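I don't have the diff in front of me, so treat this as an illustrative sketch of the shape of the change rather than the actual patch: bedrock.py keeps a map from provider to the stop-sequence key name its request body accepts, and since Llama request bodies reject a stop key outright (the "extraneous key [stopSequences] is not permitted" error above), the meta case has to be special-cased instead of looked up:

```python
# Illustrative only: the map mirrors what I recall from bedrock.py,
# and _apply_stop is a hypothetical helper, not the real function name.
provider_stop_sequence_key_name_map = {
    "anthropic": "stop_sequences",
    "amazon": "stopSequences",
    "ai21": "stop_sequences",
    "cohere": "stop_sequences",
    "mistral": "stop",
}

def _apply_stop(provider: str, body: dict, stop: list) -> dict:
    if not stop:
        return body
    if provider == "meta":
        # Llama bodies accept only prompt/temperature/top_p/max_gen_len,
        # so drop the stop words rather than forward an unsupported key.
        return body
    key = provider_stop_sequence_key_name_map.get(provider)
    if key is None:
        raise ValueError(f"Stop sequence key name for {provider} is not supported.")
    body[key] = stop
    return body
```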
I hope this is useful.