Open yellowcap opened 4 days ago
Some insight on getting GeoJSON from tools: it's a lot of tokens for LLMs, but it's hard to shield the LLM from tool output.
It's not possible to update state from within a tool. We need a way to pass larger amounts of data from tools to the frontend without giving it to the LLM for interpretation. Ideally we would pass part of the data as-is to the frontend, and part to the LLM for interpretation.
We can inject the state into a tool using `InjectedState`, but not update the state (or at least doing so is highly discouraged).
For instance, pass disturbance statistics to the LLM, but send the vectorized disturbances directly to the frontend. I have not figured out how to do that, and the vectorized GeoJSON generates quite a lot of tokens.
Related discussions
https://github.com/langchain-ai/langgraph/discussions/1616
https://github.com/langchain-ai/langgraph/discussions/337
OK, this was a bit hard to figure out, but there is actually a simple solution! 🫏
The `@tool` decorator has an additional argument: `response_format` (`Literal['content', 'content_and_artifact']`). If this is set to `content_and_artifact`, any kind of data can be passed downstream as an artifact, and it will not be passed to the model. We can leverage this to feed data to the user.
An agent to feed the first alerts workflow. The agent needs to
Based on LCL layers on GEE.