cpursley closed this 3 months ago
@cpursley yes, the data extraction chain needs to be fixed up. I'm working through a big change right now that will make the data extraction process easier particularly for models that don't support tool calls (like Bumblebee self-hosted models).
Because every model differs so much in what it can do and in the prompts needed for a successful extraction, I'm less and less convinced that a standard one-size-fits-all chain can work well. It may end up being more of a "here's a guide and a template for building your own data extraction process."
I'm hoping to finish up the code changes now and will write more about this then.
It may also be relevant that I'm using Llama, but via an OpenAI-compatible API, FWIW.
Sounds good, let me know how I can help.
This may still be an issue with the new version. I'm not sure. You'll have to give it a try and let me know what's happening.
This seems to have resolved itself with the new version. Closing.
A high percentage of responses from the data extraction chain have been returning a list of tool_calls instead of a single result in "info". This PR maps the tool calls into "info" when there are multiple. I'm not sure how to set up a test case for that, but I'm open to ideas.
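A minimal sketch of the kind of mapping described above, in plain Elixir. The response shape, module name, and field names here are hypothetical illustrations, not the library's actual API; the idea is just to normalize both the single-result and multi-tool-call cases into one flat "info" list:

```elixir
defmodule InfoMapper do
  # Hypothetical response shapes:
  #   %{"tool_calls" => [%{"arguments" => %{...}}, ...]}  (multiple tool calls)
  #   %{"info" => %{...}}                                 (single extraction result)

  # Multiple tool calls: collect each call's extracted arguments into a list.
  def to_info(%{"tool_calls" => calls}) when is_list(calls) do
    Enum.map(calls, & &1["arguments"])
  end

  # Single result: wrap it so callers always receive a list.
  def to_info(%{"info" => info}), do: List.wrap(info)
end
```

For a test case, one option is a canned fixture: a hard-coded response map containing two or three tool_calls, asserting that the mapper returns the corresponding list of argument maps without needing a live model.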