Open nachollorca opened 1 month ago
Taking into account the tendency of other LLM providers/libraries to converge on the OpenAI API, let's go with the OpenAI naming scheme.
Alright, I'll go for OpenAI's convention :)
I've been having a look at all the generators, and all the keywords seem to follow the OpenAI standard.
Hi @nachollorca, I am the product manager of Haystack and I like to reach out to users to better understand their use cases and see how we can offer better support. Would you have 15 minutes in the next few weeks for a quick chat? https://calendly.com/maria-mestre-ugu/haystack-catch-up
@CarlosFerLo
> I've been having a look at all the generators, and all the keywords seem to follow the OpenAI standard.
At least Cohere, Anthropic and AmazonBedrock use 'meta': {'model': 'claude-2.1', 'index': 0, 'finish_reason': 'end_turn', 'usage': {'input_tokens': 18, 'output_tokens': 58}} instead of prompt_tokens, completion_tokens.
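To make the mismatch concrete: the same usage information ends up under different keys depending on the generator, so any code reading token counts has to check both spellings. The dicts below are illustrative (the Anthropic one is taken from the example above; the OpenAI-style one is a hypothetical counterpart):

```python
# Illustrative metadata from two providers; only the key names differ.
openai_meta = {"model": "gpt-4", "finish_reason": "stop",
               "usage": {"prompt_tokens": 18, "completion_tokens": 58}}
anthropic_meta = {"model": "claude-2.1", "finish_reason": "end_turn",
                  "usage": {"input_tokens": 18, "output_tokens": 58}}

def input_tokens(meta: dict) -> int:
    """Read the input-token count regardless of the provider's key name."""
    usage = meta.get("usage", {})
    return usage.get("prompt_tokens", usage.get("input_tokens", 0))

print(input_tokens(openai_meta))     # 18
print(input_tokens(anthropic_meta))  # 18
```

The `input_tokens` helper is not part of Haystack; it just shows the kind of branching each consumer currently has to write.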
@nachollorca Hey, these generators are in haystack-extensions, I believe; maybe we should move this issue there.
Hey @nachollorca @CarlosFerLo - most of the generators are in https://github.com/deepset-ai/haystack-core-integrations/ GitHub project
Is your feature request related to a problem? Please describe. The dictionaries returned by generators have matching keys to an extent, but some are using different notations for the same concept, mainly within the metadata / usage keys. For example:
Describe the solution you'd like I would like generators to output the same schema and keys to the extent that it is possible, i.e. for matching concepts. In particular, I would prefer to use input_tokens/output_tokens and stop_reason.
Describe alternatives you've considered So far, in apps where we swap generators depending on the use case, the alternative is: