explodinggradients / ragas

Supercharge Your LLM Application Evaluations 🚀
https://docs.ragas.io
Apache License 2.0

[R-304] support more providers for `is_finished()` logic #1548

Open · jjmachan opened this issue 1 week ago

jjmachan commented 1 week ago

Suggest improvements to the parser that figures out the model's finish reason, so that the default parser can be improved.

You can also define your own custom parser and use it instead.

R-304
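
For reference, a custom parser could look roughly like the following. This is a minimal sketch: `my_is_finished_parser` is a hypothetical name, and the `is_finished_parser` keyword on `LangchainLLMWrapper` is assumed from the custom-parser mechanism described above, so check the current API before relying on it.

```python
# Minimal sketch of a custom finish-reason parser for ragas' LangchainLLMWrapper.
# The `is_finished_parser` keyword is an assumption based on this issue.
from langchain_core.outputs import LLMResult

from ragas.llms import LangchainLLMWrapper


def my_is_finished_parser(response: LLMResult) -> bool:
    # Treat the output as finished only if every candidate generation
    # reports an acceptable finish reason.
    for generation_list in response.generations:
        for gen in generation_list:
            finish_reason = (gen.generation_info or {}).get("finish_reason")
            if finish_reason != "stop":  # extend this check per provider
                return False
    return True


# Hypothetical wiring; `chat_model` is any LangChain chat model instance.
# evaluator_llm = LangchainLLMWrapper(chat_model, is_finished_parser=my_is_finished_parser)
```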

ahgraber commented 1 week ago

Llama models hosted on TogetherAI return {"finish_reason": "eos"}:

{
  "content": "The letter 'r' appears twice in the word 'strawberry'.",
  "additional_kwargs": {
    "refusal": null
  },
  "response_metadata": {
    "token_usage": {
      "completion_tokens": 16,
      "prompt_tokens": 54,
      "total_tokens": 70,
      "completion_tokens_details": null,
      "prompt_tokens_details": null
    },
    "model_name": "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
    "system_fingerprint": null,
    "finish_reason": "eos",
    "logprobs": null
  },
  "type": "ai",
  "name": null,
  "id": "run-16ae060b-4bc2-4624-b07e-dce650033d6c-0",
  "example": false,
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 54,
    "output_tokens": 16,
    "total_tokens": 70,
    "input_token_details": {},
    "output_token_details": {}
  }
}
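
For illustration, a custom parser that also accepts TogetherAI's "eos" finish reason could look like the sketch below. It checks both `generation_info` and the message's `response_metadata`, since providers surface the field in different places; the function name is hypothetical.

```python
# Sketch of a parser that treats TogetherAI's "eos" as a valid finish reason.
from langchain_core.outputs import ChatGeneration, LLMResult


def together_is_finished_parser(response: LLMResult) -> bool:
    # TogetherAI-hosted Llama models report "eos" instead of "stop".
    ok_reasons = {"stop", "eos"}
    for generation_list in response.generations:
        for gen in generation_list:
            reason = (gen.generation_info or {}).get("finish_reason")
            # Chat models may put the finish reason on the message's
            # response_metadata rather than in generation_info.
            if reason is None and isinstance(gen, ChatGeneration):
                reason = gen.message.response_metadata.get("finish_reason")
            if reason not in ok_reasons:
                return False
    return True
```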
jjmachan commented 2 days ago

Hey @ahgraber, thanks for reporting this as always 🙂 I'll add it to the default parser.