Closed: robuno closed this issue 3 months ago
Hey @robuno, I am not sure the Zephyr model is good enough to produce valid JSON outputs.
Hey @shahules786, thank you for your response. I chose the Zephyr model since it is shown in the Ragas docs, and one issue reports that it works.
I also verified yesterday that zephyr-7b-alpha works well:
Today, I tried a few other models, "google/gemma-2b" and "HuggingFaceH4/zephyr-7b-beta", but they gave the same JSON format error as well. I'm actually interested in finding a way to use small Hugging Face models in the evaluation process, but I could not find suitable models for my purpose.
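For what it's worth, one way to sanity-check a candidate model before wiring it into Ragas is to try parsing its raw completion as JSON first. A minimal sketch (the helper name and the sample completion are my own, not Ragas internals):

```python
import json

def first_json_object(completion: str):
    """Try to extract and parse the first {...} object from a raw model completion.

    Small models often wrap JSON in prose or markdown fences, which makes
    a strict json.loads() on the whole string fail.
    """
    start = completion.find("{")
    end = completion.rfind("}")
    if start == -1 or end == -1 or end < start:
        return None
    try:
        return json.loads(completion[start : end + 1])
    except json.JSONDecodeError:
        return None

# A completion with chatter around the JSON still parses:
raw = 'Sure! Here is the verdict:\n{"verdict": 1, "reason": "supported"}'
print(first_json_object(raw))  # {'verdict': 1, 'reason': 'supported'}
```

If this returns `None` for most completions of a given model, that model will likely trip the same invalid-JSON error inside the evaluator.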
Thank you for your interest; I'm open to ideas.
Same issue. Any updates on this bug?
[ ] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug When I measure, with the Ragas evaluator, the performance of the embedding model (BAAI/bge-large-en) and the LLM (zephyr-7b-alpha) that I added via Hugging Face, I get the invalid response and verdict error below. The interesting thing is that these 3 metrics were working 16-17 hours ago with version 0.1.4.
Ragas version: 0.1.5
Python version: 3.10.12
Code to Reproduce
My evaluate function call:
Embedding:
LLM initialization:
Error trace
Expected behavior I expected it to work without any response-format error. It did work once the way I wanted.
Additional context This is what my dataset looks like:
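The column layout, with made-up placeholder values (the real rows come from the user's own data; in Ragas 0.1.x the dict is typically wrapped with `datasets.Dataset.from_dict(...)`):

```python
# Hypothetical rows showing the expected column layout
data = {
    "question": ["What does BAAI/bge-large-en produce?"],
    "answer": ["It produces dense sentence embeddings."],
    "contexts": [["bge-large-en is an English text-embedding model."]],
    "ground_truth": ["Dense sentence embeddings."],
}

# Every column must have the same number of rows,
# and `contexts` holds a list of strings per row.
assert len({len(v) for v in data.values()}) == 1
assert isinstance(data["contexts"][0], list)
```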
Langchain Version: 0.1.13