hurenjun opened this issue 3 months ago
@hurenjun we would love to explore this further too! What do you think is the best way forward?
Would the models be open-sourced, by the way? I think that would be really helpful for the Chinese user base, because we had a few models that were not supported or working as expected.
Hi @jjmachan, following the existing practices of ragas, we could offer two methods for integrating our models:
In addition, we will continue to iterate on our models and will open-source them as part of our contribution to the community. Currently, we have a paper under double-blind review. I will figure out the best way to do this without violating the anonymity requirement.
@jjmachan Hi, is there any feedback on the above proposal?
@hurenjun I think you can check https://docs.ragas.io/en/stable/howtos/customisations/bring-your-own-llm-or-embs.html. Also, the Aliyun API is compatible with the OpenAI API, so you could just replace the base_url.
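For illustration, here is a minimal sketch of that approach, assuming the `langchain-openai`, `datasets`, and `ragas` packages are installed and a DashScope API key is available; the compatible-mode base_url, the model names, and the dataset columns below are assumptions for the sake of the example, not confirmed details of any specific ragas release:

```python
import os

from datasets import Dataset
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from ragas import evaluate
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import faithfulness

# Assumed OpenAI-compatible endpoint for Aliyun DashScope; replace with your own.
ALIYUN_BASE_URL = "https://dashscope.aliyuncs.com/compatible-mode/v1"

# Point the OpenAI-compatible client at the Aliyun endpoint instead of api.openai.com.
evaluator_llm = LangchainLLMWrapper(
    ChatOpenAI(
        model="qwen-plus",  # placeholder: any chat model served behind the endpoint
        api_key=os.environ["DASHSCOPE_API_KEY"],
        base_url=ALIYUN_BASE_URL,
    )
)

# Embeddings can be swapped the same way if the endpoint also serves an embedding model.
evaluator_embeddings = LangchainEmbeddingsWrapper(
    OpenAIEmbeddings(
        model="text-embedding-v3",  # placeholder embedding model name
        api_key=os.environ["DASHSCOPE_API_KEY"],
        base_url=ALIYUN_BASE_URL,
    )
)

# Tiny toy dataset; column names follow the docs version linked above.
dataset = Dataset.from_dict(
    {
        "question": ["What is ragas?"],
        "answer": ["Ragas is an open-source framework for evaluating RAG pipelines."],
        "contexts": [["Ragas is an open-source RAG evaluation framework."]],
    }
)

results = evaluate(
    dataset,
    metrics=[faithfulness],
    llm=evaluator_llm,
    embeddings=evaluator_embeddings,
)
print(results)
```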
@landhu Thank you for your comment. That should work. I am also thinking that ragas could maintain support for some third-party LLMs for evaluation and other purposes, like those on Hugging Face, in addition to the OpenAI models.
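As a rough sketch of how a Hugging Face-hosted judge model could already be plugged in today via LangChain (the model id and generation settings are placeholders, and the `langchain-huggingface`, `transformers`, and `ragas` packages are assumed to be installed):

```python
from langchain_huggingface import ChatHuggingFace, HuggingFacePipeline
from ragas.llms import LangchainLLMWrapper

# Load a Hugging Face chat model as a local text-generation pipeline.
hf_pipeline = HuggingFacePipeline.from_model_id(
    model_id="Qwen/Qwen2.5-7B-Instruct",  # placeholder judge model
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 512},
)

# Wrap it as a chat model, then as a ragas evaluation LLM.
evaluator_llm = LangchainLLMWrapper(ChatHuggingFace(llm=hf_pipeline))

# evaluator_llm can now be passed as `llm=` to ragas.evaluate or to individual metrics.
```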
hey @hurenjun sorry about the delay, but we are working on #1237, which will add tabs for popular LLM providers to the getting started page. This should make it easier to use.
In the meantime, we could also write a dedicated notebook in the how-to guides if that would help users - what do you think?
@jjmachan That's gonna be very helpful, especially for developers in regions without direct access to OpenAI models. Looking forward to the new feature.
will keep you posted 🙂
Hi there,
Thank you for bringing the elegant RAG Assessment framework to the community.
I am an AI engineer from Alibaba Cloud, and our team has been fine-tuning LLM-as-a-Judge models based on the Qwen foundation LLMs. Through extensive optimization, our latest model has achieved GPT-4-level alignment with human preferences (in fact, it performs approximately 5% better on our benchmarks), and it is particularly optimized for Chinese language support.
We are very interested in integrating our model as an evaluation LLM within RAGAS. Additionally, we would be happy to support the use of LLMs hosted on Alibaba Cloud's LLM serving platform, EAS, as an extension to the current support for AWS, Azure, and Google Vertex AI.
Please let me know if these contributions could be included in RAGAS.
I look forward to your response.
Best regards, Renjun