JPonsa closed this issue 1 month ago
hey @JPonsa sorry about this
but what models are you using? By default we use OpenAI LLMs and embeddings; it seems like you have some transformers models
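to override the defaults you can pass your own models in. A minimal sketch, assuming the ragas 0.1.x `TestsetGenerator.from_langchain` API (the model IDs here are placeholders):

```python
from langchain_community.llms import HuggingFacePipeline
from langchain_community.embeddings import HuggingFaceEmbeddings
from ragas.testset.generator import TestsetGenerator

# A local transformers LLM instead of the default OpenAI one.
llm = HuggingFacePipeline.from_model_id(
    model_id="mistralai/Mistral-7B-Instruct-v0.2",  # placeholder model
    task="text-generation",
    pipeline_kwargs={"max_new_tokens": 512},
)
# Local embeddings instead of the default OpenAI embeddings.
embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-small-en-v1.5")

generator = TestsetGenerator.from_langchain(
    generator_llm=llm,
    critic_llm=llm,  # a stronger critic model is usually a better choice
    embeddings=embeddings,
)
```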
hi @jjmachan, using vLLM. I don't remember the exact LLM now, either Mistral-7B or Llama-2-7B
did you get this fixed @JPonsa?
but I'll test it with vLLM shortly and write a test case for that too
@jjmachan, not sure, probably; I can't test it due to https://github.com/explodinggradients/ragas/issues/871
ohh understood, I'll prioritize that one first then, get that sorted for u 🙂
I got the error below while running generator.generate_with_langchain_docs. How do I set it up to use GPUs?
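For reference, a minimal sketch of running the generator on GPU through vLLM, assuming LangChain's `VLLM` wrapper and the ragas 0.1.x API (the model ID and document loader are placeholders; vLLM places the model on GPU by default, and `tensor_parallel_size` sets how many GPUs it shards across):

```python
from langchain_community.document_loaders import TextLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.llms import VLLM
from ragas.testset.generator import TestsetGenerator

# vLLM loads the model onto the GPU; set tensor_parallel_size > 1
# to shard it across multiple GPUs.
llm = VLLM(
    model="meta-llama/Llama-2-7b-chat-hf",  # placeholder model
    max_new_tokens=512,
    tensor_parallel_size=1,
)
# Put the embedding model on the GPU as well.
embeddings = HuggingFaceEmbeddings(
    model_name="BAAI/bge-small-en-v1.5",
    model_kwargs={"device": "cuda"},
)

generator = TestsetGenerator.from_langchain(
    generator_llm=llm,
    critic_llm=llm,
    embeddings=embeddings,
)
documents = TextLoader("data.txt").load()  # placeholder corpus
testset = generator.generate_with_langchain_docs(documents, test_size=10)
```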