Closed jaymon0703 closed 3 months ago
Can you share the OS info too, @jaymon0703? Are you facing this issue every time?
Thank you @shahules786
Error manifests every time.
{'platform': 'Windows', 'platform-release': '10', 'platform-version': '10.0.19045', 'architecture': 'AMD64', 'processor': 'Intel64 Family 6 Model 142 Stepping 12, GenuineIntel', 'ram': '16 GB'}
I am also facing the same issue, and I am using GPT models with Python 3.10 on Colab.
If anyone has VertexAI examples for testset generation and evaluation, it would be great to see them. I am currently getting errors on both, with the above error for testset generation.
I am also getting this same error. Any resolution? @jaymon0703
While using GPT models, pinning ragas==0.1.4 and langchain==0.1.5 worked for me.
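To double-check that the pins reported in this thread are what is actually installed in your environment, a small standard-library-only helper can be used (the `check_pins` name is just for illustration; the version numbers come from the comment above):

```python
import importlib.metadata as md

def check_pins(pins):
    """Return {package: (installed_version_or_None, matches_pin)}."""
    report = {}
    for pkg, want in pins.items():
        try:
            have = md.version(pkg)
        except md.PackageNotFoundError:
            have = None  # package not installed at all
        report[pkg] = (have, have == want)
    return report

# Versions reported to work in this thread
print(check_pins({"ragas": "0.1.4", "langchain": "0.1.5"}))
```

If either entry shows `(None, False)` or a mismatched version, reinstall with `pip install "ragas==0.1.4" "langchain==0.1.5"` before retrying.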
Sorry, what is the "GPT" you are referring to, @HRUSHI1212?
While generating the synthetic dataset I am using GPT-4 and gpt-3.5-turbo (the default one), with the versions ragas==0.1.4 and langchain==0.1.5.
Hey @shahules786 is it possible to update VertexAI docs with working code for generating testset? Thanks!
Hi there,
I am also facing the same error: "The runner thread which was running the jobs raised an exception."
When I use two different models for generator_llm and critic_llm and set test_size=2, the code works. However, in other cases, such as using the same model for both generator_llm and critic_llm, or setting test_size to 3, 5, or 10, the code breaks with this error.
This was addressed with #1093. Do check out the latest versions; it should be fixed.
[X] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug
test_generator.generate_with_langchain_docs()
raises a RuntimeError. Please note I am using VertexAI models.
RuntimeError: Task <Task pending name='Task-13277' coro=<BaseChatModel._agenerate_with_cache() running at [c:\Users\MASKED\.venv\lib\site-packages\langchain_core\language_models\chat_models.py:617](file:///C:/Users/MASKED/.venv/lib/site-packages/langchain_core/language_models/chat_models.py:617)> cb=[gather.<locals>._done_callback() at [C:\Users\MASKED\AppData\Local\Programs\Python\Python310\lib\asyncio\tasks.py:718](file:///C:/Users/MASKED/AppData/Local/Programs/Python/Python310/lib/asyncio/tasks.py:718)]> got Future <Task pending name='Task-13278' coro=<UnaryUnaryCall._invoke() running at [c:\Users\MASKED\.venv\lib\site-packages\grpc\aio\_call.py:566](file:///C:/Users/MASKED/.venv/lib/site-packages/grpc/aio/_call.py:566)>> attached to a different loop
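For context, asyncio raises this "attached to a different loop" error whenever a task running on one event loop awaits a Future that was created on another loop; in this traceback the gRPC call appears to be bound to a different loop than the ragas runner thread's. A minimal standalone reproduction of the same failure mode:

```python
import asyncio

# A Future bound to one event loop...
loop_a = asyncio.new_event_loop()
fut = loop_a.create_future()

async def await_foreign_future():
    # ...awaited from a task running on a *different* loop.
    await fut

try:
    # asyncio.run() creates its own fresh loop, not loop_a
    asyncio.run(await_foreign_future())
except RuntimeError as err:
    print(err)  # message ends with "attached to a different loop"
finally:
    loop_a.close()
```

This is why fixes for this class of bug typically ensure all coroutines and futures are created on the same running loop.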
Ragas version: 0.1.3
Python version: 3.10.0
Code to Reproduce