run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Feature Request]: Support for Asynchronous Processing in llms.watsonx #16124

Open mdielacher opened 1 week ago

mdielacher commented 1 week ago

Feature Description

Currently, llms.watsonx does not support asynchronous processing, which is crucial for the efficient operation of evaluators such as the FaithfulnessEvaluator and RelevancyEvaluator. In comparison, llms.ollama and llms.openai do support asynchronous processing, enabling smoother and faster evaluation within pipelines.
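To illustrate why async support matters for evaluation pipelines, here is a minimal, self-contained sketch (the `StubLLM` class and its latency are hypothetical stand-ins, not the actual llms.watsonx client): with an `acomplete` coroutine available, many evaluation calls can run concurrently via `asyncio.gather` instead of one after another.

```python
import asyncio
import time

class StubLLM:
    """Hypothetical stand-in for an async-capable LLM; sleeps to simulate network latency."""
    async def acomplete(self, prompt: str) -> str:
        await asyncio.sleep(0.1)  # simulated I/O wait per call
        return f"answer to: {prompt}"

async def main():
    llm = StubLLM()
    prompts = [f"q{i}" for i in range(5)]
    start = time.perf_counter()
    # All five calls overlap on the event loop instead of running sequentially.
    answers = await asyncio.gather(*(llm.acomplete(p) for p in prompts))
    elapsed = time.perf_counter() - start
    return answers, elapsed

answers, elapsed = asyncio.run(main())
# Five overlapping 0.1 s calls finish in roughly 0.1 s total, not 0.5 s.
print(f"{len(answers)} answers in {elapsed:.2f}s")
```

Without async methods on the LLM, each evaluator call blocks the loop and the pipeline degrades to sequential execution.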

Reason

The asynchronous entry points that these evaluators rely on are currently marked as NotImplemented for llms.watsonx.
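As a rough sketch of what a first step could look like (the `WatsonxLike` class and its method bodies are illustrative assumptions, not the actual llms.watsonx implementation), a blocking sync call can be offloaded to a worker thread with `asyncio.to_thread` so the event loop stays responsive, even before a natively async watsonx client is wired in:

```python
import asyncio

class WatsonxLike:
    """Hypothetical sketch: mirrors the sync/async method-pair pattern of
    LlamaIndex LLM classes, but is not the real llms.watsonx code."""

    def complete(self, prompt: str) -> str:
        # Placeholder for the existing blocking call to the watsonx API.
        return f"completion for: {prompt}"

    async def acomplete(self, prompt: str) -> str:
        # Minimal async shim: run the blocking sync call in a thread so it
        # does not block the event loop while waiting on network I/O.
        return await asyncio.to_thread(self.complete, prompt)

print(asyncio.run(WatsonxLike().acomplete("hello")))
# prints: completion for: hello
```

A thread-offload shim gives evaluators a working `acomplete` immediately; true async HTTP calls to the watsonx service would be the cleaner long-term fix.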

Value of Feature

These evaluators are essential for ensuring response quality, and async support would allow them to run efficiently inside evaluation pipelines.

beko-dt commented 1 week ago

+1 Would be valuable to have this!