-
The insight tool should probably integrate with other LLM providers and randomly select one (and tell which one it used) so that I can get a better feel for the flavor/performance of each (and choose …
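The random-provider idea could be sketched roughly like this (the provider names and call signatures here are purely illustrative, not any real integration):

```python
import random

# Hypothetical provider registry: each entry maps a label to a callable
# that takes a prompt and returns a completion string. Real integrations
# would wrap the respective SDK clients instead of these stubs.
PROVIDERS = {
    "openai": lambda prompt: f"[openai] {prompt}",
    "anthropic": lambda prompt: f"[anthropic] {prompt}",
    "mistral": lambda prompt: f"[mistral] {prompt}",
}

def generate_insight(prompt: str, rng=random):
    """Pick a provider at random and report which one produced the answer."""
    name = rng.choice(sorted(PROVIDERS))
    return name, PROVIDERS[name](prompt)
```

Returning the provider label alongside the completion is what lets the caller compare flavor/performance across providers over time.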
-
How can we add custom questions and ground truth to the testset generated using Ragas TestSetGenerator:
```
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions impo…
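# One hedged approach (a sketch, not the documented Ragas API): if the
# generated testset can be converted to a pandas DataFrame, hand-written
# questions and ground truths can be appended as plain rows before
# evaluation. The column names below are assumptions.
import pandas as pd

def add_custom_rows(testset_df, custom_rows):
    """Append custom question/ground-truth rows to a generated testset."""
    return pd.concat([testset_df, pd.DataFrame(custom_rows)], ignore_index=True)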
-
I am creating a testset from my nodes with the code below:
```
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context
generato…
-
### Affected component
llms/ShuttleAIToolModel.py
### Motivation
Our testing indicates that changes in ShuttleAIModel have surfaced JSON-related errors:
--
FAILED tests/llms/ShuttleAIModel_t…
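A common source of JSON-related failures in tool-model wrappers is parsing malformed model output directly. A minimal defensive sketch (the function name is illustrative, not the actual ShuttleAIToolModel API):

```python
import json

def parse_tool_call(raw: str) -> dict:
    """Parse a model's JSON tool-call output, surfacing a clear error
    instead of an unhandled JSONDecodeError on malformed output."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"Model returned malformed JSON: {exc}") from exc
```

Wrapping the decode step this way makes test failures point at the bad payload rather than at an opaque traceback deep in the model class.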
-
CC @web-platform-tests/wpt-core-team
I was recently asked about the policy for using LLMs to generate tests that are submitted to wpt. Currently we don't have any explicit policy on this, but I th…
-
Add a section about testing LLMs; this is crucial.
-
### Description
We want to support Vercel AI (https://github.com/vercel/ai). There seems to be some OTEL instrumentation baked in - we need to test this, and ensure this follows the LLM conventions: …
-
**Describe the bug**
What the bug is and how to reproduce it, ideally with screenshots.
Command: CUDA_VISIBLE_DEVICES=0 swift infer --model_type qwen2-vl-7b-instruct --infer_backend vllm -…
-
### Description
If you download a GGUF model and update the LLM URL settings to the proper port where kotaemon is loading the model, testing against the "ollama" LLM works.
However, the Embeddin…
-
Hi,
I've created a blank Rust project with a single dependency:
```toml
llama_cpp_rs = "0.3.0"
```
However, when I tried to compile it, the build failed with the following error:
```bash…