-
LLMBar metrics on newer models, especially GPT-4o, GPT-4o-mini, Claude 3.5 Sonnet, Llama-3.1-8B-Instruct, Llama-3.1-70B-Instruct, Llama-Guard-3-8B, Llama-Guard-3-8B-INT8, gemma-2-2b-it, and gemma-2-9b-it, would…
-
## Bug Description
Configuration: llm_examples_main branch, torch version 2.4, transformers==4.41.2
Error message:
```py
File "/home/dperi/Downloads/TensorRT/examples/dynamo/torch_e…
-
I use a Transient menu for configuring options provided by gptel, an LLM interaction package. One request I get quite often from users [[1](https://github.com/karthink/gptel/issues/94), [2](https://g…
-
In the 0.5 release, `summarize.py` was used for the summarization benchmark. However, in the latest 0.6.1 release, `summarize.py` no longer exists; I can only find `summarize_long.py`.
Following …
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
Hi, I'm using the Groq() client in llama_index.llms with the propertygraph feature…
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch…
-
We would like a system that can answer arbitrary human-completable surveys using an ML model, e.g., GPT. We would like this to be rather straightforward for a user, e.g., provide a URL to the survey si…
-
### Bug Description
I am trying out the example from the https://docs.llamaindex.ai/en/stable/examples/workflow/rag/ page.
Please find my code below:
```py
from llama_index.core.workflow import E…
-
The traceback is as follows; I was running ChatGLM4-9b-chat on my laptop.
Device configuration:
- OS: Win 11 23H2 (22631.3737)
- CPU: i7-1260P
- GPU: 'Intel(R) Iris(R) Xe Graphics', platform_name='Inte…
-
Sometimes issues are submitted but not finalized until the team contributes more ideas and research. This leaves the issue specification out of sync with the latest information, leading to the…