-
Hello, major kudos on your continued work on this project -- from the CodeStreams to the recent FieldSHIFT-2: Fully Synthetic Dissertations for All-by-All Domains stream, it is only becoming more and…
-
### Motivation
In current large-model inference, the KV cache occupies a significant portion of GPU memory, so reducing its size is an important direction for improvement. Recently, severa…
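To make the memory pressure concrete, here is a back-of-envelope estimate of KV cache size. The model configuration (32 layers, 32 KV heads, head dimension 128, fp16) is an illustrative assumption resembling a Llama-2-7B-class model, not something stated in the issue:

```python
# Rough KV cache size estimate. The config values below are assumptions
# for a Llama-2-7B-like model; substitute your own model's numbers.
def kv_cache_bytes(num_layers, num_kv_heads, head_dim,
                   seq_len, batch, bytes_per_elem=2):
    # 2x accounts for the separate K and V tensors cached per layer
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Per cached token: 2 * 32 * 32 * 128 * 2 bytes = 512 KiB
per_token = kv_cache_bytes(32, 32, 128, seq_len=1, batch=1)
print(per_token)  # 524288

# At a 4096-token context with batch size 1: exactly 2 GiB
print(kv_cache_bytes(32, 32, 128, seq_len=4096, batch=1) / 2**30)  # 2.0
```

At larger batch sizes or longer contexts the cache grows linearly in both, which is why techniques that shrink it (quantization, grouped-query attention, eviction) matter so much for serving.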
-
**Describe the bug**
Trying to use an Azure API Key to run an LLM evaluation using UpTrain. I received a 404 error message saying that the deployment is not found. However, there is no deployment name…
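For context on why this 404 appears: Azure OpenAI routes requests by *deployment name* (embedded in the URL path), not by model name, so a client that never receives a deployment name will hit a non-existent path. A minimal sketch of the documented endpoint shape, with placeholder resource/deployment names (assumptions, not values from this issue):

```python
# Sketch of the Azure OpenAI chat-completions URL pattern.
# "my-resource" and "my-gpt4-deployment" are placeholders.
def azure_chat_url(resource: str, deployment: str, api_version: str) -> str:
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

url = azure_chat_url("my-resource", "my-gpt4-deployment", "2024-02-01")
print(url)
# If the deployment segment is missing or wrong, Azure returns 404
# "deployment not found" -- the symptom described above.
```

So the fix on the framework side is usually exposing a setting that maps the model name to the Azure deployment name before the request is built.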
-
### Description
When running the following code, which calls the generate method with different models (e.g., Mistral-7B-Instruct-v0.2 and meta-llama-3-8B):
```
from transformers import AutoModelForCausal…
-
As discussed on Discord, we need to know what prompts you are serving the evaluation LLM.
https://hamel.dev/blog/posts/prompt/
I need to see the prompt to help debug when the framework fails or …
-
**Describe the bug**
Can't embed images in prompts with LLMs like gpt-4o, so the LLM also can't answer well.
* jinja2 file
```jinja
system:
You are a helpful assistant.
user:
what is this?
…
```
-
Add a section about testing LLMs; this is crucial.
-
I'd like to request creating a new approver group semconv-llm-approvers and add the following approvers to that group:
@drewby
@nirga
@cartermp
This group would be the code-owner for spans an…
-
This is quite a bit of coding, but I was playing with AnythingLLM and Ollama and noticed how the two are evolving toward each other.
And also how Meta / Ollama is becoming an open-source king, making new standard…
-
Adding Local LLMs as a Model Option.
React framework: https://github.com/r2d4/react-llm