-
**Describe the bug**
As a first-timer, I tried the OpenAI instrumentation and sent a trace to a local collector (using Ollama as the backend). Then I compared the output with the [llm semantics defined …
-
Based on the Semantic Conventions 1.27.0. See [docs](https://opentelemetry.io/docs/specs/semconv/gen-ai/).
This issue is to keep track of the work I've been doing for the Spring AI initial adoption o…
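To make the comparison above concrete, here is a minimal sketch (not from either issue) of validating a span's `gen_ai.*` attributes against the names defined in the GenAI Semantic Conventions 1.27.0. The example span attributes are hypothetical, shaped like a chat call routed to a local Ollama backend; the helper function and its name are illustrative, not part of any OpenTelemetry SDK.

```python
# Required and well-known gen_ai.* attribute names from the OpenTelemetry
# GenAI semantic conventions (v1.27.0). The list here is a subset for
# illustration, not the full registry.
GEN_AI_REQUIRED = {"gen_ai.operation.name", "gen_ai.system", "gen_ai.request.model"}
GEN_AI_KNOWN = GEN_AI_REQUIRED | {
    "gen_ai.response.model",
    "gen_ai.response.finish_reasons",
    "gen_ai.usage.input_tokens",
    "gen_ai.usage.output_tokens",
    "gen_ai.request.temperature",
    "gen_ai.request.max_tokens",
}

def check_span_attributes(attrs: dict) -> list:
    """Return a list of problems found in a span's gen_ai.* attributes."""
    problems = [
        f"missing required attribute: {k}"
        for k in sorted(GEN_AI_REQUIRED - attrs.keys())
    ]
    problems += [
        f"unknown gen_ai attribute: {k}"
        for k in sorted(attrs)
        if k.startswith("gen_ai.") and k not in GEN_AI_KNOWN
    ]
    return problems

# Hypothetical span attributes for a chat call against a local Ollama model.
span_attrs = {
    "gen_ai.operation.name": "chat",
    "gen_ai.system": "openai",
    "gen_ai.request.model": "llama3",
    "gen_ai.usage.input_tokens": 42,
    "gen_ai.usage.output_tokens": 7,
}
print(check_span_attributes(span_attrs))  # → []
```

A check like this is one way to spot instrumentation output that drifts from the spec, which is essentially the comparison both issues describe.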
-
**Describe the bug**
Calling `chain.invoke` via the OpenAI callable in LangChain, with a base URL pointing at a Guardrails server, should work, but it errors on a role of type None.
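A minimal sketch of the failure mode and a possible workaround, assuming the proxy emits a chat message whose `role` is None, which a strict OpenAI-style client then rejects. The `sanitize_messages` helper and the `assistant` default are hypothetical choices for illustration, not LangChain or Guardrails behavior.

```python
def sanitize_messages(messages, default_role="assistant"):
    """Replace a missing or None role with a default so strict clients
    accept the message. The default_role choice is an assumption."""
    return [{**m, "role": m.get("role") or default_role} for m in messages]

# Hypothetical payload shaped like a proxied chat-completions response.
raw = [
    {"role": "user", "content": "hello"},
    {"role": None, "content": "hi there"},  # the problematic entry
]
clean = sanitize_messages(raw)  # role None replaced with "assistant"
print(clean)
```

If the None role really does originate server-side, a shim like this between the proxy and the client is a stopgap; the proper fix is for the server to always emit a valid role.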
**To Reproduce**
Steps t…
-
### Feature request
This is a BERT-based model; however, when trying to run it, the message says the model is not supported. https://huggingface.co/meta-llama/Prompt-Guard-86M/tree/main
### Motivation
LLM-pow…
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
Hello everyone,
I've developed two RAG (Naive RAG and Advanced RAG) applications using…
-
**Describe the bug**
An error occurs when asking to generate tests for the external line.
```
java.lang.NullPointerException
at org.jetbrains.research.testspark.java.JavaPsiHelper.collectClassesToTest(…
```
-
**Describe the bug**
Context chat queries are failing today with a 500 internal server error in the context_chat_backend docker container.
Could this be because the Docker container was restarted …
-
I found that `benchmark/suite` has the time-to-first-token output. However, when I run `python benchmark.py --model meta-llama/Llama-2-7b-hf static --isl 128 --osl 128 --batch 1`, an error occurs:…
-
**What would you like to be added/modified**:
Research benchmarks for evaluating LLMs and LLM Agents.
Develop a personalized LLM Agent using lifelong learning on the KubeEdge-lanvs edge-cloud colla…
-
Very interesting application of LLMs in the domain of Causal Inference!
I would like to replicate your results using your code (not using GPT-4 as the LLM, but one or more of the open LLMs from GROQ, s…