-
I have a running LLM setup that connects to a self-hosted, OpenAI-compatible server with a Llama 3.1 model behind it. I can make requests to the model with OpenSearch; furthermore, everything works when I use …
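For context, a minimal sketch of what a chat-completion request against an OpenAI-compatible server looks like. The base URL and model id below are placeholders, not values from the original setup:

```python
import json

# Hypothetical values for illustration: substitute the address of your
# self-hosted server and the model id it actually serves.
BASE_URL = "http://localhost:8000/v1"
MODEL = "llama-3.1-8b-instruct"

def build_chat_completion(prompt: str, model: str = MODEL) -> tuple[str, dict]:
    """Return the URL and JSON body for an OpenAI-compatible chat call."""
    url = f"{BASE_URL}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, body

url, body = build_chat_completion("Hello")
print(url)              # http://localhost:8000/v1/chat/completions
print(json.dumps(body))
```

Any OpenAI-compatible client library can be pointed at the same `/v1` base URL instead of hand-building the request.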
-
I use Ollama as my inference server for local LLMs. Ollama is supported by many LLM frameworks, but not by Guidance.
Would love to see a direct integration with Ollama via the models package.
I'm awa…
-
There has been no release for 3 months and only a few commits recently, so will this project be actively maintained?
I tried serving some LLMs using ray-llm, and needed to update transformers, install tikt…
-
Very interesting application of LLMs in the domain of Causal Inference!
I would like to replicate your results using your code (not with GPT-4 as the LLM, but with one or more of the open LLMs from GROQ, s…
-
### Please ask your question
```
/home/aistudio/PaddleNLP/llm
Traceback (most recent call last):
  File "/home/aistudio/PaddleNLP/llm/predictor.py", line 29, in <module>
    from paddle.base.framework…
```
-
We are currently not evaluating our recall process in the evaluation framework. Our recall process involves both ELSER and the LLM, and we should add some kind of test to see how well this process wor…
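One common shape for such a test is recall@k over a small labeled set of queries: given the documents the pipeline retrieved and the documents known to be relevant, measure the fraction of relevant ones that appear in the top k. A minimal sketch with hypothetical document ids:

```python
def recall_at_k(retrieved: list, relevant: set, k: int) -> float:
    """Fraction of relevant documents found in the top-k retrieved results."""
    if not relevant:
        return 0.0
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant)

# Toy example with made-up document ids.
retrieved = ["d1", "d4", "d2", "d9"]
relevant = {"d1", "d2", "d3"}
print(recall_at_k(retrieved, relevant, 3))  # → 2/3 of relevant docs in top 3
```

Averaging this over a set of labeled queries gives a single number the evaluation framework could track across changes to ELSER or the LLM step.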
-
Hey,
this is not a bug, rather a request for info.
First off, great project!
Secondly, you mention in the README that I need an OpenAI key to leverage OpenBB agents.
Unfortunately I exhausted my credit…
-
### Area(s)
area:gen-ai, llm
### Is your change request related to a problem? Please describe.
Continuation of https://github.com/open-telemetry/semantic-conventions/issues/1007
To prevent…
-
Thanks for sharing your work. How can I use the pretrained network for a downstream task such as NER? I am a beginner with LLMs and the NVIDIA LLM frameworks. Would appreciate any help. Thanks!
-
**LocalAI version:**
v2.4.1
**Environment, CPU architecture, OS, and Version:**
MBP 14 M1 PRO
**Describe the bug**
`make build` and `make BUILD_TYPE=metal build` are not working
**To Reproduce…