-
res = guardrails.invoke({"input": "How do I cook meat"})
I'm defining a chain, not using it! The LLM is local, while the LLM in the YAML file is OpenAI.
chain = print_func | (guardrails | llm) | …
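For context, a minimal sketch of the NeMo Guardrails `RunnableRails` pattern this chain appears to follow, where piping as `(guardrails | llm)` wraps a local model with the rails; the Ollama model is a placeholder, and without the wrap the rails fall back to the model declared in config.yml:

```python
from langchain_community.llms import Ollama
from nemoguardrails import RailsConfig
from nemoguardrails.integrations.langchain.runnable_rails import RunnableRails

llm = Ollama(model="llama2")  # placeholder local model
config = RailsConfig.from_path("./config")  # directory holding config.yml
guardrails = RunnableRails(config)

# (guardrails | llm) tells the rails to generate with the wrapped
# local LLM instead of the model named in the YAML config.
chain = guardrails | llm
res = chain.invoke({"input": "How do I cook meat"})
```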
-
### Bug Description
I did the following:
!pip install llama-index
!pip install llama-index-llms-sambanova
### Version
latest
### Steps to Reproduce
Just followed the LLM example for SambaNova.
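Since the exact steps are elided, here is a hedged sketch of the kind of usage the SambaNova LLM example shows; the import path, class name, constructor parameter, and env var below are all assumptions for this package version (newer releases ship e.g. `SambaNovaCloud` in `llama-index-llms-sambanovasystems`), so check the integration docs:

```python
import os

# Assumed import for llama-index-llms-sambanova; the class name
# varies across releases (Sambaverse / SambaStudio / SambaNovaCloud).
from llama_index.llms.sambanova import Sambaverse

os.environ["SAMBAVERSE_API_KEY"] = "..."  # placeholder credential

# Placeholder model; the parameter name is an assumption too.
llm = Sambaverse(model_name="Meta-Llama-3-8B-Instruct")
print(llm.complete("Hello"))
```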
### …
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
### Version
Command-line (Python) version
### Operating System
Linux (other)
### What happened?
When I try to run a project again to add a new feature, I get a gpt-pilot crash.
```
[Tech Lea…
-
## ❓ Questions and Help
Can we customize `base_url` for OpenAI-compatible LLM models instead of using OpenAI models?
I didn't find this setting in the .env example file. I'd appreciate it if this could be supporte…
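For reference, the standard way to do this with the official `openai` Python client is its `base_url` parameter; a minimal sketch with placeholder URL, key, and model name:

```python
from openai import OpenAI

# Point the client at any OpenAI-compatible server (vLLM, Ollama,
# LM Studio, ...); the URL, key, and model name are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="local-model",  # whatever name the server exposes
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```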
-
Hi,
I am unable to import LlamaCpp from ipex-llm.
CODE: from ipex_llm.langchain.llms import LlamaCpp
ERROR:
Cell In[5], line 1
----> 1 …
-
environment:
python 3.9.20
datasets 3.0.1
langchain 0.3.3
langchain-community 0.3.2
langchain-core 0.3.10
langchain-openai 0.2.2
la…
-
**Title:** Automatically label medical data from diagnosis reports
**Project Lead:** Frank Langbein, frank@langbein.org
**Description:** We wish to automatically label medical diagnosis data (MRI,…
-
I'm trying to make the model generate emojis using this command:
```
./run.sh $(./autotag local_llm) python3 -m local_llm.chat --api=mlc --model=NousResearch/Llama-2-7b-chat-hf --prompt="Repeat th…
-
> > Specify the local folder you have the model in instead of an HF model ID. If you have all the necessary files and the model is using a supported architecture, then it will work.
> > …
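The tool under discussion isn't shown in the quoted excerpt, but as an illustration of the same idea with Hugging Face `transformers`, a local directory works anywhere a model ID would (the path is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# A folder containing config.json, tokenizer files, and the weights
# can be passed wherever an HF model ID is expected.
model_dir = "/path/to/local/model"  # placeholder path
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir)
```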