-
### The Feature
The current code for initializing an offline LLM in LiteLLM is as follows:
```python
# check if vllm is installed
def validate_environment(model: str):
    global llm
    t…
-
- This issue focuses on the technical courses we take about LLMs; we'll put the paper part in
https://github.com/xp1632/DFKI_working_log/issues/70
---
1. **ChainForge** https://chainforge.ai/ …
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
- [ ] I checked the [documentation](https://docs.ragas.io/) and related resources and couldn't find an answer to my question.
**Your Question**
What is unclear to you? What would you like to know?
…
-
### Title
Dreamcatcher: decoding dream events in EEG data with multimodal language models and interpretability tools
### Leaders
Lorenzo Bertolini
### Collaborators
_No response_
###…
-
## 🐛 Bug
I am trying to run llava with mlc-llm. On both a Linux server machine and a local macOS machine, I encountered this error:
(run `export RUST_BACKTRACE=full` before running the inference program…
-
Hi, thanks for building and open-sourcing Savvy!
Is there any way I can configure it to use a locally running LLM, with an OpenAI-compatible API or otherwise?
Thanks!
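For concreteness, here is a minimal sketch of what I mean by an OpenAI-compatible setup, assuming a local server such as Ollama exposing a `/v1` endpoint on localhost; the URL, API key, and model name are placeholders, not actual Savvy settings:
```python
# Minimal sketch of talking to a locally running, OpenAI-compatible server.
# The endpoint, key, and model name below are assumptions/placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # e.g. Ollama's OpenAI-compatible endpoint
    api_key="not-needed",                  # most local servers ignore the key
)

response = client.chat.completions.create(
    model="llama3",  # whichever model the local server serves
    messages=[{"role": "user", "content": "Hello from a local LLM"}],
)
print(response.choices[0].message.content)
```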
-
### 🔖 Feature description
@dartpain Basically allow our AIs to eat images. A good place to start is just the OpenAI LLMs; even the DocsGPT free service should work.
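As a rough illustration (not DocsGPT code), this is the kind of call the OpenAI route would involve, assuming a vision-capable chat model; the model name and image path are placeholders:
```python
# Sketch of sending an image to an OpenAI vision-capable chat model.
# Model name and image path are placeholders, not DocsGPT internals.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("example.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/png;base64,{image_b64}"},
            },
        ],
    }],
)
print(response.choices[0].message.content)
```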
### 🎤 Why is this feature needed?
Accep…
-
I'd like to have the package `llm-ollama` in `nixhub.io` (it is an `ollama` plugin for `llm`) so I can install it alongside the existing `llm` package.
https://github.com/taketwo/llm-ollama
-
### Your current environment
GPU: 2× H100 80GB
Model: Llama 3.1 70B
Model Params:
~~~
env:
  - name: MODEL_NAME
    value: /mnt/models/models--meta-llama--llama-3-1-70b-i…