-
### 🐛 Describe the bug
My current code:
```js
import { RAGApplicationBuilder, LocalPathLoader } from '@llm-tools/embedjs';
import { OpenAiEmbeddings } from '@llm-tools/embedjs-openai';
import { …
```
-
### Describe the bug
```
interpreter --local
Open Interpreter supports multiple local model providers.
[?] Select a provider:
> Ollama
Llamafile
LM Studio
Jan
…
```
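For context, the same local-provider setup can be done non-interactively from Python; a minimal sketch, assuming an Ollama server on its default port (the model name `ollama/llama3` is a placeholder):

```python
# Minimal sketch: point Open Interpreter at a local Ollama model without the
# interactive picker. Model name and endpoint are placeholder assumptions.
from interpreter import interpreter

interpreter.offline = True                            # skip hosted-provider checks
interpreter.llm.model = "ollama/llama3"               # placeholder local model
interpreter.llm.api_base = "http://localhost:11434"   # Ollama's default endpoint

interpreter.chat("List the files in the current directory.")
```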
-
### 🚀 The feature
error message:
```
PS E:\users\xxx\Desktop\xxx\_code> cd .\study\llm\pandas-ai-main\
PS E:\users\xxx\Desktop\xxx\_code\study\llm\pandas-ai-main> docker-compose build
Failed to load E…
```
-
I used the interface from the vllm repository (https://github.com/vllm-project/vllm) to load the model and ran
```bash
torchrun --nproc-per-node=8 run.py --data Video-MME --model Qwen2_VL-M-RoPE-80…
```
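For reference, a minimal sketch of vLLM's offline inference interface that a script like this presumably wraps (the model name and sampling values are placeholders, not the benchmark's actual configuration):

```python
# Minimal sketch of vLLM offline inference; model and sampling values are
# placeholders, not the evaluation's actual setup.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2-VL-7B-Instruct", tensor_parallel_size=8)
params = SamplingParams(temperature=0.0, max_tokens=256)

outputs = llm.generate(["Describe the key events in the video."], params)
print(outputs[0].outputs[0].text)
```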
-
Hi,
I am interested in using a draft model for speculative decoding, and the only example I found is: https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/draft_target_model
We use TensorRT-LLM (…
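For readers unfamiliar with the scheme, here is a framework-agnostic toy of the draft/target loop; the two "models" are hypothetical stand-ins, not TensorRT-LLM APIs, and the rejection rule is simplified:

```python
# Toy draft/target speculative decoding: draft k tokens cheaply, then accept
# each with probability min(1, p_target / p_draft). Both "models" are
# hypothetical stand-ins; the resample-on-reject step is simplified (the
# exact rule resamples from the normalized residual max(0, p_t - p_d)).
import random

VOCAB = list(range(10))

def draft_prob(tok, ctx):                      # cheap draft model (uniform)
    return 1 / len(VOCAB)

def target_prob(tok, ctx):                     # expensive target model
    return 2 / len(VOCAB) if tok % 2 == 0 else 0.0   # prefers even tokens

def speculative_step(ctx, k=4):
    drafted = [random.choice(VOCAB) for _ in range(k)]
    out = list(ctx)
    for tok in drafted:
        p_d, p_t = draft_prob(tok, out), target_prob(tok, out)
        if random.random() < min(1.0, p_t / p_d):
            out.append(tok)                    # target agrees: keep draft token
        else:
            weights = [target_prob(t, out) for t in VOCAB]
            out.append(random.choices(VOCAB, weights)[0])  # resample and stop
            break
    return out

print(speculative_step([1, 2, 3]))
```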
-
**Describe the bug**
We get a validation error about incorrect formatting when attempting to import new data.
```
Validation error
Error at item 0: "llm.inputs.retrieved_context" key is expected in task data [ass…
```
-
# OPEA Inference Microservices Integration for LangChain
This RFC proposes the integration of OPEA inference microservices (from GenAIComps) into LangChain [extensible to other frameworks], enabli…
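One possible integration shape, sketched below: a LangChain `LLM` subclass that forwards prompts to a GenAIComps microservice over HTTP. The endpoint URL and the JSON request/response fields are assumptions, not a published OPEA contract:

```python
# Sketch of a LangChain LLM wrapper around an OPEA/GenAIComps microservice.
# The endpoint and the "query"/"text" JSON fields are assumptions.
from typing import Any, List, Optional

import requests
from langchain_core.language_models.llms import LLM

class OPEAMicroserviceLLM(LLM):
    endpoint: str = "http://localhost:9000/v1/generate"  # assumed URL

    @property
    def _llm_type(self) -> str:
        return "opea-microservice"

    def _call(self, prompt: str, stop: Optional[List[str]] = None,
              **kwargs: Any) -> str:
        resp = requests.post(self.endpoint, json={"query": prompt}, timeout=60)
        resp.raise_for_status()
        return resp.json()["text"]              # assumed response field

llm = OPEAMicroserviceLLM()
# llm.invoke("What is OPEA?")  # requires a running microservice
```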
-
### Describe the bug
When using `bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0`, the interpreter crashes if two consecutive user messages are sent.
### Reproduce
▌ Model set to bedrock/anthro…
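Anthropic-style message APIs require strictly alternating user/assistant roles, which is one plausible cause here. A minimal sketch (not the interpreter's actual code) of coalescing consecutive same-role messages before the request:

```python
# Minimal sketch: fold consecutive same-role messages into one turn before
# calling an Anthropic-style API. Message shape is an assumption.
def merge_consecutive_roles(messages):
    merged = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            merged[-1]["content"] += "\n\n" + msg["content"]  # fold into previous turn
        else:
            merged.append(dict(msg))
    return merged

history = [
    {"role": "user", "content": "Hello"},
    {"role": "user", "content": "Are you there?"},  # second consecutive user turn
]
print(merge_consecutive_roles(history))
```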
-
**Is your feature request related to a problem? Please describe.**
I'm opening this feature request to track customers' most recent requests around our experimental Metabot features and further development a…
-
The chat interface does not permit the user to pass in a temperature parameter to be used with the LLM. Nor does it reveal what the default temperature parameter is for the LLM being used. One use cas…
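For reference, this is the kind of per-request knob the chat UI could expose; a minimal sketch using the OpenAI Python SDK (model name and value are placeholders):

```python
# Minimal sketch: passing temperature per request via the OpenAI SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",                      # placeholder model
    messages=[{"role": "user", "content": "Give one fun fact."}],
    temperature=0.2,                          # lower = more deterministic
)
print(resp.choices[0].message.content)
```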