-
It seems an export is missing for using Pinecone DB:
```
ts-node src/ai
Error: Package subpath './dist/vectorDb/pinecone-db' is not defined by "exports" in /
```
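This error usually means the package's `package.json` does not list that subpath in its `"exports"` map, so Node refuses the deep import. A fix on the library side would add an entry along these lines (the target file path and extension here are guesses, not the package's actual layout; only the subpath itself comes from the error message):

```json
{
  "exports": {
    ".": "./dist/index.js",
    "./dist/vectorDb/pinecone-db": "./dist/vectorDb/pinecone-db.js"
  }
}
```

Until that entry exists, any `require`/`import` of the subpath fails with `ERR_PACKAGE_PATH_NOT_EXPORTED`, regardless of whether the file is physically present in `dist/`.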
The same happens with the cache module:
```
Error: …
```
-
I've installed the extension, edited the settings to use OpenAI for the model, code completion, and (ada) embeddings, and added my OpenAI key. No Wingman features work; they fail silently except for the ho…
-
The xrag-7b model already knew about Motel 6 when I tried tutorial.ipynb.
Was the model updated?
This is the response without RAG or xRAG.
> Motel 6. Motel 6 is a budget motel chain in the United Sta…
-
### Your current environment
```text
The output of `python collect_env.py`
```
### How would you like to use vllm
Hi, I want to use Offline Batched Inference to run multiple chats. Every pro…
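A minimal sketch of what offline batched inference over several chats might look like with vLLM. The model name, chat template, and prompts below are placeholders, not taken from the issue; real models should use their own chat template (e.g. via the tokenizer's `apply_chat_template`), and recent vLLM versions also expose an `llm.chat(...)` entry point directly.

```python
# Hypothetical sketch: batch several chats through vLLM's offline API.
# Model name, template, and prompts are assumptions for illustration.

def render_chat(messages):
    """Flatten a chat (list of {role, content} dicts) into one prompt string
    using a simple generic template; production code should use the model's
    own chat template instead."""
    lines = [f"{m['role']}: {m['content']}" for m in messages]
    lines.append("assistant:")
    return "\n".join(lines)

chats = [
    [{"role": "user", "content": "What is the capital of France?"}],
    [{"role": "user", "content": "Summarize the plot of Hamlet."}],
]
prompts = [render_chat(chat) for chat in chats]

if __name__ == "__main__":
    # vLLM takes the whole prompt list at once and batches internally.
    from vllm import LLM, SamplingParams

    llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # assumed model
    params = SamplingParams(temperature=0.7, max_tokens=256)
    outputs = llm.generate(prompts, params)
    for out in outputs:
        print(out.outputs[0].text)
```

The key point is that `generate` receives the full list in one call, so the engine can schedule all chats in the same batch rather than looping one request at a time.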
-
Which way does this work with CommonJS?
-
I'd like to report an issue I'm encountering while fetching context relevance and groundedness feedback metrics using TruLens.
I am evaluating Hugging Face's "meta-llama/Llama-2-7b-chat-hf" (4 b…
-
Hey, I've spent half a day trying to track down this bug...
I ran `pip install git+https://github.com/sgl-project/sglang.git@51104cd#subdirectory=python`, which is the commit where 0.1.14 wa…
-
### Describe the bug
Crash (abort) when trying to use an AMD graphics card in the editor.
Model is mistral-7b-instruct-v0.2.Q4_K_M.gguf
```
ggml_cuda_init: found 1 ROCm devices:
  Device 0: AMD Radeon RX…
```
-
Trying to take the README for a spin:
```
~/Desktop/cog-vllm main
$ python cog-vllm-helper.py \
--model-id mistralai/mistral-7b-instruct-v0.2 \
    --model-url https://weights.replicate.deliver…
```
zeke updated 4 months ago
-
### Checked other resources
- [x] I added a very descriptive title to this issue.
- [x] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a…