-
Dear @flowersteam,
I'm trying to reproduce your results for coursework.
I've found a number of issues when running the code. Here is a list of what I've found so far.
## Importing
Several files…
-
Is it possible to create a memgpt feature and make it available to all agents, rather than having a separate agent as discussed in #530?
-
Got this funny message when using the text-gen plugin:
```
File "/home/user/workspace/other/llamaindex_rag/chat_server.py", line 81, in chat_with_data
chat_engine = index.as_chat_engine(
…
```
-
### 🐛 Describe the bug
While iterating on getting torch.export to work on a model with dynamic shapes, I hit the assertion around line 1640 of `symbolic_shapes.py` for the operator `
-
Hi Team,
I have just installed Wren with Ollama using this config:
```
LLM_PROVIDER=ollama_llm
GENERATION_MODEL=mistral-nemo:latest
EMBEDDER_PROVIDER=ollama_embedder
EMBEDDING_MODEL=mxbai-embed-…
```
-
I am trying to use TRT-LLM RAG with the Mistral 7B model.
I used int8 weight-only quantization when building the TRT engine.
The app launches but throws an error when an input is passed to …
-
Hello there,
is there any chance of getting Ollama working on FreeBSD, please?
-
Hi, I finally got it working, and I'm going to share my step-by-step so others can make this work too.
#### My system:
RTX 3060 12GB
CUDA 12.1
Windows 10
PhpStorm 2023.2.4
## Step 1 - Install TGI
Follow the…
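In case the steps above are unclear: Step 1 is essentially the documented TGI Docker invocation. In the sketch below, the model id, port, and volume path are placeholders; on Windows 10 this is run inside WSL2 with the NVIDIA Container Toolkit installed:

```shell
# Documented-style TGI launch (model id, port, and volume path are placeholders).
# On Windows 10 this runs inside WSL2 with the NVIDIA Container Toolkit.
docker run --gpus all --shm-size 1g -p 8080:80 \
    -v "$PWD/data:/data" \
    ghcr.io/huggingface/text-generation-inference:latest \
    --model-id mistralai/Mistral-7B-Instruct-v0.2 \
    --quantize bitsandbytes-nf4  # NF4 quantization so a 7B model fits in 12 GB VRAM
```

Once the container is up, a quick smoke test is to POST a small JSON body (`{"inputs": "..."}`) to `127.0.0.1:8080/generate`.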
-
### 🐛 Describe the bug
My code:
```python
import os
from mem0 import Memory
# os.environ["OPENAI_API_KEY"] = "none" # for embedder
config = {
    "llm": {
        "provider": "ollama",
        …
```
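Since my config above is cut off, here is a sketch of what a complete config of this shape can look like. The model names are placeholders (not taken from my snippet); with mem0 installed, the dict would be passed to `Memory.from_config`:

```python
# A sketch of a complete mem0-style config for Ollama; the model names
# ("llama3.1:latest", "nomic-embed-text:latest") are placeholders, not
# the ones from the truncated snippet above.
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama3.1:latest",  # any chat model pulled into Ollama
            "temperature": 0.0,
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text:latest",  # an embedding model
        },
    },
}

# With mem0 installed, the config would typically be consumed as:
#   from mem0 import Memory
#   m = Memory.from_config(config)
print(sorted(config))  # → ['embedder', 'llm']
```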
-
Given the provided codebase spanning multiple files and responsibilities, from configuration and logging utilities to the integration with external services like GitHub and an LLM, pinpointing a singl…