Closed: philiprhoades closed this issue 7 months ago.
I've got the same problem trying to contact Ollama on Windows. I tried memgpt versions 0.3.7, 0.3.6, and 0.3.5, each time reinstalling 'pymemgpt[local]' via pip install.
I have the same problem on Google Colab (llama.cpp).
I also tried memgpt 0.3.7 -> 0.3.6 and got the same error with both versions.
I suspected that Google Colab's default Python and pip environment was a factor, so I tried Python 3.10 and then 3.11, but the same error occurred with every version.
The error seems to be related to transformers, so the situation may change if you clone memgpt from git, change the pinned transformers version, and then install it.
(Sorry, I don't have time to try this right now)
I look forward to this issue being resolved soon.
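For concreteness, that suggestion would look roughly like the following (untested; it assumes the cpacker/MemGPT repo and an editable install, and the transformers pin is only a placeholder to illustrate the idea):

git clone https://github.com/cpacker/MemGPT.git
cd MemGPT
# edit the transformers version pin in pyproject.toml, then install from source:
pip install -e '.[local]'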
The error seems to be related to transformers, so the situation may change if you clone memgpt from git, change the pinned transformers version, and then install it.
I don't quite understand why you think building from git will make a difference - can you expand on that idea?
If you make use of llama_index, this may help: https://github.com/run-llama/llama_index/pull/11939#issuecomment-2020853800 (bug in llama-index-embeddings-huggingface 0.1.5, solved in 0.2.0)
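For anyone landing here, checking the installed version and forcing the fixed release looks like this (assuming pip in the same environment memgpt runs in):

pip show llama-index-embeddings-huggingface    # shows the installed version
pip install --upgrade 'llama-index-embeddings-huggingface>=0.2.0'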
Solved my issue: Thanks @farbel
llama-index-embeddings-huggingface 0.1.5, solved in 0.2.0
Not mine:
pip install llama-index-embeddings-huggingface upgrade
Requirement already satisfied: llama-index-embeddings-huggingface in /usr/local/lib/python3.11/site-packages (0.1.5)
ERROR: Could not find a version that satisfies the requirement upgrade (from versions: none)
ERROR: No matching distribution found for upgrade
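Note that pip parsed upgrade in that command as a package name to install, which is what the two ERROR lines are about; the flag form is:

pip install --upgrade llama-index-embeddings-huggingface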
I am using an old Fedora (38) so that I can use Python 3.11, which I needed to fix an earlier problem.
@YanSte,
Did you read my response? I tried to upgrade llama-index-embeddings-huggingface to 0.2.0 but CAN'T.
The error seems to be related to transformers, so the situation may change if you clone memgpt from git, change the pinned transformers version, and then install it.
I don't quite understand why you think building from git will make a difference - can you expand on that idea?
Sorry for the late reply, and sorry that my idea wasn't helpful.
The issue was resolved in my environment by upgrading "llama-index-embeddings-huggingface" as already discussed in this issue.
Specifically, it was resolved by the following steps:
pip install 'pymemgpt[local]'==0.3.6
pip install --upgrade llama-index-embeddings-huggingface
I verified this in my environment with pymemgpt version 0.3.6. In that version, llama-index-embeddings-huggingface is installed at 0.1.5, and running pip install --upgrade llama-index-embeddings-huggingface upgrades it to 0.2.0.
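To confirm the upgrade took effect (0.2.0 is the fixed release mentioned above):

pip show llama-index-embeddings-huggingface | grep Version
# Version: 0.2.0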
I wasn't able to reproduce this error, but I am upgrading the llama-index-embeddings-huggingface package, and I also fixed a different bug where embedding_model was not being properly set for the "local" embedding model option.
In general, I'd also recommend using the endpoint provided by memgpt quickstart instead of computing embeddings locally, since the model we're using for local computation has very poor performance.
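For reference, that hosted path looks roughly like this (a sketch assuming the memgpt CLI of this release; exact prompts may vary):

memgpt quickstart    # configure the free hosted endpoint
memgpt run           # chat without computing embeddings locally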
Describe the bug: Error when memgpt talks to a local LLM.
Please describe your setup:
What is the output of memgpt version? 0.3.7
How did you install memgpt? pip install pymemgpt and pip install 'pymemgpt[local]'
What's your OS? Fedora Linux 38 (for Python 3.11)
How are you running memgpt? In a terminal (tmux)
I have successfully installed oobabooga on Fedora Linux, downloaded and loaded "ehartford_dolphin-2.2.1-mistral-7b", and can chat happily from the oobabooga console, but when I try to connect with memgpt using either the airoboros-l2-70b-2.1 or the dolphin-2.1-mistral-7b wrapper I get an error.
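For context, the connection attempt was configured roughly like this (a sketch assuming oobabooga's API is enabled on its default port; the exact configure prompts vary by release):

memgpt configure
# model endpoint type -> webui
# model endpoint      -> http://localhost:5000
memgpt run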