gramss closed this issue 10 months ago.
Thanks for the detailed message. I indeed added the correct import statement in server_connector.py. Can you clarify the second item you mentioned, the `test_embeddings=true`-related error? I didn't quite understand it and have never seen that before; it works fine on my system...
Regarding the model download, yes, you likely didn't wait long enough. Sometimes it's deceptive as to when it's done AND "unpacked", but there should be something printed to the command prompt when it's done.
Here are some issues I ran into while testing version 3.04 of the system on macOS, alongside LMStudio 0.2.10, on an M2:
`server_connector.py` is missing an `import sys` at the top of the file (a sketch of the fix is below).

This part of the test, `test_embeddings = true`, leads to an error.
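For the import point, this is roughly what I mean; only the added `import sys` line is the actual report, the surrounding helper is made-up filler so the snippet stands on its own:

```python
# server_connector.py -- sketch of the missing-import fix.
# Only the added `import sys` comes from this report; the helper below is
# hypothetical filler so the snippet is self-contained and runnable.
import sys


def abort(message: str) -> None:
    """Print an error to stderr and exit -- the kind of call that needs `sys`."""
    print(message, file=sys.stderr)
    sys.exit(1)


if __name__ == "__main__":
    abort("example: could not reach the LMStudio server")
```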
Slight note: `bark` gracefully throws an Exception when there is no chat_history.txt available.

~~The (more or less) recommended embedding_model `hkunlp--instructor-xl` does not work out of the box. The `pytorch_model.bin` is located in a sub-directory `2_Dense` alongside a dedicated `config.json`. I needed to copy this model into the top directory of the embedding_model. Maybe it makes sense here to do a deep search for a `pytorch_model.bin` or other supported formats?~~ While checking out the huggingface website of this model, I noticed that the files in question are git-LFS files. Probably I wasn't waiting long enough to receive the >4GB model via LFS. -> A status that downloading is still being done would be superb!
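I don't know how the project actually fetches the model, but if it goes through the Hugging Face Hub, something like this sketch would already give a visible progress indication while the >4GB LFS files come down; the repo id and target directory are my assumptions:

```python
# Sketch of a model download with visible progress, assuming the files come
# from the Hugging Face Hub. snapshot_download() shows per-file progress bars
# by default, so it is obvious that the large LFS files are still on their way.
from huggingface_hub import snapshot_download


def fetch_embedding_model(local_dir: str) -> str:
    path = snapshot_download(
        repo_id="hkunlp/instructor-xl",  # assumed Hub id behind "hkunlp--instructor-xl"
        local_dir=local_dir,
    )
    print(f"Model fully downloaded and unpacked at: {path}")
    return path


if __name__ == "__main__":
    fetch_embedding_model("embedding_models/hkunlp--instructor-xl")
```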
Also: it would be cool to have persistent settings (a minimal sketch is below), but I think this is already mentioned somewhere.
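For the persistent settings, even a small JSON file next to the scripts would do; a minimal sketch (the file name and the example keys are made up):

```python
# Minimal sketch of persistent settings stored in a JSON file.
# The file name and the default keys are invented for illustration only.
import json
from pathlib import Path

SETTINGS_FILE = Path("settings.json")
DEFAULTS = {"embedding_model": "hkunlp--instructor-xl", "test_embeddings": False}


def load_settings() -> dict:
    """Return saved settings merged over the defaults."""
    if SETTINGS_FILE.is_file():
        return {**DEFAULTS, **json.loads(SETTINGS_FILE.read_text())}
    return dict(DEFAULTS)


def save_settings(settings: dict) -> None:
    """Write the current settings back to disk."""
    SETTINGS_FILE.write_text(json.dumps(settings, indent=2))


if __name__ == "__main__":
    cfg = load_settings()
    cfg["test_embeddings"] = True
    save_settings(cfg)
```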
I hope this information is helpful. Once my system works nicely, I would be happy to contribute back, but I am a little unsure how, as there is no tutorial for installing the package from source via git. Any help or advice on how I can contribute my current and future findings back?