Closed by laurids-reichardt 1 year ago
Just confirmed that the example above runs fine on linux amd64 hardware with cuda support.
Thanks for giving txtai a try and taking the time to submit an issue!
Couple ideas:
Try downgrading PyTorch to 1.11, since pytorch==1.12.x has had segfault issues on macOS:
pip install torch==1.11.0 torchvision==0.12.0
Try using a different index backend. While Faiss is available on Apple Silicon, I'm not sure how well it is supported there.
embeddings = Embeddings({"path": "sentence-transformers/nli-mpnet-base-v2", "backend": "hnsw"})
Unfortunately, I don't use Apple hardware, so it would be tough for me to debug/reproduce. txtai does have GitHub Actions workflows for macOS, but they are x86-64 based. There is a long-standing issue to add Apple Silicon support to GitHub Actions, but it looks like it's currently unresolved.
Hi @davidmezzetti, thank you for developing and publishing this great library!
Indeed, changing the backend to hnsw worked out. Thanks for the tip!
Yes, unfortunately many ML libraries only partially support macOS and/or arm64. In most cases bigger experiments or production workloads will run on Linux with CUDA anyway, but it's always nice to be able to try out libraries on local hardware first. Great to see that it's possible to run txtai on Apple hardware!
Glad to hear it!
👍
FWIW, I did the pip install from https://github.com/neuml/txtai/issues/350#issuecomment-1264773406 with pip install txtai[similarity], and it seems to work now even without the hnsw backend.
Searching an embeddings index, as demonstrated in the first txtai example, leads to a segmentation fault on Apple Silicon hardware. This is the script I'm executing:
Output:
Environment:
I'm happy to provide further information if needed.