shansongliu / MU-LLaMA

MU-LLaMA: Music Understanding Large Language Model
GNU General Public License v3.0

Error: 'f' failed: could not open ./ckpts/knn.index for reading: No such file or directory #2

Closed SongYii closed 1 year ago

SongYii commented 1 year ago

Traceback (most recent call last):
  File "/home/work/MU-LLaMA/MU-LLaMA/gradio_app.py", line 29, in <module>
    model = llama.load(args.model, args.llama_dir, knn=True, llama_type=args.llama_type)
  File "/home/work/MU-LLaMA/MU-LLaMA/llama/llama_adapter.py", line 391, in load
    model = LLaMA_adapter(
  File "/home/work/MU-LLaMA/MU-LLaMA/llama/llama_adapter.py", line 143, in __init__
    self.index = faiss.read_index("./ckpts/knn.index")
  File "/home/work/anaconda3/envs/py310_torch113/lib/python3.10/site-packages/faiss/swigfaiss_avx2.py", line 10206, in read_index
    return _swigfaiss_avx2.read_index(*args)
RuntimeError: Error in faiss::FileIOReader::FileIOReader(const char*) at /project/faiss/faiss/impl/io.cpp:67: Error: 'f' failed: could not open ./ckpts/knn.index for reading: No such file or directory

I can't find the file knn.index. Can you provide a copy or tell me how to generate it?

crypto-code commented 1 year ago

Thank you for pointing out the issue. A method to download the knn.index file has been added to the code in the latest commit (28e5e30442e0062a188302cff20bcb5ed0be8450):

# 5. knn
self.knn = knn
if knn:
    import faiss
    # Download knn.index from Hugging Face into knn_dir (if not already
    # present) and load it with FAISS.
    self.index = faiss.read_index(download("https://huggingface.co/csuhan/knn/resolve/main/knn.index", knn_dir))

Do check out the fix and close the issue if resolved. We would appreciate you starring our repo 😊
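For anyone who cannot pull the latest commit, the download-if-missing behavior can be approximated with the standard library. This is only a sketch: `ensure_file` and the paths below are illustrative, not part of the MU-LLaMA API.

```python
import os
import urllib.request


def ensure_file(path: str, url: str) -> str:
    """Return path, downloading it from url first if it does not exist yet."""
    if not os.path.exists(path):
        os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
        print(f"Downloading {url} -> {path}")
        urllib.request.urlretrieve(url, path)
    return path


# Hypothetical usage: fetch knn.index before faiss.read_index is called.
# index_path = ensure_file(
#     "./ckpts/knn.index",
#     "https://huggingface.co/csuhan/knn/resolve/main/knn.index",
# )
# index = faiss.read_index(index_path)
```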

SongYii commented 1 year ago

Thank you very much for solving my problem. I also want to ask, approximately how much memory is needed?

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 23.68 GiB total capacity; 22.90 GiB already allocated; 69.19 MiB free; 22.92 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

shansongliu commented 1 year ago

> Thank you very much for solving my problem. I also want to ask, approximately how much memory is needed?
>
> torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 23.68 GiB total capacity; 22.90 GiB already allocated; 69.19 MiB free; 22.92 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Our experiments were run on a 32GB V100, as described in the README.