cywuuuu opened 2 days ago
My machine has 500 GB of RAM, and I have not observed any surge in RAM usage when running the code (or maybe everything happened too fast?). If anyone could help me understand the exact cause, I would greatly appreciate it.
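One stdlib-only way to confirm whether memory actually spiked before the crash (a sketch I am adding for illustration, not part of the original report) is to log the process's peak resident set size around the suspect calls:

```python
import resource


def peak_rss_mib() -> float:
    """Peak resident set size of this process so far.

    On Linux, ru_maxrss is reported in kilobytes.
    """
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024


# Print this before and after the faiss calls; if the process dies with
# a segfault rather than an OOM kill, the peak typically stays far
# below the available RAM.
print(f"peak RSS: {peak_rss_mib():.1f} MiB")
```

A genuine out-of-memory kill would normally show up in `dmesg` as well, which is another way to rule it out.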
hi @cywuuuu, I just re-ran the code you provided with feat_size = 1024. The code runs fine. You probably want to isolate the issue from the rest of your code to see where the seg_fault happened.
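One stdlib-only way to do that isolation (a sketch; the actual VQRec script is not shown in this thread) is to enable Python's faulthandler, which dumps the Python-level traceback when the process receives a fatal signal:

```python
import faulthandler

# Dump the traceback of every Python thread to stderr when a fatal
# signal (SIGSEGV, SIGABRT, ...) arrives, so the crash can be tied to
# a specific Python call even though it happens inside native code.
faulthandler.enable()

# ... run the faiss code here; on a segfault the dump will point at
# the last Python frame that entered the C++ library.
```

Running with `python -X faulthandler script.py` achieves the same thing without editing the script.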
Well, it is quite strange, because I do hit that issue when re-running the code. My details are below. (The segfault core-dump message is in Chinese, 已放弃 (核心已转储), i.e. "Aborted (core dumped)", sorry for that; I am using the Chinese version of Ubuntu.)
And the following are the packages I am using:
I am curious why this happens. Hopefully you can help me out; I would greatly appreciate it. Thanks @junjieqi
@cywuuuu thanks for providing additional information. I'm wondering why you are using faiss-gpu, since I saw you initialize res = faiss.StandardGpuResources(), but that variable never gets used. Do you want to remove it first and re-run the code?
Summary
I am trying to run a baseline model (VQRec, pq.py) and I get
munmap_chunk(): invalid pointer
segmentation fault (core dumped)
when feat_size is bigger than 1024, but when feat_size = 768 it is OK. I minimized the code below so it can help with reproducing the issue. (I am new to faiss, but I would like to know how to fix it, thanks.)
Platform
OS: Ubuntu 20.04
Faiss version: faiss 1.9.0
Installed from: anaconda
Faiss compilation options:
Running on:
Interface:
Reproduction instructions
Just run the code with different values of feat_size.