YiChuangDai opened this issue 1 year ago
To ensure a unified implementation of the different methods, our repo first transforms the model description into a PyTorch model before computing the relevant scores. As a result, more content is cached during the search process than DeepMAD actually needs. The simplest way to reduce memory usage is to change "nproc=64" to "nproc=32/16/8" in dist_search.sh, at the cost of a longer search time.
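For example, a one-line edit like the following would apply that change. This is a sketch: it assumes dist_search.sh sets the worker count via a plain `nproc=64` assignment, which may differ from the script's actual layout.

```shell
# Halve (or quarter) the number of parallel search workers to cut peak memory.
# Assumes dist_search.sh contains a line of the form "nproc=64" (hypothetical layout).
sed -i 's/^nproc=64$/nproc=16/' tools/dist_search.sh
```

Fewer workers means fewer PyTorch models held in memory at once, so peak usage drops roughly in proportion, while wall-clock search time grows accordingly.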
The paper says the search can run on a CPU with a small memory budget, but when I run
sh tools/dist_search.sh configs/classification/deepmad_29M_224.py
32 GB of memory is consumed within 10 seconds and the job is aborted.