aertslab / pySCENIC

pySCENIC is a lightning-fast python implementation of the SCENIC pipeline (Single-Cell rEgulatory Network Inference and Clustering) which enables biologists to infer transcription factors, gene regulatory networks and cell types from single-cell RNA-seq data.
http://scenic.aertslab.org
GNU General Public License v3.0

[BUG] binary_mtx, auc_thresholds = binarize(auc_mtx, num_workers=15) raises an error #540

Open honghh2018 opened 4 months ago

honghh2018 commented 4 months ago

Describe the bug
When I try to binarize the AUC matrix in the final pySCENIC step, it fails with:

BLAS : Program is Terminated. Because you tried to allocate too many memory regions.
BLAS : Program is Terminated. Because you tried to allocate too many memory regions.
(the line above is repeated many times)
...

Command used:

(pyscenic_20220616) [hhhong@node02 test_data]$ python test1.py ../PySCENIC/result/monocyte_macrophage_auc_mtx.csv

BLAS : Program is Terminated. Because you tried to allocate too many memory regions.
(repeated many times)
^C
Process ForkPoolWorker-9:
Process ForkPoolWorker-4:
Process ForkPoolWorker-11:
Process ForkPoolWorker-2:
Process ForkPoolWorker-22:
Process ForkPoolWorker-15:
Process ForkPoolWorker-24:
Process ForkPoolWorker-23:
Process ForkPoolWorker-27:
Process ForkPoolWorker-26:
Process ForkPoolWorker-10:
Process ForkPoolWorker-25:
Traceback (most recent call last):
  File "/share/apps/miniconda3/envs/pyscenic_20220616/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/share/apps/miniconda3/envs/pyscenic_20220616/lib/python3.8/multiprocessing/process.py", line 108, in run

Steps to reproduce the behavior

  1. Command run when the error occurred:

name = 'test'
binary_mtx, auc_thresholds = binarize(auc_mtx, num_workers=15)
binary_mtx.to_csv(os.path.join(result, '{}_auc_mtx_binary.csv'.format(name)))

Full script:

import os, sys, glob, argparse, re, pickle

# Use the GNU threading layer for MKL (common workaround for MKL/libgomp conflicts when forking).
os.environ['MKL_THREADING_LAYER'] = 'GNU'
#sys.path.remove("/home/honghh/.local/lib/python3.7/site-packages")
import scanpy as sc
import pandas as pd
from pyscenic.cli.utils import load_signatures
from pyscenic.export import export2loom, add_scenic_metadata
from pyscenic.utils import load_motifs
from pyscenic.transform import df2regulons
from pyscenic.aucell import aucell
from pyscenic.binarization import binarize
from pyscenic.rss import regulon_specificity_scores
from pyscenic.plotting import plot_binarization, plot_rss
from IPython.display import HTML, display
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib as mpl
import loompy as lp

# Load the AUCell matrix (cells x regulons).
# index_col=0 keeps the cell identifiers out of the numeric data (not in the original script).
auc_matrix_path = sys.argv[1]
auc_mtx = pd.read_csv(auc_matrix_path, index_col=0)

# auc_mtx.to_csv(os.path.join(result,'{}_auc_mtx.csv'.format(name)))
#
name = 'test'
result = '.'  # output directory; not defined in the original snippet, assumed here

# Binarize the AUCell scores; this is the call that triggers the BLAS error.
binary_mtx, auc_thresholds = binarize(auc_mtx, num_workers=15)
binary_mtx.to_csv(os.path.join(result, '{}_auc_mtx_binary.csv'.format(name)))

...
  2. Error encountered:
    ...

Expected behavior
How can this error be fixed? It is strange that the binarize step runs fine on a different CentOS 7.9 machine.
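
One way to see whether the two machines differ is to inspect which BLAS/OpenMP libraries are loaded and how many threads they are configured to use; a minimal sketch using the third-party threadpoolctl package (not part of the original report):

import numpy as np  # importing numpy loads the BLAS library
from threadpoolctl import threadpool_info

# Print the BLAS/OpenMP libraries in use, their versions and thread counts.
# These often differ between machines and can explain why the OpenBLAS
# "too many memory regions" error appears on only one of them.
for lib in threadpool_info():
    print(lib)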

ghuls commented 4 months ago

Run with fewer workers, or on a node that has more RAM available.
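
For reference, a minimal sketch of that workaround, assuming the OpenBLAS error comes from each worker process spawning its own set of BLAS threads: cap the per-process BLAS threads via environment variables before NumPy/pandas are imported, and pass a smaller num_workers to binarize. The file paths and the worker count below are illustrative, not from the original report.

import os

# Limit BLAS/OpenMP threads per process *before* numpy/pandas are imported,
# so each ForkPoolWorker does not allocate its own pool of BLAS memory regions.
os.environ['OPENBLAS_NUM_THREADS'] = '1'
os.environ['OMP_NUM_THREADS'] = '1'
os.environ['MKL_NUM_THREADS'] = '1'

import pandas as pd
from pyscenic.binarization import binarize

# Cells x regulons AUCell matrix; path is illustrative.
auc_mtx = pd.read_csv('monocyte_macrophage_auc_mtx.csv', index_col=0)

# Fewer workers than the original 15; tune to the RAM available on the node.
binary_mtx, auc_thresholds = binarize(auc_mtx, num_workers=4)
binary_mtx.to_csv('test_auc_mtx_binary.csv')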