haikunhu opened this issue 2 months ago
https://stackoverflow.com/questions/75619847/python-sklearn-openblas-error-for-kmeans
Try to add export OPENBLAS_NUM_THREADS=1
at the top of the shell script.
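For reference, a minimal sketch of what the top of the generated .sh script could look like after the edit (the comment and placeholder line below are illustrative, not taken from your actual script):

```bash
#!/bin/bash
# Cap OpenBLAS at a single thread before anything else runs,
# so the precompiled NUM_THREADS limit is never exceeded.
export OPENBLAS_NUM_THREADS=1

# ...the rest of the script generated by multi_rna stays unchanged...
```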
Thank you so much, this solution worked. However, I ran into another error when mapping other samples from the same batch of data. All the samples were prepared with the same kit (Singleron Biotechnologies, 4180011), and the .sh scripts were generated by the same multi_RNA command. I'd appreciate your help with this one as well.
Exit status 102 usually means an error related to memory or disk. https://github.com/alexdobin/STAR/issues/1236
I think the real STAR error message should be a little above this screenshot.
Add the parameter --STAR_param "--limitBAMsortRAM 20000000000" to multi_rna.
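For illustration, a hedged sketch of how the extra parameter could be appended to an existing multi_rna call; the mapfile path, genome directory, and thread count below are placeholders standing in for whatever options your original command already uses:

```bash
# Example only: keep your existing multi_rna options and append --STAR_param.
# The paths and thread count are placeholders, not values from this issue.
multi_rna \
    --mapfile ./sample.mapfile \
    --genomeDir /path/to/genome \
    --thread 8 \
    --STAR_param "--limitBAMsortRAM 20000000000"
```

The value 20000000000 (about 20 GB) raises the RAM limit STAR is allowed to use for BAM sorting, which is what exit status 102 typically points to.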
Problems solved! Thank you so much and you can close this issue.
When I run the .sh script generated by multi_RNA, the following error occurs:

OpenBLAS warning: precompiled NUM_THREADS exceeded, adding auxiliary array for thread metadata.
To avoid this warning, please rebuild your copy of OpenBLAS with a larger NUM_THREADS setting or set the environment variable OPENBLAS_NUM_THREADS to 64 or lower
Segmentation fault (core dumped)

Some samples in the same batch of data can be mapped successfully, while others fail with the error above. How can I fix this?

Version
celescope 2.0.7