cantinilab / scPRINT

Single-cell foundation model for gene network inference and more
https://cantinilab.github.io/scPRINT/
MIT License

scPRINT is hanging forever #3

Closed: jkobject closed this issue 2 weeks ago

jkobject commented 1 month ago

I downloaded the checkpoints from Hugging Face and loaded them. I am up to the embedder step in this tutorial: https://github.com/jkobject/scPRINT/blob/main/docs/notebooks/cancer_usecase.ipynb

I first ran

adata, metrics = embedder(model, adata, cache=False, output_expression="none")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Embedder.__call__() got an unexpected keyword argument 'output_expression'

Then I ran it with the "output_expression" parameter removed. However, it stops and my Python terminal quits automatically. (I am running Python interactively inside a conda env.) I am wondering if this is a memory issue (currently using 1 GPU with 128GB). Should I try increasing the memory?

adata, metrics = embedder(model, adata, cache=False)
0%|                                                           | 0/1304 [00:00<?, ?it/s] 
(the Python terminal quits here)

Originally posted by @kavithakrishna1 in https://github.com/jkobject/scPRINT/issues/9#issuecomment-2309345049

jkobject commented 1 month ago

The code the model is running is FlashAttention-2. It is not a dependency but part of the model; scPRINT does it through Triton. I have never tested scPRINT on CUDA 11.4. So you would have to use pdb and check whether the model.predict() function gets called within the Embedder class. Also, can you check whether the GPU memory gets used?

Finally, to test it, you should set the input context to 200 and the minibatch size to 1 to check what happens; maybe it is not using the GPU. (These are parameters of the Embedder class, e.g. embedder = Embedder(batch_size=1, num_workers=1, max_len=200).) Also, maybe use an adata of only a couple of cells.

To make sure that this is due to Triton, you can run the model with regular attention by doing: model = scPrint.load_from_checkpoint(ckpt_path, precpt_gene_emb=None, transformer="normal")
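
Putting these suggestions together, a minimal debugging sketch could look like the one below (the import paths, ckpt_path, and adata are assumptions based on the tutorial notebook, not something verified in this thread):

# minimal debugging sketch, assuming the tutorial's imports and that ckpt_path
# points to the downloaded checkpoint while adata is already loaded in memory
from scprint import scPrint
from scprint.tasks import Embedder

# fall back to regular (non-flash) attention to rule out the Triton kernel
model = scPrint.load_from_checkpoint(
    ckpt_path, precpt_gene_emb=None, transformer="normal"
)

# tiny context, single-cell minibatches, and only a handful of cells
small = adata[:4].copy()
embedder = Embedder(batch_size=1, num_workers=1, max_len=200)
small, metrics = embedder(model, small, cache=False)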

kavithakrishna1 commented 1 month ago

Hi @jkobject

It seems like the GPU is being used, according to the output from calling Embedder.

>>> embedder = Embedder( 
...                     # can work on random genes or most variables etc..
...                     how="random expr", 
...                     # number of genes to use
...                     max_len=4000, 
...                     add_zero_genes=0, 
...                     # for the dataloading
...                     num_workers=8, 
...                     # we will only use the cell type embedding here.
...                     pred_embedding = ["cell_type_ontology_term_id"]
...                     )#, "disease_ontology_term_id"])
Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs

I also just ran pip install --upgrade scprint and tried the output_expression parameter again, but it still didn't work. I also tried pip install scprint[dev].

adata, metrics = embedder(model, adata, cache=False, output_expression="none")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Embedder.__call__() got an unexpected keyword argument 'output_expression'

kavithakrishna1 commented 1 month ago

OK, I ran it with pdb, and the error message is /tmp/tmpitjn_6yl/main.c:6:23: fatal error: stdatomic.h: No such file or directory. I am not sure where this is coming from. Any ideas?

(Pdb) embedder = Embedder(how="random expr", batch_size=1, max_len=200, add_zero_genes=0, num_workers=1, pred_embedding = ["cell_type_ontology_term_id"])
Using 16bit Automatic Mixed Precision (AMP)
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
HPU available: False, using: 0 HPUs
(Pdb) adata, metrics = embedder(model, adata, cache=False)
  0%|                                                       | 0/83451 [00:00<?, ?it/s]
/tmp/tmpitjn_6yl/main.c:6:23: fatal error: stdatomic.h: No such file or directory
 #include <stdatomic.h>
                       ^
compilation terminated.
  0%|                                                       | 0/83451 [00:00<?, ?it/s]         
*** subprocess.CalledProcessError: Command '['/bin/gcc', '/tmp/tmpitjn_6yl/main.c', '-O3', '-shared', '-fPIC', '-o', '/tmp/tmpitjn_6yl/cuda_utils.cpython-310-x86_64-linux-gnu.so', '-lcuda', '-L/directflow/SCCGGroupShare/projects/kavkri/.conda/envs/scprint-env/lib/python3.10/site-packages/triton/backends/nvidia/lib', '-L/lib64', '-L/lib', '-I/directflow/SCCGGroupShare/projects/kavkri/.conda/envs/scprint-env/lib/python3.10/site-packages/triton/backends/nvidia/include', '-I/tmp/tmpitjn_6yl', '-I/directflow/SCCGGroupShare/projects/kavkri/.conda/envs/scprint-env/include/python3.10']' returned non-zero exit status 1.                                         

jkobject commented 1 month ago

For the first message: this is because output_expression is now part of the class __init__ function, so you need to do

>>> embedder = Embedder( 
...                     # can work on random genes or most variables etc..
...                     how="random expr", 
...                     # number of genes to use
...                     max_len=4000, 
...                     add_zero_genes=0, 
...                     # for the dataloading
...                     num_workers=8, 
...                     # we will only use the cell type embedding here.
...                     pred_embedding = ["cell_type_ontology_term_id"],
...                     output_expression="none"  # default value, can be dropped
...                     )
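
With output_expression moved into the constructor, the call itself no longer takes that keyword and goes back to:

adata, metrics = embedder(model, adata, cache=False)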

jkobject commented 1 month ago

For the second part, the error is quite cryptic. Seeing "GPU available: True (cuda), used: True" just means the model will try to use the GPU, not that it succeeded in doing so. Commands like nvtop will show you GPU usage in real time.
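
If you want to check from inside the same Python/pdb session instead, a quick sketch (assuming PyTorch, which scPRINT runs on, is importable) is:

# print CUDA memory counters before/after (or during, at a pdb breakpoint)
# the embedder call; numbers staying near zero suggest the GPU is not really used
import torch

if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print(f"allocated: {torch.cuda.memory_allocated() / 1e9:.2f} GB")
    print(f"reserved:  {torch.cuda.memory_reserved() / 1e9:.2f} GB")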

By using pdb, I wanted to know at which line exactly in the Embedder's __call__ function the code hangs.

Have you tried running a test version without flash attention, as I mentioned in my second comment?

Seeing your new error (the subprocess.CalledProcessError from the /bin/gcc command, quoted in full above, returning non-zero exit status 1), my guess is that you have a problem with your PyTorch / GPU / CUDA installation and that it is not related to scPRINT, but I might be wrong. Have you used PyTorch with your GPU before?
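
One way to narrow this down, independent of scPRINT: the failing command is Triton's JIT shelling out to gcc, so a standalone sketch like the one below (a hypothetical check, not part of any library) reproduces just that compile step with a file that includes <stdatomic.h>:

# try to compile a tiny C file that includes <stdatomic.h> with the same gcc
# Triton invokes; if this fails with the same error, the problem is the
# host/conda toolchain rather than scPRINT or Triton itself
import os, subprocess, tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "check.c")
    with open(src, "w") as f:
        f.write("#include <stdatomic.h>\nint main(void) { atomic_int x = 0; return x; }\n")
    res = subprocess.run(
        ["gcc", src, "-O3", "-shared", "-fPIC", "-o", os.path.join(tmp, "check.so")],
        capture_output=True, text=True,
    )
    print(res.returncode)
    print(res.stderr)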

jkobject commented 3 weeks ago

I will mark this issue as closed since I received no replies in the past month.

fantashi099 commented 2 days ago

Hi jkobject,

I am facing the same issue with the embedder here:

adata, metrics = embedder(model, adata, cache=False)
0%|                                                           | 0/1304 [00:00<?, ?it/s] 
(I cannot quit the Python terminal; I can only kill the Python process)

I believe it comes from Triton with flash attention, because the process gets stuck or dies after running this:

transformer_output = self.transformer(
    encoding,
    return_qkv=get_attention_layer,
    bias=bias if self.attn_bias != "none" else None,
    bias_layer=list(range(self.nlayers - 1)),
)

I confirm that I can run the embedder normally with transformer="normal". For some reason I can only use CUDA driver 11.7, so my PyTorch version is 2.0.1-cu11.7 with Triton 2.0.0.