sabagh1994 / BEDwARS

BEDwARS is a Bayesian approach to deconvolving bulk expression profiles using noisy reference expression profiles (“signatures”) of the constituent cell types.

How to run the code on a macOS system? #2

Open · stephanienguyen2020 opened this issue 1 year ago

stephanienguyen2020 commented 1 year ago

Hi, I'm using a Mac M1 laptop, which doesn't have a CUDA-capable GPU. In Curriculum.py and Inference.py, there are snippets of code that import CUDA/CuPy, which require a GPU my laptop doesn't support. I have come up with a few solutions:

1. Modify the code so it no longer requires CUDA/CuPy. For example, here is what I changed in Inference.py (a device-fallback sketch also follows after this list):

    import torch
    import numpy as np

    dev = "cpu"  # force CPU usage
    device = torch.device(dev)
    tch_dtype = torch.float32
    use_cupy_sum = True

    if use_cupy_sum:
        # dlpack imports kept from the original CuPy code path (unused on CPU)
        from torch.utils.dlpack import to_dlpack
        from torch.utils.dlpack import from_dlpack

        def torch_sum(input, dim, keepdim):
            if input.dtype == torch.bool:
                return torch.sum(input, dim=dim, keepdim=keepdim)
            else:
                input_cpu = input.to(device)  # move the tensor to the CPU device
                torch_sum_result = torch.sum(input_cpu, dim=dim, keepdim=keepdim)
                return torch_sum_result
    else:
        torch_sum = torch.sum

    np.set_printoptions(precision=2, suppress=True)

However, even with this change I still couldn't run the program; my log-infer file recorded this error:

[Screenshot: error output from log-infer, dated 2023-10-18]
2. Keep the code intact and run it in a Jupyter Notebook/Google Colab. However, I'm not sure how to combine the files/folders to run the whole program there, given the complexity of the program and the need to download datasets, etc.
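Regarding item 1, a more portable way to pick the device on Apple Silicon is to fall back to PyTorch's MPS backend before the CPU. A minimal sketch (the fallback order and variable names are assumptions, not BEDwARS code):

    import torch

    # Prefer CUDA, then Apple's Metal (MPS) backend, then CPU.
    # MPS is available on Apple Silicon with PyTorch >= 1.12.
    if torch.cuda.is_available():
        dev = "cuda"
    elif getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
        dev = "mps"
    else:
        dev = "cpu"
    device = torch.device(dev)
    tch_dtype = torch.float32

Note that not every PyTorch operation is implemented on MPS, so some code paths may still need a CPU fallback.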

Thank you for your help!

sabagh1994 commented 1 year ago

Thanks for asking. I will get back to you later with an answer to item 1. For now, I highly recommend using a GPU for BEDwARS, as it can take very long to run on a CPU, depending on the size of your data and the number of random-walk steps. I have not used Google Colab. Could you please elaborate on the issue you are facing? If you can store the files generated by BEDwARS on Google Colab, then you can easily run inference.py on it.
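For reference, a minimal Colab sketch of that workflow. The clone URL follows from the repository name above, but the requirements file and the inference.py invocation are assumptions, not confirmed by this thread:

    # Run in a Colab notebook cell (sketch only)
    from google.colab import drive
    drive.mount('/content/drive')  # persist generated files on Google Drive

    !git clone https://github.com/sabagh1994/BEDwARS.git
    %cd BEDwARS
    !pip install -r requirements.txt  # assumes the repo ships a requirements file

    # Hypothetical invocation; the actual arguments come from the repo's README
    !python inference.py

Writing BEDwARS outputs under /content/drive keeps them available after the Colab session ends.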