Closed DanLesman closed 9 months ago
This was a simple fix; I added the code below to the `train_auto_encoder` function:

```python
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = model.to(device)
```
Hi, adding that to the `train_auto_encoder` function causes an error:

```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_addmm)
```
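That error usually means the model was moved to the GPU but the input batches were not. A minimal sketch of the fix (the model here is a hypothetical stand-in, not the actual CellOT autoencoder):

```python
import torch
import torch.nn as nn

# Pick the device once, then move BOTH the model and every batch to it.
# The RuntimeError above happens when the model lives on cuda:0 while
# the input tensors are still on the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Toy autoencoder as a placeholder for the real model.
model = nn.Sequential(nn.Linear(50, 8), nn.ReLU(), nn.Linear(8, 50)).to(device)

batch = torch.randn(16, 50)   # a batch as it comes off the DataLoader (on CPU)
batch = batch.to(device)      # must be moved inside the training loop too
recon = model(batch)          # both operands now live on the same device
```

So in `train_auto_encoder`, each batch pulled from the loader would also need a `.to(device)` call, not just the model.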
I was hoping to use CellOT on full scRNA-seq data and was wondering what the training times for that should look like, and whether there is any way to accelerate training. I'm currently running scGen to get the autoencoder embeddings, and I'm getting predicted runtimes of 594 hrs on 1 GPU for 20k genes in 3k cells, versus 8 hrs for 1k genes in 3k cells.
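Given how steeply those runtimes grow with gene count (8 hrs at 1k genes vs 594 hrs at 20k), one common workaround is to train on a subset of highly variable genes rather than the full matrix. This is only a minimal NumPy sketch of the idea on a toy matrix; in practice a tool like scanpy's `highly_variable_genes` would be the usual choice:

```python
import numpy as np

# Toy cells x genes count matrix standing in for the real scRNA-seq data.
rng = np.random.default_rng(0)
X = rng.poisson(2.0, size=(300, 2000)).astype(float)

# Keep only the top-variance genes to shrink the autoencoder's input
# dimension (and hence training time).
n_top = 100
variances = X.var(axis=0)
top_genes = np.argsort(variances)[-n_top:]  # indices of most variable genes
X_small = X[:, top_genes]                   # reduced matrix for training
```

Whether 1k highly variable genes preserve enough signal for a given perturbation task is an empirical question, but it is the standard first lever for this kind of runtime.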
Thank you!