Lan-lab / SIGNET


Analysis taking long time #7

Open bvaldebenitom opened 1 year ago

bvaldebenitom commented 1 year ago

Hi!

I've been running some analyses, and they seem to be stuck at the NTF-Training stage. Here is the output I get:

data loaded!
Binarization Begin!
Binarization Completed!
NTF-Training Begin!
scanpy/preprocessing/_normalization.py:170: UserWarning: Received a view of an AnnData. Making a copy.
  view_to_actual(adata)
Signet_continue.py:279: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  labels = torch.tensor(labels, dtype=torch.long)
Signet_continue.py:283: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  labels = torch.tensor(labels, dtype=torch.long)
Signet_continue.py:284: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  inputs = torch.tensor(inputs, dtype=torch.float32)
Signet_continue.py:306: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  im = torch.tensor(im, dtype=torch.float32)
Signet_continue.py:327: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  test = torch.tensor(test, dtype=torch.float32)
Signet_continue.py:334: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
  torch.argmax(model(torch.tensor(torch.from_numpy(data_tf_binary_train), dtype=torch.float32)), dim=1),

Is there any way to speed up the analysis and/or know whether it is actually progressing? During NTF-Training, nothing seems to happen so far.

I already ran this on another dataset with success.
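(As an aside, the repeated `UserWarning` lines in the log above are harmless; each one names its own fix. A minimal sketch of the warning-free pattern PyTorch recommends, using an illustrative `labels` tensor rather than SIGNET's actual variables:)

```python
import torch

# An existing tensor, standing in for the `labels` seen in the log.
labels = torch.tensor([0, 1, 1])

# This line would trigger the UserWarning (copy-constructing from a tensor):
# labels = torch.tensor(labels, dtype=torch.long)

# Warning-free equivalent, as suggested by the warning message itself:
quiet = labels.clone().detach().to(dtype=torch.long)

print(torch.equal(labels, quiet))  # True: same values, no warning
```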

luoqh17 commented 1 year ago

Yes, it is progressing. The MLPs designed for each NTF in this program are independent, so to accelerate the process the for loop can be rewritten as parallel operations, or GPU training can be used. I will provide a few modified versions soon. You can also modify the for loops in SIGNET.py yourself. If time is a critical factor, consider reducing the number of NTFs first.
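A minimal sketch of the idea: because each NTF gets its own independent MLP, each iteration of the loop can be moved to the GPU when one is available. All names here (`make_mlp`, `train_one`, the toy data shapes) are illustrative assumptions, not SIGNET's actual API:

```python
import torch
import torch.nn as nn

# Use the GPU when available; otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def make_mlp(n_in, n_hidden, n_out):
    # A small MLP standing in for the per-NTF classifier.
    return nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU(),
                         nn.Linear(n_hidden, n_out))

def train_one(model, x, y, epochs=5, lr=1e-3):
    # Move both model and data to the chosen device before training.
    model = model.to(device)
    x, y = x.to(device), y.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model

# Toy data standing in for one NTF's binarized expression matrix.
x = torch.rand(64, 20)
y = torch.randint(0, 2, (64,))

# The independent per-NTF loop: each iteration could also be dispatched
# to a worker process or a separate CUDA stream, since no iteration
# depends on another.
models = [train_one(make_mlp(20, 16, 2), x, y) for _ in range(3)]
print(len(models))  # 3: one trained MLP per NTF
```

Because the iterations share no state, the list comprehension above is also a natural target for `torch.multiprocessing` or a joblib-style pool if multiple GPUs or CPU cores are available.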