YangLabHKUST / SpatialScope

A unified approach for integrating spatial and single-cell transcriptomics data by leveraging deep generative models
https://spatialscope-tutorial.readthedocs.io/en/latest/
GNU General Public License v3.0

Does SpatialScope support distributed training using GPUs from multiple nodes? #7

Open dbxmcf opened 5 months ago

dbxmcf commented 5 months ago

To whom it may concern:

The current training script uses GPUs from a single host node (i.e., all 4 GPUs are on the same machine):

python ./src/Train_scRef.py \
--ckpt_path ./Ckpts_scRefs/Heart_D2 \
--scRef ./Ckpts_scRefs/Heart_D2/Ref_Heart_sanger_D2.h5ad \
--cell_class_column cell_type \
--gpus 0,1,2,3

Does SpatialScope support distributed training using GPUs spread across multiple compute nodes (e.g., 4 GPUs split over two nodes, 2 GPUs per node), which is common in a cluster environment (similar to https://pytorch.org/docs/stable/nn.html#module-torch.nn.parallel)? Thanks a lot!
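For context, multi-node training in PyTorch is typically driven by launching one `torchrun` invocation per node. The sketch below is purely illustrative of that pattern, not something SpatialScope currently documents: it assumes `Train_scRef.py` were ported to `torch.distributed` / DistributedDataParallel, whereas the current `--gpus` flag suggests a single-node setup. The hostname `node0.example.com` and port are placeholders.

```shell
# Hypothetical two-node launch (2 GPUs per node, 4 processes total).
# Run this same command on BOTH nodes, changing --node_rank to 0 or 1.
# NOTE: assumes Train_scRef.py were adapted to torch.distributed/DDP;
# the script as shipped (with its --gpus flag) does not appear to support this.
torchrun \
  --nnodes=2 \
  --nproc_per_node=2 \
  --node_rank=0 \
  --rdzv_backend=c10d \
  --rdzv_endpoint=node0.example.com:29500 \
  ./src/Train_scRef.py \
  --ckpt_path ./Ckpts_scRefs/Heart_D2 \
  --scRef ./Ckpts_scRefs/Heart_D2/Ref_Heart_sanger_D2.h5ad \
  --cell_class_column cell_type
```

Under this pattern, each spawned process would call `torch.distributed.init_process_group(backend="nccl")` and pin itself to the GPU given by the `LOCAL_RANK` environment variable that `torchrun` sets.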

Feng