Does SpatialScope support distributed training using GPUs from different compute nodes (e.g., 4 GPUs across two nodes, 2 GPUs per node), which is common in a cluster environment (similar to https://pytorch.org/docs/stable/nn.html#module-torch.nn.parallel)? Thanks a lot!
To whom it may concern:
The current training script uses GPUs from a single host node (i.e., all 4 GPUs are on the same machine). Does SpatialScope support distributed training using GPUs from different compute nodes (e.g., 4 GPUs split across two nodes, 2 GPUs per node)? This setup is common in cluster environments (similar to https://pytorch.org/docs/stable/nn.html#module-torch.nn.parallel). Thanks a lot!
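For reference, this is the kind of launch I have in mind — a minimal two-node sketch using PyTorch's `torchrun` launcher, assuming (hypothetically) that the training entry point is a standard DDP script; `train.py`, the hostname, and the port below are placeholders, not actual SpatialScope names:

```shell
# On node 0 (the rendezvous host):
torchrun --nnodes=2 --nproc_per_node=2 --node_rank=0 \
    --rdzv_backend=c10d --rdzv_endpoint=node0.example.com:29500 \
    train.py

# On node 1 (same rendezvous endpoint, different rank):
torchrun --nnodes=2 --nproc_per_node=2 --node_rank=1 \
    --rdzv_backend=c10d --rdzv_endpoint=node0.example.com:29500 \
    train.py
```

With this launcher, each process would see its own `RANK`, `LOCAL_RANK`, and `WORLD_SIZE` environment variables, so the question is whether the SpatialScope training script reads these (or otherwise supports multi-node `DistributedDataParallel`) rather than assuming all GPUs are local.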
Feng