Hi
As I mentioned on Wednesday, we are getting a CUTENSOR_STATUS_NOT_SUPPORTED error from lbann.Tessellate (and also from lbann.MultiDimReduction, as mentioned previously in #2429) when using distconv.
I found that the issue only occurs when the mini-batch size is 1 (or, I suspect, whenever the mini-batch size is smaller than the number of GPUs). I've put the relevant lines from the cuTENSOR log below.
The last entry in the "extent" array, which appears to be the batch dimension, is split across the GPUs (I expected it to be replicated). With a single sample, this leaves process 0 with size 1 in the batch dimension and process 1 with size 0.
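For reference, the zero-sized extent is exactly what even block partitioning produces in this case. This is a hypothetical sketch (`local_extent` is my own helper, not DistConv or LBANN code), assuming the batch dimension is split as evenly as possible across ranks:

```python
def local_extent(n, p, rank):
    """Extent of a dimension of size n on `rank` when it is
    block-partitioned as evenly as possible across p ranks."""
    base, rem = divmod(n, p)
    return base + (1 if rank < rem else 0)

# Mini-batch size 1 on 2 GPUs: rank 0 gets extent 1, rank 1 gets extent 0,
# and a zero extent is presumably what cuTENSOR then rejects with
# CUTENSOR_STATUS_NOT_SUPPORTED.
print([local_extent(1, 2, r) for r in range(2)])  # [1, 0]
```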
Josh