Closed RuizhiZhu closed 1 year ago
Thanks for your interest in our challenge. @robert-graf @FelixSteinbauer can you please have a look?
Hey @RuizhiZhu

If `train_p` and `val_p` add up to 1.0, then the `lengths` parameter is interpreted as the ratios/fractions of the split (e.g. 80 to 20 for `[0.8, 0.2]`). If they do not add up to 1.0, `torch.utils.data.random_split` expects the lengths of the desired output datasets, which must add up to the length of the input dataset (this is the cause of the original error message). Both ways are valid according to the official documentation, so I would not consider this a bug.

Nevertheless, in both scenarios you have to ensure that your `lengths` list adds up either precisely to 1.0 or precisely to the length of the input dataset.
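To illustrate the two valid call forms, here is a minimal sketch on a toy dataset (the fraction form assumes PyTorch >= 1.13; the absolute-length form works on any version):

```python
import torch
from torch.utils.data import random_split

dataset = torch.arange(10)  # toy stand-in for a real dataset of length 10

# Form A (PyTorch >= 1.13): fractions that sum exactly to 1.0
train_a, val_a = random_split(
    dataset, [0.8, 0.2], generator=torch.Generator().manual_seed(0)
)

# Form B (any version): absolute lengths that sum to len(dataset)
train_b, val_b = random_split(
    dataset, [8, 2], generator=torch.Generator().manual_seed(0)
)

print(len(train_a), len(val_a))  # 8 2
print(len(train_b), len(val_b))  # 8 2
```

Anything in between (e.g. `[0.8, 0.3]` or `[8, 3]` for a dataset of length 10) raises the kind of error you saw.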
What did you use as `lengths` that resulted in the above error message?
Glad to receive your reply! @FelixSteinbauer

I didn't make any changes to the code before getting the above error message. I compared the official documentation with the local code, and I think the problem is caused by my lower PyTorch version. The following local code shows that it does not support splitting data using ratios.
This seems to be a versioning issue then. Support for ratios in `random_split` was added in pytorch=1.13 (compare to pytorch=1.12). I assume your PyTorch version was/is below 1.13? (Use `pip show torch | grep Version` to check.)

In our environment.yml we specified `pytorch=2.0.0` (on cuda 11.7). I myself tested with pytorch 2.1.0 and cuda 12.1. I suppose other participants might run into the same problem though. I will add a comment that references this issue.
Hello, when I run `train_Pix2Pix3D.py`, I encounter some problems. The error statement is the following. After consulting the `torch.utils.data.random_split` function, I found that the `lengths` parameter requires specific numbers rather than ratios. So I made the following changes to the code, and then the code works fine. I am not sure if this is a bug?