I understand how the nnunetv2 framework handles CPU inference, which is very convenient. But it seems the nnunetv1 code does not let you set the device directly. Do you know how to run CPU inference during the model's inference phase and measure the inference time?
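To make the question concrete, here is a minimal PyTorch sketch of the behaviour I am after (this is not the nnunetv1 API, just a generic illustration with a stand-in network): moving a model and input to the CPU, running a forward pass, and timing it with `time.perf_counter`.

```python
import time
import torch

# Stand-in 3D network (hypothetical, not nnunetv1's architecture),
# placed explicitly on the CPU in evaluation mode.
model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1).to("cpu").eval()

# Dummy 3D patch on the CPU.
x = torch.randn(1, 1, 32, 32, 32)

with torch.no_grad():
    start = time.perf_counter()
    y = model(x)
    elapsed = time.perf_counter() - start

print(y.device.type)  # cpu
print(elapsed > 0)    # True
```

The question is essentially how to achieve the equivalent of this device placement and timing inside nnunetv1's prediction pipeline, which does not expose a device argument directly.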