Closed: urbste closed this issue 11 months ago
I think it's due to autocast not wanting to mix bfloat16 and float16. The `.half()` call was, I believe, only there to save memory. If you don't have memory issues, float should be fine; it's probably also fine not to cast at all, since `linspace` defaults to float32 anyway.
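A minimal sketch of both points (tensor names and shapes here are illustrative, not from the RoMa code): `torch.linspace` already produces float32, and a tensor previously downcast with `.half()` can be brought back with `.float()` so all dtypes stay consistent under autocast:

```python
import torch

# torch.linspace produces float32 by default, so no explicit cast is needed
coords = torch.linspace(-1.0, 1.0, steps=8)
print(coords.dtype)  # torch.float32

# If a tensor was earlier cast with .half() (float16) to save memory,
# casting it back to float32 keeps every dtype in the autocast region consistent
feats = torch.randn(4, 8).half()
feats = feats.float()
print(feats.dtype)  # torch.float32
```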
By the way, the reason for bfloat16 is that it's more stable during training, but I think I should make it optional, since float16 can be faster at inference.
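Making the autocast dtype configurable, as suggested above, could be sketched like this (the function name is hypothetical; `device_type="cpu"` is used only so the snippet runs anywhere, since float16 autocast is CUDA-only):

```python
import torch

def run_matmul(a, b, amp_dtype=torch.bfloat16):
    # torch.autocast takes a dtype argument, so bfloat16 (more stable for
    # training) and float16 (often faster at inference on GPU) can be
    # swapped without touching the model code itself.
    with torch.autocast(device_type="cpu", dtype=amp_dtype):
        return a @ b

a = torch.randn(4, 4)
b = torch.randn(4, 4)
out = run_matmul(a, b)
print(out.dtype)  # matmul is autocast to bfloat16 inside the region
```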
I will close this issue. I brought it up in case someone else runs into the same problem. Thanks for the response :)
Thanks for bringing it up! I'll keep it in mind when cleaning up the code :)
Hello, I have the same problem. It still does not work after modifying it according to your method; how should I proceed? `Unexpected floating ScalarType in at::autocast::prioritize`
You could try explicitly casting everything to float32? What's your PyTorch/CUDA version?
python==3.7 pytorch==1.10.2 cuda==11.3
Thanks @urbste this saved me from searching for hours what the issue is ^^.
As many people seem to be having this issue, I'll try to fix it.
@mpizenberg @LooperzZ , are your issues resolved with https://github.com/Parskatt/RoMa/pull/3 ?
This is my diff
Ah sorry, I misinterpreted the question. I can check #3 and let you know.
Indeed, I don’t have this issue anymore in that branch so I’d say it’s solved.
Thanks very much! By removing the half precision from local_window_coords and converting feature1 to float using `.float()`, the `Unexpected floating ScalarType in at::autocast::prioritize` issue is resolved. System/Env:
PS: I have already tried the #3 branch, and the issue no longer occurs.
I was just trying out the demo_fundamental.py demo and ran into the following error:
It occurs in the grid_sampler here (local_correlation.py, line 40)
It can be solved by removing the half precision from local_window_coords and converting feature1 to float with `.float()`. However, I am not sure what implications that might have.
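A hedged sketch of that fix (the names `local_window_coords` and `feature1` are from the thread, but the shapes here are illustrative): keep the sampling grid in float32 instead of calling `.half()` on it, and cast the feature map with `.float()` before `grid_sample`, since `grid_sample` expects its input and grid to share a dtype:

```python
import torch
import torch.nn.functional as F

# Illustrative shapes; in RoMa these come from local_correlation.py
feature1 = torch.randn(1, 16, 8, 8).half()             # stored as float16
local_window_coords = torch.rand(1, 8, 8, 2) * 2 - 1   # float32 grid in [-1, 1]

# The fix: no .half() on the grid, and cast the features up to float32
sampled = F.grid_sample(
    feature1.float(),        # float32 input
    local_window_coords,     # float32 grid
    align_corners=False,
)
print(sampled.dtype)  # torch.float32
```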
System/Env: