Something annoying that took a long time to track down: in the definition of some `_demo` functions, some parameters got `torch.tensor` default values whose dtype was fixed at import time, which can be before a user calls `torch.set_default_dtype(torch.float64)` (for example). Since the default dtype is float32, you can run into dtype conflicts. Extremely annoying.

The way around this was to default those parameters to `None`, then check for `None` inside the function body and create the tensors there. Annoying, but safe.

This addresses #13 as best as possible. Some casts that remain are:

- casts where we know another tensor of the type we need, so we don't have to explicitly specify the precision
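The import-time pitfall described above can be sketched as follows. The function name and default value are hypothetical stand-ins, but the mechanism is general: a default argument expression is evaluated once, when `def` runs at import, so the tensor's dtype is frozen before the user gets a chance to change the default dtype.

```python
import torch

# Hypothetical demo function illustrating the pitfall: the default tensor
# is created when "def" is executed (i.e., at import time), so its dtype
# is whatever the default dtype was at that moment -- not at call time.
def _demo(scale=torch.tensor(1.0)):
    return scale

frozen_default = _demo()            # uses the tensor baked in at import time
torch.set_default_dtype(torch.float64)
fresh_tensor = torch.tensor(1.0)    # honors the new default dtype

print(frozen_default.dtype)  # stays torch.float32, the stale import-time default
print(fresh_tensor.dtype)    # torch.float64
```

Mixing `frozen_default` with tensors created after the `set_default_dtype` call is exactly where the dtype conflicts show up.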