Closed tvercaut closed 1 year ago
This is a good suggestion, thanks for bringing it up! One question is about the expected behavior when passing `torch.device("cuda")`. Should we automatically convert it to `"cuda:0"` on our side? As you say, this seems to be the case for torch tensors, but for some reason `torch.device()` doesn't do this automatically. Not sure if this is an oversight on PyTorch's side or a deliberate choice.
Added feature in #623.
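The normalization question above can be sketched without Theseus or a GPU: a bare `"cuda"` and an explicit `"cuda:0"` compare as unequal, even though tensors moved with either end up on the same device. Below is a minimal, hypothetical helper (not part of PyTorch or Theseus) illustrating the kind of normalization that would make a device comparison like the one in `Objective.update` robust:

```python
def normalize_device(device: str) -> str:
    # PyTorch tensors report a bare "cuda" as "cuda:0" (the current
    # default GPU), but torch.device("cuda") keeps index=None, so a
    # plain equality check between the two fails. Normalizing before
    # comparing sidesteps that. (Hypothetical helper, an assumption.)
    if device == "cuda":
        return "cuda:0"
    return device

def same_device(a: str, b: str) -> bool:
    # Compare devices after normalization, so "cuda" matches "cuda:0".
    return normalize_device(a) == normalize_device(b)

print(same_device("cuda", "cuda:0"))  # -> True
print(same_device("cpu", "cuda:0"))   # -> False
```

Whether this normalization should live in the library or be left to the caller is exactly the design question raised above.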
🚀 Feature
It would be great to be able to call `theseus_optim = th.TheseusLayer(optimizer).to(torch.device('cuda'))` as well as `theseus_optim = th.TheseusLayer(optimizer).to('cuda')`.
Currently this fails for two reasons:

1. `TheseusLayer.to` returns `None`, so the one-liner needs to be split in two: `theseus_optim = th.TheseusLayer(optimizer); theseus_optim.to(torch.device('cuda'))`
2. `"cuda"` is normalized to `"cuda:0"` in PyTorch tensors, and the device comparison then fails in `Objective.update`:
https://github.com/facebookresearch/theseus/blob/9a117fd02867c5007c6686e342630f110e488c65/theseus/core/objective.py#L775-L780

Motivation
This would make the use of Theseus more convenient.
Pitch
See above
Alternatives
See above
Additional context
https://github.com/facebookresearch/theseus/blob/9a117fd02867c5007c6686e342630f110e488c65/theseus/theseus_layer.py#L137-L140
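The linked `to` implementation does not return a value, which is why chaining fails. A minimal sketch of the chainable fix, using a hypothetical stand-in class (not the real `TheseusLayer`) and following the `torch.nn.Module.to` convention of returning `self`:

```python
class ChainableLayer:
    """Toy stand-in for TheseusLayer (hypothetical, for illustration).
    Returning self from .to(), as torch.nn.Module.to does, lets users
    write layer = ChainableLayer().to("cuda") in one line."""

    def __init__(self):
        self.device = "cpu"

    def to(self, device):
        # ... move internal tensors/variables to `device` here ...
        self.device = str(device)
        return self  # the fix: return self instead of (implicitly) None

layer = ChainableLayer().to("cpu")
print(layer.device)  # -> cpu
```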
Other issues with `to` have been discussed in: