awaelchli closed this issue 2 years ago
Tensor.to can be used as a safe replacement for type_as. It also accepts another tensor as input.
From the documentation:
Tensor.to(other, non_blocking=False, copy=False) → Tensor
Returns a Tensor with same torch.dtype and torch.device as the Tensor other. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
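For example (the tensors below are made up just to illustrate the call; they are not from the official docs):

import torch

# hypothetical reference tensor that defines the target dtype and device
reference = torch.randn(4, 4, dtype=torch.float16)
x = torch.zeros(4, 4)  # float32, CPU

x_old = x.type_as(reference)  # old pattern: match dtype via type_as
x_new = x.to(reference)       # preferred: match dtype and device via Tensor.to

assert x_new.dtype == reference.dtype and x_new.device == reference.device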
Personally, I often use the following approach:
# Inside a LightningModule, self.device is known
torch.zeros(B, S, device=self.device)
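For example, inside a LightningModule hook this could look like the following (the module and sizes are hypothetical, just for illustration):

import torch
import pytorch_lightning as pl

class LitModel(pl.LightningModule):  # hypothetical module, for illustration only
    def training_step(self, batch, batch_idx):
        B, S = 8, 16  # hypothetical batch and sequence sizes
        # self.device is kept up to date by Lightning, so the new tensor
        # is created directly on the same device as the module's parameters
        mask = torch.zeros(B, S, device=self.device)
        ...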
Hey @awaelchli, can I work on this issue? I would like to contribute.
@amrutha1098 Yes, that would be greatly appreciated! And please let me know if you need any further guidance regarding this issue or the contribution process.
Hi @awaelchli, I have submitted the PR with the changes. Please let me know if any further changes are required. Thanks for the help!
📚 Documentation
As a follow-up to #2585, we should consider removing mentions of the .type_as() syntax in our docs and replacing it with best practices for device placement and type conversion.
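For example, a snippet in the docs could be updated roughly like this (variable names are placeholders, not taken from the current docs):

import torch

x = torch.randn(2, 3)                # e.g. float32 on CPU
weights = torch.randn(3, 3).half()   # reference tensor with the target dtype/device

# pattern currently shown in the docs
y = x.type_as(weights)

# possible replacements following current best practices
y = x.to(weights)  # match another tensor's dtype and device
z = torch.zeros(2, 3, dtype=weights.dtype, device=weights.device)  # create with the right dtype/device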
cc @borda @rohitgr7 @Felonious-Spellfire