Lightning-AI / pytorch-lightning

Pretrain, finetune ANY AI model of ANY size on multiple GPUs, TPUs with zero code changes.
https://lightning.ai
Apache License 2.0

Replace mentions of `.type_as()` in our docs #14554

Closed · awaelchli closed 2 years ago

awaelchli commented 2 years ago

📚 Documentation

As a follow-up to #2585, we should consider removing mentions of the `.type_as()` syntax in our docs and replacing them with best practices for device placement and type conversion.
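
For context, a minimal sketch of the kind of substitution this would mean (the tensor names here are illustrative, not taken from the docs):

```python
import torch

x = torch.randn(4, 8)  # reference tensor; may live on any device/dtype

# pattern currently shown in the docs:
mask = torch.ones(4, 8).type_as(x)

# suggested replacements: create the tensor on the right device/dtype directly,
# or convert with .to(x), which matches both the dtype and device of x
mask = torch.ones(4, 8, device=x.device, dtype=x.dtype)
mask = torch.ones(4, 8).to(x)
```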


If you enjoy Lightning, check out our other projects! ⚡

cc @borda @rohitgr7 @Felonious-Spellfire

nsarang commented 2 years ago

`Tensor.to` can be used as a safe replacement for `type_as`. It also accepts another tensor as input.

From the documentation:

`Tensor.to(other, non_blocking=False, copy=False) → Tensor`

Returns a Tensor with the same `torch.dtype` and `torch.device` as the Tensor `other`. When `non_blocking` is set, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When `copy` is set, a new Tensor is created even when the Tensor already matches the desired conversion.
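
A minimal sketch of the swap (tensor values here are illustrative):

```python
import torch

ref = torch.zeros(3, dtype=torch.float16)  # reference tensor on CPU
x = torch.arange(3)                        # int64 on CPU

y = x.type_as(ref)  # old pattern: converts dtype to match ref
z = x.to(ref)       # replacement: matches both dtype and device of ref

assert y.dtype == z.dtype == torch.float16
```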

yipliu commented 2 years ago

For me, I often use the following approach:

```python
# Inside a LightningModule, self.device is known
torch.zeros(B, S, device=self.device)
```

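Expanding that into a minimal LightningModule sketch (the module, batch layout, and shapes here are hypothetical, for illustration only):

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):  # hypothetical module
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(16, 16)

    def training_step(self, batch, batch_idx):
        x, _ = batch  # assume x has shape (B, 16)
        # self.device is tracked by Lightning, so the new tensor is created
        # directly on the module's accelerator; no .type_as() round-trip needed
        noise = torch.zeros(x.size(0), 16, device=self.device)
        return self.layer(x + noise).sum()
```
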
amrutharajashekar commented 2 years ago

Hey @awaelchli, can I work on this issue? I would like to contribute.

awaelchli commented 2 years ago

@amrutha1098 Yes, that would be greatly appreciated! And please let me know if you need any further guidance regarding this issue or the contribution process.

amrutharajashekar commented 2 years ago

Hi @awaelchli, I have submitted the PR with the changes. Please let me know if any changes are required. Thanks for the help!