pytorch / pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration
https://pytorch.org

requires_grad gets lost during transform #61410

Open dvolgyes opened 3 years ago

dvolgyes commented 3 years ago

🐛 Bug

If you convert a list of parameters with requires_grad=True into a tensor via the torch.Tensor() constructor, the resulting tensor silently has requires_grad=False, so gradients are lost.

To Reproduce

Steps to reproduce the behavior:

import torch

t = torch.FloatTensor([1.0])
p = torch.nn.Parameter(t)

p.requires_grad = True

print(torch.Tensor(p).requires_grad)    # True
print(torch.Tensor([p,]).requires_grad) # False

Expected behavior

The output should be equivalent to that of:

torch.stack([...], dim=0)

Or the code should raise an exception.

Reasoning: whatever happens, a transformation like this should not silently remove requires_grad. If the conversion is legal, propagate the flag; if not, raise an exception.
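For comparison, a differentiable op like torch.stack does propagate the flag, while the legacy torch.Tensor() constructor drops it; a minimal sketch contrasting the two:

```python
import torch

p = torch.nn.Parameter(torch.tensor([1.0]))  # requires_grad defaults to True

# torch.stack is a differentiable operation, so the flag propagates
# and the result stays connected to the autograd graph.
stacked = torch.stack([p], dim=0)
print(stacked.requires_grad)  # True

# The legacy constructor first converts p to a plain Python number,
# so the autograd history is silently dropped.
legacy = torch.Tensor([p])
print(legacy.requires_grad)  # False
```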

Environment

PyTorch version: 1.9.0
Is debug build: False
CUDA used to build PyTorch: 11.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 21.04 (x86_64)
GCC version: (Ubuntu 10.3.0-1ubuntu1) 10.3.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.33

Python version: 3.8.10 (default, Jun 4 2021, 15:09:15) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.12.0-14.2-liquorix-amd64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 11.3.109
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060
Nvidia driver version: 465.31
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A

Versions of relevant libraries:

[pip3] botorch==0.4.0
[pip3] efficientnet-pytorch==0.6.3
[pip3] gpytorch==1.5.0
[pip3] numpy==1.19.0
[pip3] pytorch-lightning==1.3.8
[pip3] segmentation-models-pytorch==0.1.3
[pip3] torch==1.9.0
[pip3] torchist==0.1.5
[pip3] torchmetrics==0.3.2
[pip3] torchvision==0.10.0
[conda] blas                      1.0                         mkl  
[conda] botorch                   0.4.0                    pypi_0    pypi
[conda] cudatoolkit               11.1.74              h6bb024c_0    nvidia
[conda] efficientnet-pytorch      0.6.3                    pypi_0    pypi
[conda] ffmpeg                    4.3                  hf484d3e_0    pytorch
[conda] gpytorch                  1.5.0                    pypi_0    pypi
[conda] mkl                       2021.2.0           h06a4308_296  
[conda] mkl-service               2.3.0            py38h27cfd23_1  
[conda] mkl_fft                   1.3.0            py38h42c9631_2  
[conda] mkl_random                1.2.1            py38ha9443f7_2  
[conda] numpy                     1.19.0                   pypi_0    pypi
[conda] pytorch                   1.9.0           py3.8_cuda11.1_cudnn8.0.5_0    pytorch
[conda] pytorch-lightning         1.3.8                    pypi_0    pypi
[conda] segmentation-models-pytorch 0.1.3                    pypi_0    pypi
[conda] torchist                  0.1.5                    pypi_0    pypi
[conda] torchmetrics              0.3.2                    pypi_0    pypi
[conda] torchvision               0.10.0               py38_cu111    pytorch

Additional context

I don't think this is related to the install or to third-party libraries in any way; it should be a core autograd issue/property.

cc @ezyang @albanD @zou3519 @gqchen @pearu @nikitaved @soulitzer @Lezcano @Varal7 @gchanan @mruberry @jbschlosser

albanD commented 3 years ago

Hi,

The torch.Tensor() constructor is being deprecated for exactly this reason: it can give unexpected behavior. You should use torch.tensor() (lowercase), which always copies the data into a new Tensor. You can also pass the requires_grad= keyword argument to make sure the flag is set to what you want.
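A minimal sketch of the recommended pattern: torch.tensor() always copies and, by default, does not track gradients, so gradient tracking has to be requested explicitly rather than being silently inherited.

```python
import torch

# torch.tensor() creates a new leaf tensor; requires_grad defaults to False.
a = torch.tensor([3.0])
print(a.requires_grad)  # False

# Request gradient tracking explicitly on the new leaf tensor.
b = torch.tensor([3.0], requires_grad=True)
print(b.requires_grad)  # True
```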

dvolgyes commented 3 years ago

Hi,

I confirm that torch.tensor gives consistent results. For anybody who looks this issue up later:

import torch
t = torch.FloatTensor([1.0])
p = torch.nn.Parameter(t)
p.requires_grad = True

print(torch.tensor(p).requires_grad)    # False
print(torch.tensor([p,]).requires_grad) # False

However, I do not see any deprecation warning in 1.9.0. Is it an official decision to deprecate torch.Tensor? If yes, is there, or will there be, a deprecation warning?

I think a warning would be reasonable until it gets removed.
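As a side note: in more recent PyTorch releases, copy-constructing from an existing tensor with torch.tensor() emits a UserWarning recommending clone().detach() instead. A sketch of that pattern, which makes the intent explicit either way:

```python
import torch

p = torch.nn.Parameter(torch.tensor([1.0]))

# Copy without gradient tracking (a frozen snapshot of the values).
frozen = p.clone().detach()
print(frozen.requires_grad)  # False

# Copy that starts a fresh autograd history of its own.
tracked = p.clone().detach().requires_grad_(True)
print(tracked.requires_grad)  # True
```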

mruberry commented 3 years ago

However, I do not see any deprecation warning in 1.9.0. Is it an official decision to deprecate torch.Tensor?

Yes, and...

If yes, is there / will be there a deprecation warning?

... yes!

I think a warning would be reasonable until it gets removed.

Agreed. We need to revisit deprecating this soon; thank you for the reminder, @dvolgyes.