Try: `loss = mse(decoded, x)`.
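In context, that suggestion would look roughly like this (a minimal sketch of a `training_step`, assuming `self.mse = nn.MSELoss()` is set up in `__init__`; not the exact code from this issue):

```python
# Hypothetical training_step sketch; names other than mse/decoded are assumptions.
def training_step(self, batch, batch_idx):
    x = batch
    decoded = self.forward(x)
    loss = self.mse(decoded, x)  # reconstruct the input with MSE
    return {"loss": loss}
```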
with "loss = mse(decoded, x)" same issue: RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Can you print `decoded.requires_grad` after `self.forward(x)` in `training_step`? See whether it's `True` or `False`.
Hi Rohit, got: `decoded.requires_grad: False`
That means the problem is not in `loss.backward()` itself. Can you check the same after `decoded = self.layer_d_2(x)`? Or share a Colab notebook?
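For concreteness, the check being asked for might look something like this (a minimal sketch; `layer_d_2` comes from the thread, the surrounding structure is assumed):

```python
# Hypothetical forward with a debug print; only the requires_grad check matters here.
def forward(self, x):
    x = self.layer_e_1(x)        # assumed encoder layer
    decoded = self.layer_d_2(x)  # decoder layer mentioned above
    print("decoded.requires_grad after layer_d_2:", decoded.requires_grad)
    return decoded
```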
`decoded.requires_grad` after `layer_d_2`: `False`
`decoded.requires_grad`: `False`
`decoded.requires_grad` should be `True`. Not sure why it's happening. Mind sharing a Colab notebook?
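For reference, the output's `requires_grad` usually ends up `False` when the forward pass is cut off from the autograd graph, e.g. by running under `torch.no_grad()` or by detaching/converting tensors along the way. A minimal illustration (not the code from this issue):

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 4)
x = torch.randn(2, 4)

out = layer(x)
print(out.requires_grad)   # True: output is attached to the autograd graph

with torch.no_grad():      # anything computed here is detached from the graph
    out = layer(x)
print(out.requires_grad)   # False: a loss built from this raises the same RuntimeError

out = layer(x).detach()    # detach() has the same effect
print(out.requires_grad)   # False
```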
Hi Rohit, I am not using Colab. Here is the rest of the code:
```python
import argparse

import numpy as np
import torch
import pytorch_lightning as pl

# Autoencoder is defined earlier in the same file (not shown here).


def main(hparams) -> None:
    model = Autoencoder(hparams)
    print("Model")
    print(model)
    print("hparam")
    print(hparams)
    # print("params list")
    # for parameter in model.parameters():
    #     print(parameter)
    trainer = pl.Trainer(
        fast_dev_run=True,
        # gpus=10,
        # distributed_backend='dp',
        # max_epochs=500,
        # early_stop_callback=False,
        # val_check_interval=100,
        # show_progress_bar=False
    )
    trainer.fit(model)


torch.manual_seed(0)
np.random.seed(0)

parser = argparse.ArgumentParser()
parser.add_argument("--batch_size", type=int, default=1024, help="size of the batches")
parser.add_argument("--lr", type=float, default=1e-2, help="learning rate")
parser.add_argument("--in_channels", type=int, default=17, help="in channels")
parser.add_argument("--out_channels", type=int, default=100, help="out channels")
parser.add_argument("--kernel_size", type=int, default=3, help="kernel size")

args, _ = parser.parse_known_args()
print("Parameters:")
print(args)
main(args)
```
Hi Rohit, after restarting the kernel the issue went away. Thanks for your help. Will close the issue now. /Said
I have a similar problem connected to using the `pl.metrics.functional` package for `dice_score`. The model was previously training fine with `BCELoss`, but when I switched to `dice_score` I get this same error.
Does this mean that the PL version of `dice_score` cannot back-propagate and can only be used as an evaluation metric, not as a loss function?
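As far as I know, `dice_score` in `pl.metrics.functional` is intended as an evaluation metric: it operates on hard class predictions, so there is no gradient path back to the model and using it directly as a loss produces exactly this error. For training, a common workaround is a differentiable "soft" Dice loss computed from the model's probabilities; a minimal sketch (my own, not PL's implementation):

```python
import torch

def soft_dice_loss(probs, target, eps=1e-7):
    # probs: sigmoid/softmax output with the same shape as target (binary case shown).
    # Built only from sums and products, so it stays differentiable.
    intersection = (probs * target).sum()
    union = probs.sum() + target.sum()
    dice = (2.0 * intersection + eps) / (union + eps)
    return 1.0 - dice

# Usage inside training_step (sketch):
#   probs = torch.sigmoid(self(x))
#   loss = soft_dice_loss(probs, y)
```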
Hi,
I got `RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn`. Details below: