vincentherrmann / pytorch-wavenet

An implementation of WaveNet with fast generation
MIT License

ConstantPad1d Deprecated #44

Open jaytimbadia opened 2 years ago

jaytimbadia commented 2 years ago

Hey,

Can you please help?

I am getting a runtime error from this function:

```python
class ConstantPad1d(Function):
    def __init__(self, target_size, dimension=0, value=0, pad_start=False):
        super(ConstantPad1d, self).__init__()
        self.target_size = target_size
        self.dimension = dimension
        self.value = value
        self.pad_start = pad_start

    def forward(self, input):
        self.num_pad = self.target_size - input.size(self.dimension)
        assert self.num_pad >= 0, 'target size has to be greater than input size'

        self.input_size = input.size()

        size = list(input.size())
        size[self.dimension] = self.target_size
        output = input.new(*tuple(size)).fill_(self.value)
        c_output = output

        # crop output
        if self.pad_start:
            c_output = c_output.narrow(self.dimension, self.num_pad, c_output.size(self.dimension) - self.num_pad)
        else:
            c_output = c_output.narrow(self.dimension, 0, c_output.size(self.dimension) - self.num_pad)

        c_output.copy_(input)
        return output

    def backward(self, grad_output):
        grad_input = grad_output.new(*self.input_size).zero_()
        cg_output = grad_output

        # crop grad_output
        if self.pad_start:
            cg_output = cg_output.narrow(self.dimension, self.num_pad, cg_output.size(self.dimension) - self.num_pad)
        else:
            cg_output = cg_output.narrow(self.dimension, 0, cg_output.size(self.dimension) - self.num_pad)

        grad_input.copy_(cg_output)
        return grad_input
```

RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)

Can you please help?

Timtti commented 2 years ago

I've had the same problem.

Timtti commented 2 years ago

Here is another example of a custom autograd function; I'm currently rewriting this function myself: https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html
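
For anyone who wants a head start, here is a rough sketch of what the new-style rewrite could look like. It is untested, `ConstantPad1dFunction` is just a placeholder name, and the idea is simply that the per-call state moves from `self` to `ctx`:

```python
import torch
from torch.autograd import Function

class ConstantPad1dFunction(Function):
    # New-style autograd function: forward/backward are static methods
    # and per-call state lives on ctx instead of self.
    @staticmethod
    def forward(ctx, input, target_size, dimension=0, value=0, pad_start=False):
        num_pad = target_size - input.size(dimension)
        assert num_pad >= 0, 'target size has to be greater than input size'

        # stash what backward will need
        ctx.num_pad = num_pad
        ctx.dimension = dimension
        ctx.pad_start = pad_start
        ctx.input_size = input.size()

        size = list(input.size())
        size[dimension] = target_size
        output = input.new_full(tuple(size), value)

        # copy the input into the non-padded region
        start = num_pad if pad_start else 0
        output.narrow(dimension, start, input.size(dimension)).copy_(input)
        return output

    @staticmethod
    def backward(ctx, grad_output):
        grad_input = grad_output.new_zeros(ctx.input_size)
        d = ctx.dimension
        length = grad_output.size(d) - ctx.num_pad
        start = ctx.num_pad if ctx.pad_start else 0
        # the padded region receives no gradient
        grad_input.copy_(grad_output.narrow(d, start, length))
        # one return value per forward argument; non-tensor args get None
        return grad_input, None, None, None, None

# usage: padded = ConstantPad1dFunction.apply(x, target_size)
```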

Timtti commented 2 years ago

Or maybe everyone encountering this problem should just use this fork instead: https://github.com/Vichoko/pytorch-wavenet, since ConstantPad1d is now built into PyTorch: https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad1d.html
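
Note that the built-in module pads the last dimension by fixed (left, right) amounts rather than up to a target size, so this repo's `target_size` would first have to be converted into an amount, e.g. `target_size - x.size(-1)`. A quick sanity check:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 2, 5)             # (batch, channels, time)

# equivalent of pad_start=True: all padding before the signal
pad = nn.ConstantPad1d((3, 0), 0.0)  # (left, right), fill value
print(pad(x).shape)                  # torch.Size([1, 2, 8])

# equivalent of pad_start=False: padding after the signal
pad_end = nn.ConstantPad1d((0, 3), 0.0)
print(pad_end(x).shape)              # torch.Size([1, 2, 8])
```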

MirkoDeVita98 commented 2 years ago

Were you able to solve it? I tried the other fork but I get the same error.

YuLong-Liang commented 2 years ago

The problem we have all been hitting can be solved with the following steps:

  1. Update the code to `ConstantPad1d(nn.Module)` at line 80 of wavenet_modules.py (see the sketch below).
  2. Comment out `loss = loss.data[0]` at line 73 of wavenet_train.py (its modern equivalent is `loss.item()`).

By the way, I ran the wavenet_demo successfully. My environment: torch 1.8.2+cu111, Python 3.7.
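
A minimal sketch of what step 1 could look like, built on `F.pad` so autograd handles the backward pass automatically (this is just a sketch; the exact rewrite may differ):

```python
import torch.nn as nn
import torch.nn.functional as F

class ConstantPad1d(nn.Module):
    """Pad `dimension` of the input up to `target_size` with `value`."""

    def __init__(self, target_size, dimension=0, value=0, pad_start=False):
        super().__init__()
        self.target_size = target_size
        self.dimension = dimension
        self.value = value
        self.pad_start = pad_start

    def forward(self, input):
        num_pad = self.target_size - input.size(self.dimension)
        assert num_pad >= 0, 'target size has to be greater than input size'

        # F.pad expects (left, right) pairs starting from the LAST
        # dimension, so build a full pad list and fill in the right slot.
        pad = [0, 0] * input.dim()
        idx = 2 * (input.dim() - 1 - self.dimension)
        if self.pad_start:
            pad[idx] = num_pad       # pad before the existing values
        else:
            pad[idx + 1] = num_pad   # pad after them
        return F.pad(input, pad, mode='constant', value=self.value)
```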

YuLong-Liang commented 2 years ago

> I am facing a runtime error for this function. [...] RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method.

See my comment above for the fix.

YuLong-Liang commented 2 years ago

> Were you able to solve it? I tried the other fork but I get the same error.

You can refer to my comment above.