Closed mathfinder closed 7 years ago
I want to reimplement this method with Torch, but I don't know how to compute the gradient of the gradient penalty term in order to update D's parameters. Any suggestions?
I don't believe there's currently a way to implement it in Torch, unfortunately, unless they add support for differentiating through gradients (double backprop).
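For comparison, PyTorch does support double backprop via `torch.autograd.grad(..., create_graph=True)`, so the WGAN-GP gradient penalty can be sketched there. This is a minimal illustration, not the original authors' code; the `critic` module and `lambda_gp` value are placeholders:

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Sketch of the WGAN-GP gradient penalty on interpolated samples."""
    batch_size = real.size(0)
    # Random interpolation between real and fake samples
    eps = torch.rand(batch_size, 1).expand_as(real)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    # Gradient of the critic's output w.r.t. the interpolated input.
    # create_graph=True keeps this gradient in the autograd graph,
    # so the penalty itself can be backpropagated into D's parameters.
    grads, = torch.autograd.grad(
        outputs=scores, inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True)
    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```

Calling `.backward()` on the returned penalty (added to the critic loss) then produces the second-order gradients needed for D's update.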