lightingghost opened 6 years ago
Any news? I saw no updates on the Jira issue.
@lonelykid We are actively working on the higher order gradient feature right now. We will update you once the PR is ready for review. Thanks for your patience.
@lonelykid The PR is out for review at https://github.com/apache/incubator-mxnet/pull/14613. Your comments are appreciated.
What is the current status of support for second-order derivatives in Gluon? I tried implementing the method from the paper Improved Training of Wasserstein GANs, but the training program raises an error when I add the gradient penalty to the loss function and backpropagate. I noticed that, with MXNet 1.7, it works for Dense layers without an activation, but Conv2D and many other layers still seem unsupported. I saw a similar question in #5982, but that was around three years ago.
Are there plans to add second-order derivative support for, e.g., gluon.nn.Conv2D, gluon.nn.BatchNorm, gluon.nn.Activation, and gluon.nn.LeakyReLU?
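For reference, a minimal sketch of the gradient-penalty computation described above, using mx.autograd.grad with create_graph=True. The Dense critic, the input shapes, and the critic_loss_with_gp helper name are illustrative assumptions, not code from the original report:

```python
import mxnet as mx
from mxnet import nd, autograd, gluon

# Hypothetical critic: a Dense-only critic is the case reported to work on 1.7.
critic = gluon.nn.Dense(1)
critic.initialize()

def critic_loss_with_gp(real, fake, lam=10.0):
    batch = real.shape[0]
    # Random interpolation point between real and fake samples (WGAN-GP).
    eps = nd.random.uniform(0, 1, shape=(batch, 1))
    x_hat = eps * real + (1 - eps) * fake
    x_hat.attach_grad()
    with autograd.record():
        # Standard WGAN critic objective.
        loss = critic(fake).mean() - critic(real).mean()
        # Gradient of the critic score w.r.t. the interpolated input;
        # create_graph=True makes this gradient itself differentiable,
        # which is the second-order step that fails for unsupported layers.
        grads = autograd.grad(critic(x_hat), [x_hat],
                              create_graph=True, retain_graph=True)[0]
        norm = nd.sqrt(nd.sum(grads.reshape((batch, -1)) ** 2, axis=1) + 1e-12)
        loss = loss + lam * nd.mean((norm - 1) ** 2)
    loss.backward()
    return loss

real = nd.random.normal(shape=(4, 16))
fake = nd.random.normal(shape=(4, 16))
critic_loss_with_gp(real, fake)
```

With a Dense-only critic this should run on MXNet 1.7, matching the report above; swapping in a convolutional critic triggers the failure discussed in this thread.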
Same here on MXNet 1.8.0.rc2. I'm trying to implement cBiGAN, which is composed of residual blocks and follows the WGAN-GP training procedure.
Since MXNet has the autograd package to support higher-order gradients, I tried to implement WGAN-GP with MXNet, but I got an error. It seems the convolution operator still does not support higher-order gradients?
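A minimal reproduction along these lines (layer sizes and variable names are illustrative) asks for a differentiable first gradient through a Conv2D and then backpropagates through it:

```python
import mxnet as mx
from mxnet import nd, autograd, gluon

net = gluon.nn.Conv2D(channels=8, kernel_size=3)
net.initialize()

x = nd.random.normal(shape=(2, 3, 16, 16))
x.attach_grad()
with autograd.record():
    y = net(x)
    # Request a gradient that is itself part of the recorded graph.
    dx = autograd.grad(y, [x], create_graph=True, retain_graph=True)[0]
    loss = (dx ** 2).sum()
# The second-order step: fails if Convolution's backward has no
# registered gradient of its own.
loss.backward()
```

On builds where the Convolution backward has no registered second-order gradient, the final backward() should raise an MXNetError reporting that the backward operator is non-differentiable, consistent with the behavior described in this thread.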