renmengye / tensorflow-forward-ad

Forward-mode Automatic Differentiation for TensorFlow
MIT License

Errors Running Readme Example #1

Open WihanB opened 7 years ago

WihanB commented 7 years ago

Hi,

First of all, thank you so much for creating this; I think it will be incredibly useful for research into natural gradient methods in deep learning.

I was trying to run your first example and it seems there is a small bug in the creation of x: there is a dimensionality mismatch, and I believe it should be tf.ones([8, 10]). In addition, there does not seem to be a registered forward gradient for the loss function used in the example. A rough sketch of what I mean is below.
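The layer sizes here are just illustrative (I am going from memory of the example), but they show the shape fix:

```python
import tensorflow as tf

# Illustrative shapes only: 8 examples, 10 input features, 5 output classes.
x = tf.ones([8, 10])                           # [8, 10] input instead of the mismatched shape
w = tf.Variable(tf.truncated_normal([10, 5]))  # [10, 5] weight matrix
logits = tf.matmul(x, w)                       # [8, 10] x [10, 5] -> [8, 5]
```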

Regards Wihan

renmengye commented 7 years ago

Hi Wihan,

Thanks for reporting the bug. I have fixed the example in the README to use tf.nn.sparse_softmax_cross_entropy_with_logits instead. I will work on an update that adds the forward gradient for tf.nn.softmax_cross_entropy_with_logits.
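Roughly, the corrected usage looks like this (a rough sketch with illustrative shapes and labels, not the exact README text):

```python
import tensorflow as tf
from tensorflow_forward_ad import forward_gradients

# Illustrative shapes: 8 examples, 10 input features, 5 classes.
x = tf.ones([8, 10])
w = tf.Variable(tf.truncated_normal([10, 5]))
logits = tf.matmul(x, w)
labels = tf.zeros([8], dtype=tf.int64)  # made-up integer class labels

# Sparse cross-entropy, which has a forward gradient registered
# (the dense softmax_cross_entropy_with_logits does not yet).
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

# Forward-mode derivative of the loss with respect to w.
dloss_dw = forward_gradients(loss, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(dloss_dw))
```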

Mengye