Closed. gregbarton closed this issue 2 years ago.
This issue is also present on the Deeplearning4j master branch.
Removing the precondition at org.deeplearning4j.nn.layers.recurrent.RnnOutputLayer.backpropGradient (RnnOutputLayer.java:59) allows the example to run, and it appears to run successfully.
@gregbarton if you have a fix, feel free to submit a pull request and we can give you credit. I will be happy to review! Thanks! Please migrate this over to the main dl4j repo, as that gets more traffic than this one.
Following up from Reddit. Still seeing the same issue; not clear why yet.
In your run (https://gist.github.com/agibsonccc/e89d4bb08e3b94c65833b96e6c4945ea) you ran org.deeplearning4j.examples.advanced.modelling.charmodelling.generatetext.GenerateTxtModel, not GenerateTxtCharCompGraphModel. GenerateTxtModel works like a champ for me.
@gregbarton ah sorry for the confusion! Let me take a look again.
@gregbarton I see the error now and will be able to fix this. Thanks for highlighting!
Issue Description
The GenerateTxtCharCompGraphModel example appears to be broken. After working around the missing input data (the download fails with "java.io.IOException: Server returned HTTP response code: 403 for URL: https://s3.amazonaws.com/dl4j-distribution/pg100.txt"; changing the URL to https://www.gutenberg.org/cache/epub/100/pg100.txt fixes that), the ComputationGraphConfiguration appears to be misconfigured. Running the example fails with an error at the precondition check in org.deeplearning4j.nn.layers.recurrent.RnnOutputLayer.backpropGradient (RnnOutputLayer.java:59).
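For reference, a minimal sketch of the URL workaround mentioned above, assuming commons-io is on the classpath. The class name, file name, and temp-directory location are illustrative, not taken from the example source:

```java
import org.apache.commons.io.FileUtils;
import java.io.File;
import java.net.URL;

public class DownloadCorpus {
    public static void main(String[] args) throws Exception {
        // The old S3 URL (https://s3.amazonaws.com/dl4j-distribution/pg100.txt) now returns
        // HTTP 403, so point the download at the Project Gutenberg mirror instead.
        String url = "https://www.gutenberg.org/cache/epub/100/pg100.txt";
        File localCopy = new File(System.getProperty("java.io.tmpdir"), "pg100.txt");
        if (!localCopy.exists()) {
            FileUtils.copyURLToFile(new URL(url), localCopy); // download the corpus once
        }
        System.out.println("Corpus available at: " + localCopy.getAbsolutePath());
    }
}
```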
The input size of "outputLayer" is set to 2*lstmLayerSize, and the two layers feeding into "outputLayer" each have an output size of lstmLayerSize, so this appears to be the correct configuration. However, the network seems to expect the original input size (i.e. iter.inputColumns()) at that point.
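To make the shape question concrete, here is a minimal sketch of the kind of graph configuration described. This is not the example's exact code; the layer names, sizes, updater, and loss function are illustrative. Two LSTM layers of size lstmLayerSize both feed "outputLayer", so their activations are merged and the output layer's nIn is 2*lstmLayerSize:

```java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.weights.WeightInit;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class CompGraphSketch {
    public static void main(String[] args) {
        int nIn = 77;            // stands in for iter.inputColumns(): characters in the encoding (illustrative)
        int lstmLayerSize = 200; // hidden size (illustrative)

        ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
            .seed(12345)
            .weightInit(WeightInit.XAVIER)
            .updater(new Adam(0.005))
            .graphBuilder()
            .addInputs("input")
            // Two LSTM layers, each emitting lstmLayerSize features per time step
            .addLayer("first", new LSTM.Builder().nIn(nIn).nOut(lstmLayerSize)
                    .activation(Activation.TANH).build(), "input")
            .addLayer("second", new LSTM.Builder().nIn(lstmLayerSize).nOut(lstmLayerSize)
                    .activation(Activation.TANH).build(), "first")
            // Both LSTM outputs feed the RNN output layer, so its nIn is 2 * lstmLayerSize
            .addLayer("outputLayer", new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                    .activation(Activation.SOFTMAX)
                    .nIn(2 * lstmLayerSize).nOut(nIn).build(), "first", "second")
            .setOutputs("outputLayer")
            .build();

        System.out.println(conf.toJson());
    }
}
```

With this shape, backpropGradient on "outputLayer" should see activations of width 2*lstmLayerSize, not iter.inputColumns(), which is why the precondition failure is surprising.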
How can this be fixed?
Version Information
Please indicate relevant versions, including, if relevant: