Closed bmkor closed 6 years ago
Absolutely. It will also require changing the number of hidden neurons in the preceding fully connected operator to 1 (instead of the number of labels).
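To make the change concrete, here is a pure-Python sketch (illustrative values only, not the mxnet API) of what the regression head computes: a fully connected layer with a single output unit, followed by a squared-error loss instead of a softmax cross-entropy.

```python
# Sketch of a regression output head, assuming illustrative values.
hidden = [0.5, -1.0, 2.0]     # last hidden state of the RNN (3 units)
weights = [0.2, 0.4, 0.1]     # FC weights: num_hidden = 1 -> a single row
bias = 0.3

# Fully connected layer with a single output neuron -> scalar prediction
pred = sum(h * w for h, w in zip(hidden, weights)) + bias

label = 0.5
loss = (pred - label) ** 2    # L2 loss replaces the classification loss

assert abs(pred - 0.2) < 1e-9
assert abs(loss - 0.09) < 1e-9
```

With the number of labels it would instead be one weight row (and one output) per class, which is the part that has to shrink to 1 for regression.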
Thanks. The embedding layer will not be needed right? Also, will the train data be in the shape (batch.size, num.variates * seq.len)?
If your data needs no embedding (or is already pre-embedded), then the data feed should be Features X Batch X Seq.Length. You can refer to the shapes of the graph in the Readme, following the Embedding operator.
Thanks. Wondering if the Batch X Seq.Length shape shown in the graph is different from the Seq.Length X Batch shape used in the Char RNN tutorial example?
It is indeed different from the tutorial; the RNN refactor involves some changes compared to the current RNN functionalities in the R package, the reason being that Batch X Seq.Length is the format expected by the symbol.RNN operator. I also suggest considering the data preparation presented in this bucketing demo, as it performs the indexing of words using a vectorized approach rather than loops, which is much faster.
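Both points above can be sketched in a few lines of pure Python (toy data, not the demo's actual code): converting a seq-major batch to the batch-major layout is just a transpose, and vectorized-style word indexing is a single table lookup per token instead of an index-building loop.

```python
# Seq.Length x Batch (tutorial layout): one row per time step.
seq_major = [[11, 12],   # t = 0, batches 0..1
             [21, 22],   # t = 1
             [31, 32]]   # t = 2

# Transpose to Batch x Seq.Length, the layout symbol.RNN expects.
batch_major = [list(col) for col in zip(*seq_major)]
assert batch_major == [[11, 21, 31], [12, 22, 32]]

# Vectorized-style word indexing: one dict lookup per token via a
# comprehension, instead of explicit loops appending indices.
vocab = {"the": 1, "cat": 2, "sat": 3}
tokens = ["the", "cat", "sat"]
indices = [vocab[t] for t in tokens]
assert indices == [1, 2, 3]
```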
Thanks a lot! Wanna know if I can try to make it capable of doing regression via a pull request?
Hello, there were a few things I changed following various comments and I think the following push should make the regression easier to perform: https://github.com/apache/incubator-mxnet/pull/8121
Feel free to bring changes/comment or suggest another approach.
Cool. Definitely will try.
Wanna know if I can change SoftmaxOutput to LinearRegressionOutput to do regression?