-
OK, so I'm going through the back-prop implementation in MultiLayerNetwork.doBackward() and BaseLayer.backwardGradient().
There are a number of significant issues here.
First I'…
-
Hello,
Is there any implementation of multilayer networks (or multiplex, hypergraph, etc.)? I would really appreciate one.
-
I'm looking into how to create multilayer networks in `networkx`, but I think it's not possible because node indices are dictionary keys and I can't merge two different graphs. What should I try?
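One common workaround (a sketch of an idea, not a built-in `networkx` feature) is to encode the layer into each node key, e.g. as a `(node, layer)` tuple. Since `networkx` accepts any hashable object as a node key, several layers can then coexist in one graph and merging becomes trivial. The example below uses a plain adjacency dict to keep it dependency-free, but the node-key scheme carries over to `networkx` unchanged:

```python
# Sketch: represent a two-layer (multiplex) network by tagging each
# node with its layer, i.e. using (node, layer) tuples as node keys.
from collections import defaultdict

adj = defaultdict(set)

def add_edge(u, v):
    # Undirected edge: record both directions.
    adj[u].add(v)
    adj[v].add(u)

# Intra-layer edges
add_edge(("a", 1), ("b", 1))   # layer 1
add_edge(("a", 2), ("c", 2))   # layer 2

# Inter-layer (coupling) edge linking "a"'s copies across layers
add_edge(("a", 1), ("a", 2))

# Selecting the nodes of layer 1 only
layer1 = {n for n in adj if n[1] == 1}
print(sorted(layer1))   # [('a', 1), ('b', 1)]
```

With `networkx` itself, the same tuples work directly in `G.add_edge(("a", 1), ("b", 1))`, and a per-layer view is just a node filter.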
-
#### Issue Description
In developing a custom loss function based on Bishop's mixture density networks (http://publications.aston.ac.uk/373/1/NCRG_94_004.pdf), I ran across a problem during testing…
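For context, the loss in Bishop's mixture density networks is the negative log-likelihood of a Gaussian mixture whose parameters (mixing coefficients, means, widths) the network outputs. A minimal dependency-free sketch of the 1-D case follows; the function name `mdn_nll` is hypothetical and not part of any library's API:

```python
import math

def mdn_nll(y, pis, mus, sigmas):
    """Negative log-likelihood of a 1-D Gaussian mixture at target y.

    pis, mus, sigmas: per-component mixing coefficients, means and
    standard deviations (hypothetical parameter layout for illustration).
    """
    density = 0.0
    for pi, mu, s in zip(pis, mus, sigmas):
        # Weighted Gaussian density of component k at y.
        density += pi * math.exp(-0.5 * ((y - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return -math.log(density)

# Single standard-normal component evaluated at its mean:
# NLL = 0.5 * log(2*pi) ≈ 0.9189
print(mdn_nll(0.0, [1.0], [0.0], [1.0]))
```

In practice the density can underflow for poorly scaled data, which is one place such custom losses tend to break during testing; a log-sum-exp formulation is the usual remedy.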
-
I'm building a setup where I train an RNN using CUDA with a relatively long time series (1441 steps) and then do predictions on the CPU. Training is working fine and the memory consumption is alr…
-
One way I look at dl4j is as a repository of NN components with which I can experiment with different NN configurations.
I believe having a traditional multilayer perceptron implementation could help with that.
Th…
-
https://github.com/Microsoft/vcpkg/issues/4928
Fast Artificial Neural Network (FANN) Library is a free open source neural network library, which implements multilayer artificial neural networks in …
-
```
ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
.seed(seed)
.iterations(iterations)
.regularization(true).l2(0.000…
-
Currently, if you call gradients(ys, xs), it returns the sum of dy/dx over all ys for each x in xs. I believe this doesn't accord with the a priori mathematical notion of the derivative of a vector.…
-
Here is the updated code ->
https://github.com/rahul-raj/Deeplearning4J/blob/master/src/main/java/HyperParamTuning.java
And here is the stacktrace after executing the code:
```
12:01:34.712 [mai…