-
Hi Sir,
I have some questions about your LeNet-5 MATLAB project. Thanks in advance for your answer.
1. In backpropagation.m, why isn't the loss used as an input? Since the loss is not used, the label will not ta…
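A possible answer (not specific to this repo): for a softmax + cross-entropy head, the gradient with respect to the logits is simply p − y, so the backward pass needs only the predictions and the labels, never the scalar loss value. A minimal NumPy sketch of this identity:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_backward(z, y_onehot):
    # For softmax + cross-entropy, dL/dz = p - y (averaged over the batch):
    # the scalar loss value never appears in the gradient.
    p = softmax(z)
    return (p - y_onehot) / z.shape[0]

z = np.array([[2.0, 1.0, 0.1]])   # logits
y = np.array([[1.0, 0.0, 0.0]])   # one-hot label
grad = cross_entropy_backward(z, y)
```

Because p and y each sum to 1, the gradient sums to zero across classes, which is a quick sanity check.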
-
Tried to train a model with an unpooling layer, but unpooling backpropagation does not seem to be implemented (8308 vision_layers.hpp:686] Not Implemented Yet).
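For reference, the backward pass of max-unpooling is just a gather at the indices the forward pass scattered to. A minimal NumPy sketch (not Caffe's implementation; shapes and the flat-index convention are assumptions):

```python
import numpy as np

def unpool_forward(x, idx, out_size):
    # Scatter each pooled value to the location recorded by max-pooling.
    out = np.zeros(out_size)
    out.flat[idx.ravel()] = x.ravel()
    return out

def unpool_backward(grad_out, idx, in_shape):
    # The backward pass gathers the upstream gradient
    # from the positions the forward pass scattered to.
    return grad_out.flat[idx.ravel()].reshape(in_shape)

x = np.array([[5.0, 7.0]])            # pooled activations
idx = np.array([[1, 2]])              # flat indices from a prior max-pool
up = unpool_forward(x, idx, (1, 4))   # zeros everywhere except idx
g = unpool_backward(np.ones((1, 4)), idx, x.shape)
```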
-
Hi,
Thank you for maintaining this great package.
I want to simulate a relatively large system (~10000 atoms) using tensornet.
After I finished training a model using [TensorNet-SPICE.yaml](https://git…
-
Hi, is there any specific reason that you did the backpropagation one layer at a time?
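One common reason backpropagation proceeds layer by layer is that each layer's input gradient is exactly the upstream gradient for the layer below, so the reverse loop is the natural structure. A minimal sketch (not this repo's code; the `Dense` layer is illustrative):

```python
import numpy as np

class Dense:
    # Minimal fully connected layer; caches its input for the backward pass.
    def __init__(self, n_in, n_out, rng):
        self.W = rng.standard_normal((n_in, n_out)) * 0.1
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x
        return x @ self.W + self.b

    def backward(self, grad_out):
        # Gradients for this layer's parameters...
        self.dW = self.x.T @ grad_out
        self.db = grad_out.sum(axis=0)
        # ...and the gradient handed to the layer below.
        return grad_out @ self.W.T

rng = np.random.default_rng(0)
layers = [Dense(4, 8, rng), Dense(8, 2, rng)]
x = rng.standard_normal((3, 4))
for layer in layers:
    x = layer.forward(x)
grad = np.ones_like(x)
for layer in reversed(layers):   # one layer at a time, last to first
    grad = layer.backward(grad)
```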
-
Prerequisite knowledge
# Chain Rule
![image](https://user-images.githubusercontent.com/34474924/235067714-f2c04f9f-5142-43d3-9b67-470832d16955.png)
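Written out, the chain rule in the Jacobian form that backpropagation uses (assuming the image illustrates the usual composition $y = f(x)$, $L = \ell(y)$):

```latex
\frac{\partial L}{\partial x}
  = \frac{\partial L}{\partial y}\,\frac{\partial y}{\partial x},
\qquad y = f(x),\quad L = \ell(y)
```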
______________
# Backpropagation
Normally, to compute the gradi…
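A minimal scalar example of computing a gradient by chaining local derivatives, step by step (values chosen for illustration):

```python
# Forward: L = (w * x - t)^2, computed one step at a time.
w, x, t = 2.0, 3.0, 5.0
y = w * x          # intermediate value: 6.0
e = y - t          # error: 1.0
L = e ** 2         # loss: 1.0

# Backward: multiply local derivatives along the chain (the chain rule).
dL_de = 2 * e      # dL/de = 2e
dL_dy = dL_de * 1  # de/dy = 1
dL_dw = dL_dy * x  # dy/dw = x
```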
-
Dear Team,
Thanks for this awesome repo :)
I wanted to suggest a way of extracting latent-space embeddings and backpropagating losses based on them. This could be done with an encode(x) fu…
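A sketch of what such an `encode(x)` entry point might look like (the wrapper class and the toy linear encoder/decoder below are hypothetical, not this repo's API):

```python
import numpy as np

class AutoencoderWrapper:
    # Hypothetical wrapper exposing encode(x) so users can grab
    # latent embeddings and backpropagate losses through them.
    def __init__(self, encoder, decoder):
        self.encoder = encoder
        self.decoder = decoder

    def encode(self, x):
        # Returns only the latent-space embedding.
        return self.encoder(x)

    def forward(self, x):
        return self.decoder(self.encode(x))

# Toy linear encoder/decoder, purely for illustration.
rng = np.random.default_rng(0)
W_enc = rng.standard_normal((4, 2))
W_dec = rng.standard_normal((2, 4))
model = AutoencoderWrapper(lambda x: x @ W_enc, lambda z: z @ W_dec)
z = model.encode(rng.standard_normal((3, 4)))   # embeddings, shape (3, 2)
```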
-
Hi, thanks for the implementation!
I hope to adopt your algorithm in a deep learning framework, which requires backpropagation in PyTorch. However, your algorithm is written in NumPy. I suppo…
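One common way to make a NumPy computation differentiable in PyTorch is to wrap it in a custom `torch.autograd.Function`, supplying the backward rule by hand. A minimal sketch, with a toy square function standing in for the actual algorithm:

```python
import numpy as np
import torch

class NumpySquare(torch.autograd.Function):
    # Wraps a NumPy computation so PyTorch autograd can backprop through it.
    @staticmethod
    def forward(ctx, x):
        x_np = x.detach().cpu().numpy()
        ctx.save_for_backward(x)
        return torch.from_numpy(np.square(x_np)).to(x.device)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return 2.0 * x * grad_out   # hand-written rule: d(x^2)/dx = 2x

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = NumpySquare.apply(x).sum()
y.backward()
```

The cost is that the NumPy step runs on CPU and outside autograd's graph, so the hand-written `backward` must be kept correct by hand (e.g. checked with `torch.autograd.gradcheck`).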
-
Just wanted to create a tracking issue for this warning: https://github.com/getkeops/keops/blob/7d68fd9da5887c217937a35210c740b2b9f9f79d/pykeops/pykeops/tutorials/a_LazyTensors/plot_lazytensors_a.py#L…
-
Currently only stochastic gradient descent is supported; at a minimum, it would be nice to also support:
- [ ] RMSProp
- [x] Adam
- [x] SGD with Momentum
- [x] SGD with Nesterov Momentum
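For reference, the listed variants differ only in their parameter-update rule. A minimal NumPy sketch of two of them (the hyperparameter defaults are illustrative, not prescriptive):

```python
import numpy as np

def sgd_momentum(w, grad, v, lr=0.1, mu=0.9):
    # Classic momentum: accumulate a velocity, then step along it.
    v = mu * v - lr * grad
    return w + v, v

def rmsprop(w, grad, s, lr=0.01, rho=0.9, eps=1e-8):
    # RMSProp: scale the step by a running average of squared gradients.
    s = rho * s + (1 - rho) * grad ** 2
    return w - lr * grad / (np.sqrt(s) + eps), s

w, v = np.array([1.0]), np.array([0.0])
grad = np.array([0.5])
w, v = sgd_momentum(w, grad, v)   # one update step
```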
-
Not really a new rule, just a better one?
https://papers.nips.cc/paper/8579-backpropagation-friendly-eigendecomposition.pdf