-
Have you guys given any thought to how to efficiently compute higher-order derivatives of scalar-input functions? I have a use for 4th- and 5th-order derivatives of a scalar-input, vector-output func…
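One approach worth sketching for this: truncated Taylor-series ("jet") arithmetic, which produces derivatives 0 through n from a single degree-n forward pass instead of n nested first-order passes. A minimal plain-Python illustration — `Jet` and `derivatives` are hypothetical names, not tied to any particular AD package:

```python
import math

class Jet:
    """Coefficients c[k] = f^(k)(t0) / k! of a degree-N Taylor expansion."""
    def __init__(self, coeffs):
        self.c = list(coeffs)

    def __mul__(self, other):
        # Truncated Cauchy product of the two Taylor series.
        n = len(self.c)
        out = [0.0] * n
        for i in range(n):
            for j in range(n - i):
                out[i + j] += self.c[i] * other.c[j]
        return Jet(out)

    def sin(self):
        # Propagate sin/cos jointly via (sin g)' = cos(g) g', (cos g)' = -sin(g) g'.
        n = len(self.c)
        s, c = [0.0] * n, [0.0] * n
        s[0], c[0] = math.sin(self.c[0]), math.cos(self.c[0])
        for k in range(1, n):
            s[k] = sum(j * self.c[j] * c[k - j] for j in range(1, k + 1)) / k
            c[k] = -sum(j * self.c[j] * s[k - j] for j in range(1, k + 1)) / k
        return Jet(s)

def derivatives(f, t0, order):
    """All derivatives of f up to `order` at t0, from one jet evaluation."""
    assert order >= 1
    n = order + 1
    x = Jet([t0, 1.0] + [0.0] * (n - 2))  # seed: value t0, dt/dt = 1
    y = f(x)
    return [y.c[k] * math.factorial(k) for k in range(n)]
```

For a vector-output function, each output component carries its own jet coefficients, so the cost grows linearly in the output dimension rather than with nesting depth.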
-
I'm going to use this as a summary/burndown list of known problems, in no particular order.
More to come.
---
~~**Issue 1: broadcast_dynamic_shape**~~
~~https://github.com/deeplearning4j/deeplea…
-
Taking the ForwardDiff derivative through a chain of dense layers gives a stack overflow:
```
F = Chain(t->[t],Dense(1,10,σ),Dense(10,10,σ),Dense(10,10))
Ft = mapleaves(Flux.data,F)
ForwardDiff…
```
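As a cross-check independent of the overflow above, here is a hedged numpy sketch of the same shape of computation — a forward-mode derivative of a scalar-input, vector-output dense chain, propagating (value, tangent) pairs by hand. The weights are random stand-ins, not the Flux model's:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(W, b, sigmoid=False):
    def layer(x, dx):
        y, dy = W @ x + b, W @ dx            # affine map and its tangent
        if sigmoid:
            s = 1.0 / (1.0 + np.exp(-y))
            y, dy = s, s * (1.0 - s) * dy    # chain rule through the sigmoid
        return y, dy
    return layer

# Random stand-ins for Dense(1,10,σ), Dense(10,10,σ), Dense(10,10)
layers = [dense(rng.normal(size=(10, 1)), np.zeros(10), sigmoid=True),
          dense(rng.normal(size=(10, 10)), np.zeros(10), sigmoid=True),
          dense(rng.normal(size=(10, 10)), np.zeros(10))]

def F_and_dF(t):
    x, dx = np.array([t]), np.array([1.0])   # seed the scalar input's tangent
    for layer in layers:
        x, dx = layer(x, dx)
    return x, dx                             # F(t) and dF/dt, both length 10
```

Comparing `dx` against a finite-difference quotient of `F` is a quick way to confirm the forward pass is wired correctly.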
-
This is the roadmap for TVM v0.6. TVM is a community-driven project and we love your feedback and proposals on where we should be heading. Please open up discussion in the discussion forum as well as bring R…
-
remember to add
```
Optim also supports [reverse mode](https://github.com/JuliaDiff/ReverseDiff.jl) automatic differentiation, which is enabled by adding `autodiff = :reverse` to the constructor.
`…
-
## Description
Positive ordered, ordered, and simplex constraints do not currently have custom reverse-mode autodiff implementations. Custom autodiff would be nice because it would make them faster. Th…
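To make the speed argument concrete, here is a hedged sketch using the (simpler) ordered transform, x_1 = y_1 and x_k = x_{k-1} + exp(y_k): its analytic vector-Jacobian product is a single O(n) suffix-sum sweep, so a custom reverse pass avoids taping every intermediate exp and add. Plain Python for illustration; this is not Stan's actual implementation:

```python
import math

def ordered_transform(y):
    """Map unconstrained y to a strictly increasing vector x."""
    x = [y[0]]
    for yk in y[1:]:
        x.append(x[-1] + math.exp(yk))   # strictly increasing by construction
    return x

def ordered_vjp(y, grad_x):
    """Gradient w.r.t. y of sum_j grad_x[j] * x_j, in one backward sweep."""
    n = len(y)
    suffix = 0.0
    grad_y = [0.0] * n
    for k in range(n - 1, -1, -1):
        suffix += grad_x[k]              # x_j depends on y_k for every j >= k
        grad_y[k] = suffix if k == 0 else math.exp(y[k]) * suffix
    return grad_y
```

A finite-difference check on a small vector is enough to validate the hand-written pass.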
-
Hi!
I was wondering whether you plan to support complex duals in the future. I wanted to use your library, but this would be a big requirement for me.
Thanks for this amazing work! Cheers,
David
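For what it's worth, here is a minimal sketch of what complex duals could look like — dual numbers whose value and tangent parts are both complex, which suffices to differentiate holomorphic functions. The `CDual`/`cderiv` names are hypothetical, not the library's API:

```python
class CDual:
    """Dual number a + b*eps with complex a (value) and b (tangent)."""
    def __init__(self, val, tan=0j):
        self.val, self.tan = complex(val), complex(tan)
    def __add__(self, o):
        o = o if isinstance(o, CDual) else CDual(o)
        return CDual(self.val + o.val, self.tan + o.tan)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, CDual) else CDual(o)
        # Product rule on the tangent component.
        return CDual(self.val * o.val, self.val * o.tan + self.tan * o.val)
    __rmul__ = __mul__

def cderiv(f, z0):
    """f'(z0) for holomorphic f, by seeding dz/dz = 1."""
    return f(CDual(z0, 1)).tan
```

For example, `cderiv(lambda z: z*z + 3*z, z0)` evaluates 2*z0 + 3 at a complex point z0.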
-
I'm getting strange behavior when trying to calculate a gradient. I wonder if it's because my objective function itself calculates derivatives.
To explain: I see the way one calculates involves _s…
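That nesting — a gradient of an objective that itself computes a derivative — is exactly where forward-mode implementations can misbehave: the dual numbers must nest cleanly, and mixing the two perturbation levels is the classic "perturbation confusion" bug. A hedged plain-Python sketch of nesting done by hand (hypothetical `Dual`/`deriv` names):

```python
class Dual:
    """Dual number; val may itself be a Dual, which is what enables nesting."""
    def __init__(self, val, tan=0.0):
        self.val, self.tan = val, tan
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.tan + o.tan)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        # Product rule; inner Duals handle their own level recursively.
        return Dual(self.val * o.val, self.val * o.tan + self.tan * o.val)
    __rmul__ = __mul__

def deriv(f, x):
    """Forward-mode derivative; x may itself be a Dual (outer level)."""
    return f(Dual(x, 1.0)).tan
```

With f(t) = t^3, the inner call gives f'(x) = 3x^2 and the outer call differentiates that, giving 6x — the two derivative levels stay separate.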
-
Hi, I wondered how your loss.backward() works with the Riemannian gradient. I think egrad2rgrad() may be an essential function, but it doesn't seem to be used anywhere in the code. Do we need to extend torch.…
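As a concrete illustration of what egrad2rgrad() does (a hedged numpy sketch, not any package's real code): on the unit sphere it simply projects the Euclidean gradient coming out of loss.backward() onto the tangent space at the current point, before the optimizer step is applied.

```python
import numpy as np

def egrad2rgrad_sphere(x, egrad):
    """Riemannian gradient on the unit sphere: project egrad onto the
    tangent space at x, i.e. rgrad = egrad - <egrad, x> x (with ||x|| = 1)."""
    return egrad - np.dot(x, egrad) * x

x = np.array([1.0, 0.0, 0.0])          # point on the unit sphere
egrad = np.array([2.0, 3.0, -1.0])     # Euclidean gradient at x
rgrad = egrad2rgrad_sphere(x, egrad)   # tangent to the sphere at x
```

The resulting vector is orthogonal to x, which is what makes a retraction-based update stay on the manifold.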
-
Hi Chris, I tried to figure this out but can't find it in Examples or Models.
Recently I had a short discussion with Stijn de Waele about how useful it is to run, say, 4 chains with 4 different observa…