-
I recently came across this paper from last year's NeurIPS:
https://ghliu.github.io/assets/pdf/neurips-snopt-slides.pdf
https://github.com/ghliu/snopt
I was wondering if there are any plans to …
-
Hello,
I am having trouble understanding how to set up the logp function for my use case, specifically the gradient term. I have managed to implement my model using [emcee](https://docs.rs/emcee/la…
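For readers unfamiliar with the setup being asked about: a log-probability function paired with its analytic gradient typically returns both values together so the sampler can use them. A minimal sketch in Python (the Gaussian density here is purely illustrative, not the poster's model, and this is not tied to any particular emcee API):

```python
import numpy as np

def logp_and_grad(theta, mu, inv_sigma):
    """Toy multivariate-Gaussian log-density and its analytic gradient.

    logp(theta) = -0.5 * (theta - mu)^T inv_sigma (theta - mu) + const
    grad(theta) = -inv_sigma @ (theta - mu)
    """
    diff = theta - mu
    logp = -0.5 * diff @ inv_sigma @ diff
    grad = -inv_sigma @ diff
    return logp, grad

# Sanity-check the analytic gradient against central finite differences.
rng = np.random.default_rng(0)
mu = np.zeros(3)
inv_sigma = np.eye(3)
theta = rng.standard_normal(3)
_, g = logp_and_grad(theta, mu, inv_sigma)
eps = 1e-6
for i in range(3):
    e = np.zeros(3)
    e[i] = eps
    fd = (logp_and_grad(theta + e, mu, inv_sigma)[0]
          - logp_and_grad(theta - e, mu, inv_sigma)[0]) / (2 * eps)
    assert abs(fd - g[i]) < 1e-4
```

Checking the hand-written gradient against a finite-difference estimate like this is the usual first debugging step when a gradient-based sampler misbehaves.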
-
@mlubin: I saw the work on `Dual4` you started, and that looks VERY exciting to me. I assume you eventually want to use tuples (i.e. fixed-size arrays) to store the epsilons, so that this would essent…
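The idea of a dual number carrying a fixed-size tuple of epsilon components — so one forward pass propagates partials with respect to several inputs at once — can be sketched roughly as follows (a hypothetical Python illustration, not the actual `Dual4` Julia code being discussed):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dual4:
    # Real part plus a fixed-size tuple of four partial derivatives,
    # so one evaluation propagates derivatives w.r.t. four inputs at once.
    val: float
    eps: tuple  # always length 4

    def __add__(self, other):
        return Dual4(self.val + other.val,
                     tuple(a + b for a, b in zip(self.eps, other.eps)))

    def __mul__(self, other):
        # Product rule: d(xy) = x dy + y dx
        return Dual4(self.val * other.val,
                     tuple(self.val * b + other.val * a
                           for a, b in zip(self.eps, other.eps)))

def seed(values):
    # One dual per input, each seeded with a distinct unit epsilon.
    n = len(values)
    return [Dual4(v, tuple(1.0 if i == j else 0.0 for j in range(n)))
            for i, v in enumerate(values)]

# f(x, y) = x * y + x; its gradient is (y + 1, x).
x, y, z, w = seed([3.0, 5.0, 0.0, 0.0])
out = x * y + x
assert out.val == 18.0
assert out.eps == (6.0, 3.0, 0.0, 0.0)
```

The appeal of a fixed, compile-time size is that the epsilon storage can live on the stack with no heap allocation per operation, which is presumably the motivation behind tuples here.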
-
One of the main motivations for adding `defvjp` to JAX was to add the [adjoint method](https://github.com/google/jax/blob/master/jax/experimental/ode.py#L274) for taking gradients through ODE solu…
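For context, attaching a custom reverse-mode rule in present-day JAX is spelled `jax.custom_vjp` (the `defvjp` mentioned above was an earlier API for the same idea). A minimal sketch with a trivial function, purely to show the fwd/bwd shape:

```python
import jax

# Modern JAX spelling of a custom reverse-mode rule; `defvjp` in the
# quote above refers to an older API with the same purpose.
@jax.custom_vjp
def square(x):
    return x * x

def square_fwd(x):
    # Return the primal output plus residuals the backward pass needs.
    return x * x, x

def square_bwd(residual, cotangent):
    x = residual
    # d(x^2)/dx = 2x, so pull the cotangent back by 2x.
    return (2.0 * x * cotangent,)

square.defvjp(square_fwd, square_bwd)

g = jax.grad(square)(3.0)
assert float(g) == 6.0
```

The ODE adjoint method fits this mold: the "backward pass" solves an auxiliary ODE backwards in time instead of replaying the solver's internal steps.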
-
Currently, only gradients can be computed via the `RustQuant::autodiff` module.
Adding support for full Jacobians, as well as higher-order derivatives such as the Hessian, would be nice.
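As a language-agnostic sketch of why this is a natural extension (Python here for brevity, not RustQuant's API): once you have a gradient primitive for scalar functions, a full Jacobian of an R^n → R^m map can be assembled one row at a time, with the gradient routine standing in for reverse-mode AD:

```python
import numpy as np

def numerical_grad(f, x, eps=1e-6):
    # Stand-in for a reverse-mode gradient of a scalar-valued function.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def jacobian(f, x, m):
    # Row i of the Jacobian is the gradient of the scalar map
    # x -> f(x)[i], so a "gradients only" module called m times
    # already yields the full Jacobian.
    return np.stack([numerical_grad(lambda v: f(v)[i], x) for i in range(m)])

f = lambda v: np.array([v[0] * v[1], v[0] + v[1]])
J = jacobian(f, np.array([2.0, 3.0]), 2)
# Analytic Jacobian: [[v1, v0], [1, 1]] = [[3, 2], [1, 1]]
assert np.allclose(J, [[3.0, 2.0], [1.0, 1.0]], atol=1e-4)
```

A dedicated Jacobian/Hessian API would of course avoid the repeated passes (e.g. via forward-over-reverse for Hessians), but the row-by-row picture is the baseline it would improve on.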
-
```julia
julia> derivative(x->x*derivative(y->x+y,1),1)
ERROR: MethodError: no method matching exprtype(::Core.Compiler.IRCode, ::String)
Closest candidates are:
exprtype(::Core.Compiler.IRCode,…
-
I'm writing this issue to capture some of the ideas mentioned in a private discussion with @wsmoses about the future C++ interface. I'm hoping that this will increase visibility and allow other contri…
-
## Description
Using known identities is a powerful way to test the precision of our functions. Currently, using identities for derivatives is a bit tedious - you need to explicitly compute derivative aga…
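The kind of identity-based derivative test being described might look like the following (a hypothetical Python helper, not this project's actual test framework): compare an analytic derivative against a central finite difference at a set of test points.

```python
import math

def check_derivative_identity(f, fprime, xs, tol=1e-6):
    # Identity-style precision test: the analytic derivative fprime
    # must agree with a central finite difference of f at every point.
    h = 1e-6
    for x in xs:
        fd = (f(x + h) - f(x - h)) / (2 * h)
        assert abs(fd - fprime(x)) < tol, (x, fd, fprime(x))

# Identity under test: d/dx sin(x) == cos(x)
check_derivative_identity(math.sin, math.cos, [0.0, 0.5, 1.3, 2.7], tol=1e-5)
```

A helper in this spirit would let a test state only the identity itself, instead of hand-computing reference derivatives case by case.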
-
Here is an MWE:
```julia
using DifferentialEquations
using Unitful
function ode_system!(du, u, p, t)
    R0, τ, Tref = p
    T = u[1]*u"K"
    dTdt = -T / (1 + R0*(Tref - T)) / τ
…
-
Some functions have a sparse Jacobian or sparse Hessian, and it can be useful to obtain them as sparse matrices rather than accessing the values through vector-Jacobian or vector-Hessian products fu…
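To illustrate why a sparse-aware API pays off (a Python sketch of the general compression idea, with a finite-difference stand-in for an AD Jacobian-vector product — not any particular library's interface): when the sparsity pattern is known, structurally orthogonal columns can share one seed vector, so a tridiagonal n×n Jacobian needs only 3 products instead of n.

```python
import numpy as np

def jvp_fd(f, x, v, eps=1e-6):
    # Stand-in for an AD Jacobian-vector product J(x) @ v.
    return (f(x + eps * v) - f(x - eps * v)) / (2 * eps)

def tridiag_jacobian(f, x):
    # For a tridiagonal Jacobian, columns {c, c+3, c+6, ...} never share
    # a nonzero row, so three seed vectors (a 3-coloring of the columns)
    # recover every entry instead of one JVP per column.
    n = x.size
    J = np.zeros((n, n))
    for color in range(3):
        seed = np.zeros(n)
        seed[color::3] = 1.0
        col_sum = jvp_fd(f, x, seed)  # sum of the same-colored columns
        for j in range(color, n, 3):
            for i in range(max(0, j - 1), min(n, j + 2)):
                J[i, j] = col_sum[i]  # unambiguous: no column collision
    return J

# Toy map with tridiagonal Jacobian: diag 2*v[i], off-diagonals 1.
f = lambda v: np.array(
    [v[0]**2 + v[1]]
    + [v[i-1] + v[i]**2 + v[i+1] for i in range(1, v.size - 1)]
    + [v[-2] + v[-1]**2])
x = np.arange(1.0, 6.0)
J = tridiag_jacobian(f, x)
expected = np.diag(2 * x) + np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
assert np.allclose(J, expected, atol=1e-4)
```

Exposing materialized sparse matrices would let a library apply this kind of coloring internally and hand back a standard sparse format, instead of making every caller orchestrate the products themselves.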