-
### Expected behavior
I want to call `qml.adjoint(qml.DepolarizingChannel)(0.1, wires=0)` with the `default.mixed` device. I expect the circuit to perform the adjoint of `qml.DepolarizingChannel`.
### …
-
**Describe the bug 🐞**
Taking the gradient with respect to a vector of parameter values (which are substituted into the parameter object) does not work
with `MTKParameters`.
**Expected behavior**
…
-
This might clear up some of the murkiness inside the `bwd` pass!
For example, some sources of inspiration for this are:
- https://github.com/patrick-kidger/diffrax/blob/main/diffrax/_adjoint.py
- https://githu…
-
This is part of INLA roadmap #340.
From the Stan [paper](https://arxiv.org/abs/2004.12550):
>One of the main bottlenecks is differentiating the estimated mode, $\theta^* $. In theory, it is stra…
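A sketch of the implicit-function-theorem route (my paraphrase of the standard argument, with notation assumed rather than taken verbatim from the paper): the mode $\theta^*(\phi)$ satisfies the stationarity condition $\nabla_\theta \ell(\theta^*, \phi) = 0$, and differentiating this identity with respect to $\phi$ gives

$$
\nabla^2_\theta \ell \, \frac{d\theta^*}{d\phi} + \nabla_\phi \nabla_\theta \ell = 0
\quad\Longrightarrow\quad
\frac{d\theta^*}{d\phi} = -\left(\nabla^2_\theta \ell\right)^{-1} \nabla_\phi \nabla_\theta \ell,
$$

so the Jacobian of the mode requires only a Hessian solve rather than differentiating through the inner optimizer.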
-
```julia
julia> typeof(E)
Symbolics.Arr{Num, 2}
julia> typeof(d)
Vector{Num} (alias for Array{Num, 1})
julia> ex2 = d' * inv(E) * d
ERROR: MethodError: *(::Adjoint{Num, Vector{Num}}, ::Symbolics.…
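```

If helpful: a common workaround for these `Symbolics.Arr` method errors (my assumption, not confirmed for this exact case) is to materialize the symbolic array with `collect` before applying generic linear algebra:

```julia
using Symbolics, LinearAlgebra

@variables E[1:2, 1:2]    # symbolic matrix (Symbolics.Arr{Num, 2})
@variables d[1:2]         # symbolic vector
dv = collect(d)           # plain Vector{Num}

# `collect(E)` unwraps the Arr into a Matrix{Num}, for which
# `inv` and `*` have working methods.
ex2 = dv' * inv(collect(E)) * dv
```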
-
[This paper on EventProp](https://arxiv.org/abs/2009.08378) by @cpehle presents a super cool adjoint method for SNN optimisation.
We should definitely have this as the default gradient approach in …
-
We are experimenting with some models/architectures inspired by the NODE model. Given a point (t, x), the idea is to solve an ODE system whose definition uses a neural network (and also its derivative) and …
-
Hello, I was comparing the performance of torchdiffeq's `odeint` and your symplectic `odeint`, and I get the following error in my code segment:
```python
class Lambda(nn.Module):
def forward(self,…
-
Example code copied from README not working (julia-1.3.0, Calculus-0.5.1):
```julia
julia> using Calculus
julia> f(x) = sin(x)
f (generic function with 1 method)
julia> f'(1.0) - cos(1.0)
ER…
-
In the continuous adjoint equations, the gradient of a functional w.r.t. parameters also depends on how the _initial_ states of the system ODE depend on those parameters. If I understand correctl…
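In one common convention (my sketch, with symbols assumed rather than taken from this thread): for $\dot{x} = f(x, p, t)$ with parameter-dependent initial state $x(t_0) = x_0(p)$ and objective $G(p) = g(x(t_1))$, the adjoint $\lambda$ solves $\dot{\lambda} = -(\partial f / \partial x)^\top \lambda$ backward from $\lambda(t_1) = \partial g / \partial x(t_1)$, and

$$
\frac{dG}{dp} = \lambda(t_0)^\top \frac{\partial x_0}{\partial p}
+ \int_{t_0}^{t_1} \lambda(t)^\top \frac{\partial f}{\partial p}\, dt,
$$

where the first term is exactly the initial-state contribution described above.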