-
Consider the code:
```
# assuming Zygote provides `gradient`; `tr` needs LinearAlgebra
using SparseArrays, LinearAlgebra, Random, Zygote
n = 5
rng = Random.default_rng()
r = rand(n, n)
h = sprandn(rng, n, n, 0.1)
gradient(h -> tr(r * h), h)[1]
```
Currently this returns a dense `Array`. Is it possible to have i…
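A note on why the dense result is expected here: for f(h) = tr(r·h), the gradient with respect to h is rᵀ, which is dense regardless of the sparsity of h. The following is a NumPy sketch of that calculus (not Zygote itself), checking the analytic gradient against central finite differences:
```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
r = rng.standard_normal((n, n))
h = rng.standard_normal((n, n))

def f(h):
    # f(h) = tr(r @ h) = sum_ij r[i, j] * h[j, i]
    return np.trace(r @ h)

# Analytic gradient of tr(r @ h) with respect to h is r.T (dense),
# which is why an AD system naturally returns a dense array here.
grad_analytic = r.T

# Check against central finite differences.
eps = 1e-6
grad_fd = np.zeros_like(h)
for i in range(n):
    for j in range(n):
        e = np.zeros_like(h)
        e[i, j] = eps
        grad_fd[i, j] = (f(h + e) - f(h - e)) / (2 * eps)

assert np.allclose(grad_analytic, grad_fd, atol=1e-5)
```
So returning a sparse gradient would require the AD system to project the (dense) cotangent onto the sparsity pattern of `h`, which is a design choice rather than a mathematical given.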
-
I am trying to create a new (Float64) vector based on an input.
Here is a simplified example:
```
julia> function f(x)
           L1 = 1.0
           L2 = 100.0
           n1 = x
           while n1 > L1
               n1 /= 2.0
           end
…
-
I'm curious what type of functions one should be able to `grad` over, i.e. what is the implicit restriction on `a` in
`def grad (f:a->Float) (x:a) : a = snd (vjp f x) 1.0`
Currently it seems to work f…
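To make the question concrete, here is a NumPy sketch of the same construction: `grad` is the pullback from `vjp` applied to the scalar cotangent `1.0`. The `vjp` below is a numeric stand-in (finite differences), not Dex's; the implicit restriction it exposes is that `a` must carry tangent/vector-space structure so that the pullback of a scalar lands back in `a`:
```python
import numpy as np

def vjp(f, x, eps=1e-6):
    """Numeric stand-in for a reverse-mode vjp: returns f(x) and a
    pullback mapping a scalar cotangent to a cotangent in x's space."""
    y = f(x)
    def pullback(ct):
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e.flat[i] = eps
            # central finite difference for each coordinate of x
            g.flat[i] = (f(x + e) - f(x - e)) / (2 * eps)
        return ct * g
    return y, pullback

def grad(f, x):
    # Mirrors the Dex definition: snd (vjp f x) applied to 1.0.
    _, pullback = vjp(f, x)
    return pullback(1.0)

x = np.array([1.0, 2.0, 3.0])
g = grad(lambda v: np.sum(v ** 2), x)  # analytic gradient is 2 * x
```
Types like integers or sum types have no such tangent structure, which is where a definition like this stops applying.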
srush updated
2 years ago
-
Not sure this deserves a new issue. Related to https://github.com/EnzymeAD/Enzyme.jl/issues/144 and https://github.com/EnzymeAD/Enzyme.jl/issues/265 , I'm trying to get `reduce` to work with `CUDA.jl`…
-
Hi,
I tried to optimize the alpha_u/alpha_v of the roughconductor BRDF, the environment map illumination, and the mesh (separately). I did not manage to optimize the parameters with the Adam optimizer …
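For reference when debugging the optimizer itself, here is a minimal textbook Adam update (Kingma & Ba) on a toy quadratic, independent of the renderer; the target values standing in for alpha_u/alpha_v are hypothetical:
```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    # Standard Adam update: exponential moving averages of the gradient
    # and its square, with bias correction for the first t steps.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy problem: minimize ||theta - target||^2.
target = np.array([0.3, 0.7])   # hypothetical stand-in for alpha_u/alpha_v
theta = np.array([1.0, 0.05])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 501):
    grad = 2 * (theta - target)
    theta, m, v = adam_step(theta, grad, m, v, t)
```
If this converges but the rendered-loss version does not, the issue is more likely in the gradients reaching the optimizer than in the Adam update.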
-
I mentioned this to Rachael the other day on Twitter, but it might be useful to add here:
Since autograd appears to be the primary reason for the Python dependency, I'd recommend taking a look at the…
-
Here's Stan code I put together. The `rng` function includes lower and upper bounds because I needed a truncated distribution, but they can be removed for consistency with the other `rng` functions.
```
…
-
I’m willing to put some effort into the ONNX story if there is some interest.
TBH I don’t really know what fastAI is about so if there is some special meaning to ONNX + fastAI then please let me k…
-
Enzyme's integration into Rust should:
- give the user precise control over which variables are constant, primal, and adjoint
- make it easy to mark a function as a candidate for differentiation
- i…
-
As noticed in https://github.com/google/jax/pull/3398:
```
In [9]: from jax import vjp
In [10]: out, f_vjp = vjp(lambda x: 1j * x, 1.0)
In [11]: f_vjp(1 + 0j) # wrong!
Out[11]: (DeviceArray(…