-
## Feature request
It would be great if Numba supported automatic differentiation. Using [Enzyme](https://github.com/EnzymeAD/Enzyme) might be the easiest way, as it operates directly on th…
-
We could possibly have the following examples:
- [ ] A simple U-Net (possibly pulled from the fastMRI dataset), trained? @Lenoush can you handle this in your free time?
- [ ] A network built with DeepIn…
-
When I read the Taichi docs, I saw that Taichi supports autodiff via `kernel_func.grad`. So is it possible to directly use `kernel_func.grad` in the backward function in taichi-3dgs?
-
This is a tracking issue for the automatic differentiation experiment ("autodiff").
The feature gate for the issue will be `#![feature(autodiff)]`.
### About tracking issues
Tracking issues a…
-
The `rand(Binomial(n, sampledval))` calls fail because `Binomial` can't accept autodiff tracker types.
-
Lots of boxes! Gotta check them all!
### Forward mode
- [ ] libxc interfacing
- [ ] upstream FFT stuff
- [ ] figure out the T = promote_type pattern and use it consistently
- [ ] make sure ever…
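To make the forward-mode items above concrete, here is a minimal pure-Python sketch of forward-mode AD via dual numbers. It is illustrative only: real forward-mode implementations generalize this with type promotion (the `T = promote_type` pattern mentioned in the list) so that dual numbers mix correctly with plain numeric types.

```python
class Dual:
    """A number val + eps*d, where eps carries the derivative."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.eps + other.eps)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (u*v)' = u*v' + u'*v
        return Dual(self.val * other.val,
                    self.val * other.eps + self.eps * other.val)
    __rmul__ = __mul__

def derivative(f, x):
    # Seed the perturbation with 1.0 and read the derivative back out.
    return f(Dual(x, 1.0)).eps

# d/dx (3x^2 + 2x) = 6x + 2, so at x = 4 this is 26.
print(derivative(lambda x: 3 * x * x + 2 * x, 4.0))  # 26.0
```

The promotion question in the checklist is exactly the part this sketch dodges: `__add__`/`__mul__` coerce plain numbers to `Dual` ad hoc, whereas a library has to decide the promoted element type once and use it consistently.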
-
Hey there friends of JSO!
@adrhill and I have been hard at work on two autodiff toolkits which you might find useful:
- [DifferentiationInterface.jl](https://github.com/gdalle/DifferentiationInt…
-
Hi @gdalle,
Awesome package, thanks for making it!
I am considering whether to use this as the backend interface of [SymbolicRegression.jl](https://github.com/MilesCranmer/SymbolicRegression.jl)…
-
Hi @TimSiebert1!
I stumbled upon your [JuliaCon abstract](https://pretalx.com/juliacon2024/talk/AADQNR/) and it made me very excited about your package! I'm the lead developer (with @adrhill) of [D…
-
**Describe the bug**
Using the OpenGL or Vulkan backends results in nonsensical values even in the most basic examples. CPU and CUDA work fine.
**To Reproduce**
```py
import taichi as ti
…