facebookresearch / diffkt

A framework for automatic differentiation in Kotlin
MIT License

Add no grad ops for GPU #25

Open fizzxed opened 3 years ago

fizzxed commented 3 years ago

To enable derivatives on the GPU via PyTorch, ops need to return handles to their input arguments. This is not needed in the base/GpuFloatScalarOps case. We could instead create C++ bindings that don't return these handles, so we don't have to worry about cleaning them up in Kotlin code.

Ops affected as of this writing: Matmul, plus
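
To make the trade-off concrete, here is a minimal, hypothetical Kotlin sketch of the two binding styles described above. It is not the actual diffkt or PyTorch binding API: the names `GradOpResult`, `matmulWithGrad`, `matmulNoGrad`, and `nativeMatmul` are made up for illustration, and tensor handles are modeled as plain `Long`s rather than real JNI handles.

```kotlin
/**
 * Result of a grad-tracking op: the output handle plus handles to the
 * inputs that must be kept alive for the backward pass. The Kotlin side
 * is then responsible for releasing those retained handles later.
 */
data class GradOpResult(val output: Long, val retainedInputs: List<Long>)

/**
 * Hypothetical grad-tracking matmul: returns the output handle along with
 * the input handles that need cleanup in Kotlin code.
 */
fun matmulWithGrad(lhs: Long, rhs: Long): GradOpResult {
    val output = nativeMatmul(lhs, rhs)          // stand-in for the JNI call
    return GradOpResult(output, listOf(lhs, rhs))
}

/**
 * Hypothetical no-grad matmul for the base/GpuFloatScalarOps case: the
 * native side does not retain the inputs, so nothing needs cleanup here.
 */
fun matmulNoGrad(lhs: Long, rhs: Long): Long =
    nativeMatmul(lhs, rhs)

// Placeholder standing in for the native binding; in practice this would be
// an `external fun` backed by the C++/PyTorch bindings.
fun nativeMatmul(lhs: Long, rhs: Long): Long = lhs + rhs  // dummy handle

fun main() {
    val tracked = matmulWithGrad(1L, 2L)
    println("output=${tracked.output}, must release=${tracked.retainedInputs}")
    println("no-grad output=${matmulNoGrad(1L, 2L)}")
}
```

The point of the no-grad variant is that the caller's code path stays handle-free: nothing extra escapes the native layer, so there is no per-op cleanup bookkeeping on the Kotlin side.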