JuliaSIMD / Polyester.jl

The cheapest threads you can find!
MIT License

Threaded Jacobians and Hessians #50

Open alanderos91 opened 2 years ago

alanderos91 commented 2 years ago

Hello! I am interested in using Polyester.jl to evaluate Jacobians and Hessians in parallel, similar to threaded_gradient! over here using the ForwardDiff API. A parallel Jacobian is the higher priority for me and my collaborators.

We'd be happy to contribute a PR to implement this but I'm unsure of exactly what needs to be done and what tests ought to be included.

chriselrod commented 2 years ago

> Hello! I am interested in using Polyester.jl to evaluate Jacobians and Hessians in parallel, similar to threaded_gradient! over here using the ForwardDiff API. A parallel Jacobian is the higher priority for me and my collaborators.
>
> We'd be happy to contribute a PR to implement this but I'm unsure of exactly what needs to be done and what tests ought to be included.

Hi, that'd be great. Once you've implemented a threaded Jacobian function, the threaded hessian should be trivial (i.e., take the threaded Jacobian of a [non-threaded] gradient). The threaded gradient code was based on the chunk mode gradient code in ForwardDiff.jl. The basic idea is to break up the gradient vector among threads, and then have the threads evaluate code for their local chunks.
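The composition described above can be sketched as follows. This is a minimal illustration, not code from the repo: `hessian_via_jacobian` and its `jac` keyword are hypothetical names, and ForwardDiff's serial `jacobian` stands in for whatever threaded Jacobian eventually gets implemented.

```julia
using ForwardDiff

# Sketch: a threaded Hessian is a (threaded) Jacobian of a non-threaded
# gradient. `jac` is a stand-in slot for the future threaded Jacobian;
# it defaults to ForwardDiff's serial chunk-mode jacobian here.
function hessian_via_jacobian(f, x; jac = ForwardDiff.jacobian)
    ∇f(y) = ForwardDiff.gradient(f, y)  # inner gradient stays serial
    return jac(∇f, x)                   # outer differentiation is what gets threaded
end
```

With `f(x) = sum(abs2, x)`, this returns `2I`, matching the analytic Hessian.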

So I'd recommend the same basic approach. You can use the chunk mode jacobian code as a reference. Basically, divide the input vector up among threads (i.e. you're splitting up the inputs by which ones become dual numbers), and then have each thread iterate over their local chunks.
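As a toy illustration of splitting the inputs among threads, the sketch below parallelizes over columns with Polyester's `@batch`, computing each column as a directional derivative. This is an assumption-laden simplification (the function name is made up, and a real implementation would seed chunk-sized Dual partials per task and reuse per-thread caches, as threaded_gradient! does, rather than one direction at a time):

```julia
using Polyester, ForwardDiff

# Hedged sketch: each @batch task owns a subset of the input indices and
# fills only its own columns of J. Column j is the directional derivative
# of f along the j-th basis vector.
function threaded_jacobian_cols(f, x::AbstractVector)
    y = f(x)
    J = similar(x, length(y), length(x))
    @batch for j in eachindex(x)
        ej = zero(x)
        ej[j] = 1                      # j-th basis vector
        J[:, j] .= ForwardDiff.derivative(t -> f(x .+ t .* ej), zero(eltype(x)))
    end
    return J
end
```

The real chunk-mode version would give each task a contiguous block of indices and seed all of that block's partials in one Dual evaluation, amortizing the cost of calling f.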

I would also like the Jacobian function to store the vector output and make it available, in case anyone using it needs it, to save them from having to reevaluate the function.