SciML / SparsityDetection.jl

Automatic detection of sparsity in pure Julia functions for sparsity-enabled scientific machine learning (SciML)
MIT License

Using a dictionary to reference indices of the input #24

Open chenwilliam77 opened 4 years ago

chenwilliam77 commented 4 years ago

The following code yields the identity as the detected sparsity pattern, as it should:

using SparsityDetection

function f1(dx, x)
    for i in 1:length(x)
        dx[i] = x[i]^2
    end
end
input = rand(10)
output = similar(input)
sparsity_pattern1 = sparsity!(f1, output, input)

However, the following code does not work: the detected pattern is all zeros instead of true along the diagonal.

k = Dict(i => i for i in 1:10)
function f2(dx, x)
    for i in 1:length(x)
        dx[i] = x[k[i]]^2
    end
end
input = rand(10)
output = similar(input)
sparsity_pattern2 = sparsity!(f2, output, input)
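If the Dict lookup is what defeats the tracing, one possible workaround is to materialize the mapping into a plain Vector before calling `sparsity!`, so that all indexing happens on ordinary arrays. This is a sketch only; whether it actually restores the diagonal pattern under `sparsity!` would need to be tested:

```julia
# Sketch of a possible workaround: convert the Dict mapping into a
# Vector of indices ahead of time, so the traced function only
# indexes into plain arrays.
k = Dict(i => i for i in 1:10)
k_vec = [k[i] for i in 1:10]   # materialize the Dict into a Vector

function f3(dx, x)
    for i in 1:length(x)
        dx[i] = x[k_vec[i]]^2
    end
end

input = rand(10)
output = similar(input)
f3(output, input)              # output[i] == input[i]^2
```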

The envisioned use case is a dictionary mapping Symbol keys to indices, so that users can reference a particular entry of a vector or matrix by the name of what that entry represents. An example of a package that uses this approach extensively is DSGE.jl.
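As a concrete sketch of that pattern (the dictionary and equation names here are hypothetical, not taken from DSGE.jl):

```julia
# Hypothetical sketch of the Symbol-to-index pattern described above.
idx = Dict(:consumption => 1, :capital => 2, :labor => 3)

function model_eqs!(dx, x)
    # Each equation references entries by name rather than raw index
    dx[idx[:consumption]] = x[idx[:capital]]^2
    dx[idx[:capital]]     = x[idx[:consumption]] + x[idx[:labor]]
    dx[idx[:labor]]       = x[idx[:labor]]
end

x  = [1.0, 2.0, 3.0]
dx = similar(x)
model_eqs!(dx, x)   # dx == [4.0, 4.0, 3.0]
```

Sparsity detection would ideally see through the `idx[...]` lookups, since each resolves to a fixed integer index.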

Note, I'm using Julia 1.1.

ChrisRackauckas commented 4 years ago

Thank you for the report. Indeed this would be good to handle.

BTW, for DSGE.jl, we have a full solution coming which analytically builds the derivatives via ModelingToolkit. I don't know if you've talked with Jesse about this.

chenwilliam77 commented 4 years ago

For DSGE.jl, that's great to hear! Jesse has mostly talked to us (way back in early fall) about some of the new sparse autodiff tools that have been implemented, but nothing about analytically building derivatives. Could you explain a bit more about them? What are the use cases you are envisioning for these new additions to ModelingToolkit?

ChrisRackauckas commented 4 years ago

We're building analytical solutions to the derivatives in order to utilize accelerated fitting methods which require fast and accurate derivatives.
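To illustrate why analytical derivatives matter (this is a self-contained sketch, not the ModelingToolkit pipeline itself): a hand-derived Jacobian is exact, while finite differences carry truncation and cancellation error and cost one extra function evaluation per input dimension.

```julia
# Toy system: f : R^2 -> R^2
f(x) = [x[1]^2 + x[2], sin(x[1]) * x[2]]

# Hand-derived analytic Jacobian of f (exact)
J_analytic(x) = [2x[1]           1.0;
                 cos(x[1])*x[2]  sin(x[1])]

# Forward finite-difference Jacobian for comparison (O(h) error)
function J_fd(f, x; h=1e-6)
    fx = f(x)
    J = zeros(length(fx), length(x))
    for j in 1:length(x)
        xp = copy(x)
        xp[j] += h
        J[:, j] = (f(xp) .- fx) ./ h
    end
    return J
end

x0 = [0.5, 2.0]
# The two agree only up to the finite-difference step error:
maximum(abs.(J_analytic(x0) .- J_fd(f, x0)))
```

Fitting methods that use gradients or Jacobians heavily (e.g. Newton-type optimizers) benefit directly from the exact, cheaper-to-evaluate analytic form.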