lanl-ansi / MathOptAI.jl

Embed trained machine learning predictors in JuMP
https://lanl-ansi.github.io/MathOptAI.jl/

Threading and PyTorch #116

Closed · odow closed this 2 weeks ago

odow commented 2 weeks ago

@mjgarc has an application in which he solves a bunch of models with threading.

There's a need to lift the PyTorchModel out of the threading loop.

We should also check that a unique JuMP model is being built in each loop. (Perhaps I mis-read the slide.)

See https://jump.dev/JuMP.jl/dev/tutorials/algorithms/parallelism/#With-multi-threading
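To illustrate this suggestion, here is a minimal sketch following the pattern in the linked tutorial. The file name model.pt, the scenario data, and the Ipopt solver are placeholders, and the calls (PytorchModel, build_predictor, add_predictor) are taken from the MathOptAI documentation rather than from anything shown in this thread:

```julia
using JuMP
import Ipopt
import MathOptAI
import PythonCall  # needed for MathOptAI's PyTorch support

# Lift the PyTorch predictor out of the threading loop: read the file and
# convert it to a pure-Julia predictor once, so that no Python is called on
# the worker threads. `model.pt` is a placeholder file name.
pytorch_model = MathOptAI.PytorchModel("model.pt")
predictor = MathOptAI.build_predictor(pytorch_model)

scenarios = [1.0, 2.0, 3.0]  # placeholder data
results = Vector{Float64}(undef, length(scenarios))
Threads.@threads for i in eachindex(scenarios)
    # Build a unique JuMP model in each iteration; no JuMP object is shared
    # between threads.
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, 0 <= x[1:1] <= scenarios[i])
    # Recent MathOptAI releases return (y, formulation); older ones return y.
    y, _ = MathOptAI.add_predictor(model, predictor, x)
    @objective(model, Min, sum(y))
    optimize!(model)
    results[i] = objective_value(model)
end
```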

mjgarc commented 2 weeks ago

Thanks for the help, Oscar.

My pseudocode on the slide had a mistake: it should build the JuMP model in each loop iteration. (I was doing this part correctly in my actual code.)

The image below is a better representation of what I’m doing in my code. Does this look correct?

Do I also need to make a copy of the MathOptAI.Pipeline object in each iteration of the for loop?

Perhaps I should instead store the results to disk within the function _build_and_solve()?

[image: pseudocode of the threaded loop that builds and solves a JuMP model in each iteration]
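A sketch of how that loop might look, purely as one interpretation of the comment above (the image itself is not reproduced here). The helper name _build_and_solve comes from the comment; the pipeline construction, scenario data, and output paths are hypothetical. Because the loop body only reads the pipeline, sharing a single MathOptAI.Pipeline across iterations should be fine, and writing each result to disk inside the function avoids collecting results across threads:

```julia
using JuMP
import Ipopt
import MathOptAI
import PythonCall  # needed for MathOptAI's PyTorch support

# Hypothetical helper: builds, solves, and saves one scenario. It only reads
# `pipeline`, so no per-iteration copy is made.
function _build_and_solve(pipeline, scenario, filename)
    model = Model(Ipopt.Optimizer)
    set_silent(model)
    @variable(model, 0 <= x[1:1] <= scenario)
    y, _ = MathOptAI.add_predictor(model, pipeline, x)
    @objective(model, Min, sum(y))
    optimize!(model)
    # Store the result straight to disk rather than returning it.
    open(filename, "w") do io
        println(io, objective_value(model))
    end
    return
end

# The only Python call happens here, once, before the threads start.
pipeline = MathOptAI.build_predictor(MathOptAI.PytorchModel("model.pt"))
scenarios = [1.0, 2.0, 3.0]  # placeholder data
Threads.@threads for i in eachindex(scenarios)
    _build_and_solve(pipeline, scenarios[i], "result_$i.txt")
end
```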

odow commented 2 weeks ago

Oh, yeah, that looks better.

I think everything is correct now.

odow commented 2 weeks ago

The threading issue when calling into Python is likely related to the GIL? Python doesn't really run threads in parallel, so it makes sense that our connection to it has some issues.