Closed — odow closed this 2 weeks ago
Thanks for the help, Oscar.
My pseudocode on the slide had a mistake: it should build the JuMP model in each loop iteration. (I was doing this part correctly in my actual code.)
The image below is a better representation of what I’m doing in my code. Does this look correct?
Do I also need to make a copy of the MathOptAI.Pipeline object in each iteration of the for loop?
Or perhaps I should instead write the results to disk inside the function _build_and_solve()?
Oh, yeah, that looks better.
I think everything is correct now.
The threading issue when calling into Python is likely related to the GIL. Python doesn't allow true multi-threaded execution, so it makes sense that calling into it from multiple Julia threads causes problems.
@mjgarc has an example where he solves a bunch of models with threading.
The PyTorchModel needs to be lifted out of the threading loop so that it is constructed only once.
We should also check that a unique JuMP model is being built in each loop iteration. (Perhaps I misread the slide.)
See https://jump.dev/JuMP.jl/dev/tutorials/algorithms/parallelism/#With-multi-threading
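The pattern in that tutorial can be sketched roughly as follows. This is a minimal illustration, not the actual code from the slide: HiGHS is an assumed solver, and the `_build_and_solve` body here is a trivial placeholder standing in for the real model build (which would use the shared predictor via MathOptAI). The key points are that any expensive shared object is constructed once before the loop, a fresh JuMP model is built inside each iteration, and each thread writes to its own slot of a preallocated results vector:

```julia
using JuMP, HiGHS  # HiGHS is an assumption; any thread-safe solver works

# Construct expensive shared objects (e.g., the PyTorchModel or
# MathOptAI.Pipeline) ONCE, outside the threaded loop, so Python is
# not re-entered to load the model on every iteration.

function _build_and_solve(i)
    # Build a brand-new JuMP model in every call; JuMP models are not
    # safe to share or mutate across threads.
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, x >= i)
    @objective(model, Min, x)
    optimize!(model)
    return objective_value(model)
end

# Preallocate the output so each thread writes to a distinct index,
# rather than push!-ing to a shared collection (which is a data race).
results = Vector{Float64}(undef, 10)
Threads.@threads for i in 1:10
    results[i] = _build_and_solve(i)
end
```

Note the GIL caveat above may still apply if the lifted predictor is evaluated inside the loop, since each `add_predictor` call can re-enter Python.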