Open · gdalle opened this issue 5 months ago
Don't define the pipelines. We can put stuff in an InferOpt extension relying on InferOptBenchmarks to train our pipelines.
Imagine we define these benchmarks for a comparison between InferOpt and a competitor.
InferOptBenchmarks should not depend on InferOpt. Maybe we should rename it to "DecisionFocusedLearningBenchmarks".
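That split could be wired with a Julia package extension on the InferOpt side; a sketch of what InferOpt's Project.toml might gain (the extension name is an assumption, and the UUID is deliberately left as a placeholder):

```toml
# hypothetical extension wiring in InferOpt's Project.toml
[weakdeps]
InferOptBenchmarks = "<uuid of InferOptBenchmarks>"

[extensions]
# loaded only when InferOptBenchmarks is present; pipeline training code lives here
InferOptBenchmarksExt = "InferOptBenchmarks"
```

This way the dependency arrow points from InferOpt to the benchmarks, never the other way around.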
Take inspiration from https://github.com/JuliaSmoothOptimizers/OptimizationProblems.jl
In the docs and tests of this package, use a black-box optimizer in the pipeline (no autodiff shenanigans) to learn without depending on InferOpt.
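A minimal sketch of such a black-box learning loop: random search over the weights of a linear model, no autodiff and no InferOpt dependency. The toy loss and all names here are illustrative, not the package API.

```julia
using Random

# naive random search: keep perturbations that improve the black-box loss
function random_search(loss, w0; iters=1_000, step=0.1, rng=MersenneTwister(0))
    w, best = copy(w0), loss(w0)
    for _ in 1:iters
        candidate = w .+ step .* randn(rng, length(w))
        l = loss(candidate)
        if l < best
            w, best = candidate, l
        end
    end
    return w
end

# toy pipeline loss: fit a linear predictor to synthetic targets
rng = MersenneTwister(42)
X = randn(rng, 5, 20)
true_w = randn(rng, 5)
Y = vec(true_w' * X)
loss(w) = sum(abs2, vec(w' * X) .- Y)

w_learned = random_search(loss, zeros(5))
```

Any derivative-free method would do here; the point is only that the docs and tests exercise the benchmark pipeline without pulling in autodiff machinery.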
Today's meeting notes:
- Pass `rng` to model generation, because Flux networks contain their parameters.
- `instance` as a kwarg to `model`: Flux doesn't allow it, but GraphNeuralNetworks would, for example.
- `generate_maximizer`: use a callable struct instead of a closure.
- `theta` and friends don't have to be arrays all the time.
- Rename `compute_gap` into `average_gap`.
- No need to restrict to `AbstractArray`: `Nothing` will error anyway.
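The "callable struct instead of a closure" point could look like this; the struct name, its field, and the decision rule are all made up for illustration:

```julia
# Illustrative callable struct playing the role of the closure that
# generate_maximizer would return.
struct ThresholdMaximizer
    threshold::Float64
end

# Making the struct callable keeps its parameters inspectable and
# dispatchable, unlike variables captured inside a closure.
function (m::ThresholdMaximizer)(theta; kwargs...)
    return theta .> m.threshold   # toy decision rule
end

maximizer = ThresholdMaximizer(0.0)
y = maximizer([1.0, -2.0, 0.5])
```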
Interface
- what `generate_blabla` does
- `generate_maximizer` does not return a differentiable layer
- `generate_maximizer`: the signature (args and kwargs) of the returned closure
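One way to pin these points down in a docstring; the exact signature, argument names, and kwargs below are assumptions, not the actual API:

```julia
# Hypothetical docstring sketch for the interface documentation.
"""
    generate_maximizer(benchmark)

Return a callable `maximizer(theta; instance)` where

- `theta` is the objective direction (not necessarily an array);
- `instance` carries instance-specific data as a keyword argument.

The returned callable is a plain function of `theta`,
NOT a differentiable layer.
"""
function generate_maximizer(benchmark)
    return (theta; instance=nothing) -> theta .> 0  # toy decision rule
end

m = generate_maximizer(nothing)
```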
Getting data
- Data sources
- Problem meaning
- Varying instance sizes
- Modify `ShortestPathBenchmark` to draw a random grid size from specified ranges of height and width, then see what you need in the interface to make it work.
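A sketch of what the variable-size experiment could look like; the struct, its fields, and `generate_instance` are assumptions meant to probe the interface, not the actual `ShortestPathBenchmark`:

```julia
using Random

# hypothetical benchmark carrying size ranges instead of a fixed grid
struct VariableSizeShortestPath
    height_range::UnitRange{Int}
    width_range::UnitRange{Int}
end

# draw a random grid size per instance, then the cost grid itself
function generate_instance(rng::AbstractRNG, b::VariableSizeShortestPath)
    h = rand(rng, b.height_range)
    w = rand(rng, b.width_range)
    return rand(rng, h, w)   # toy cell costs; every instance can differ in size
end

rng = MersenneTwister(0)
inst = generate_instance(rng, VariableSizeShortestPath(5:10, 3:8))
```

Once instances vary in size, anything in the interface that bakes in a fixed `theta` dimension (or a shared statistical model output size) surfaces immediately, which is the point of the exercise.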