apricot implements submodular optimization for selecting subsets of massive data sets so that machine learning models can be trained quickly. See the documentation page: https://apricot-select.readthedocs.io/en/latest/index.html
This PR reorganizes the code to decouple the optimizer from the selection object. The goal is for the selection object to encode the function, calculate the gains, and store the selected subset. This lets users write their own optimizers and apply them to any function, or write a new function without needing to know how to implement an efficient optimizer. This PR also adds mixtures of submodular functions and the bidirectional greedy algorithm for non-monotone submodular functions.
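The decoupling described above can be sketched roughly as follows. This is a minimal illustration, not apricot's actual API: the class and method names (`FeatureBasedFunction`, `gains`, `select`, `NaiveGreedyOptimizer`) are hypothetical, and the function shown is a simple feature-based submodular function f(S) = Σ_d sqrt(Σ_{i∈S} X[i,d]).

```python
import numpy as np

class FeatureBasedFunction:
    """Hypothetical stand-in for a selection object: it encodes the
    function f(S) = sum_d sqrt(sum_{i in S} X[i, d]), computes marginal
    gains, and stores the selected subset -- but knows nothing about
    the optimizer that drives it."""

    def __init__(self, X):
        self.X = np.asarray(X, dtype=float)
        self.totals = np.zeros(self.X.shape[1])  # per-feature column sums of S
        self.ranking = []                        # indices selected so far

    def gains(self, candidates):
        # Marginal gain of adding each candidate row to the current subset:
        # f(S + {i}) - f(S), vectorized over candidates.
        current = np.sqrt(self.totals).sum()
        return np.sqrt(self.totals + self.X[candidates]).sum(axis=1) - current

    def select(self, idx):
        # Commit one element to the subset and update cached statistics.
        self.totals += self.X[idx]
        self.ranking.append(idx)

class NaiveGreedyOptimizer:
    """A user-written optimizer that relies only on the gains/select
    interface, per the decoupled design: it could be swapped for a lazy
    greedy or bidirectional greedy without touching the function."""

    def run(self, func, n_select):
        n = func.X.shape[0]
        for _ in range(n_select):
            candidates = np.array([i for i in range(n) if i not in func.ranking])
            g = func.gains(candidates)
            func.select(int(candidates[np.argmax(g)]))
        return func.ranking
```

Because the optimizer only touches `gains` and `select`, any function object exposing that interface (including a mixture of several submodular functions) can be paired with any optimizer.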