facebook / Ax

Adaptive Experimentation Platform
https://ax.dev

[GENERAL SUPPORT]: SEBO with parameter constraints #2790


souravdey94 commented 2 months ago

Question

I am trying to predict chemical reaction rates in different solvent combinations. I want to use SEBO because the parameter space can contain up to 30 solvents, and in most cases only 3 to 4 solvents are important. Since it is a composition problem, I need to use parameter constraints, but SEBO with parameter constraints is not implemented in Ax. Can you suggest a workaround?

I have added code snippets of the experiment setup and generation strategy below.

Please provide any relevant code snippet if applicable.

import torch
from botorch.models import SaasFullyBayesianSingleTaskGP

length = len(solvent_names_minus1)
print('length', length)

torch.manual_seed(12345)  # To always get the same Sobol points
tkwargs = {
    "dtype": torch.double,
    "device": torch.device("cuda" if torch.cuda.is_available() else "cpu"),
}

# Sparsity target: the all-zeros composition (no solvent added).
target_point = torch.tensor([0.0 for _ in range(length)], **tkwargs)
print('target_point', target_point)

SURROGATE_CLASS = SaasFullyBayesianSingleTaskGP

from ax.service.ax_client import AxClient, ObjectiveProperties

ax_client.create_experiment(
        name="solventproject",

        parameters=[
            {
                "name": solvent_names_minus1[i],
                "type": "range",
                "bounds": [float(range_min_minus1[i]), float(range_max_minus1[i])],
                "value_type": "float",  # Optional, defaults to inference from type of "bounds".
                "log_scale": False,  # Optional, defaults to False.
            }
            for i in range(len(solvent_names_minus1))
        ],
        objectives={"blend_score": ObjectiveProperties(minimize=False)},
        parameter_constraints=[sum_str],  # Optional.
        outcome_constraints=["lnorm <= 0.00"],  # Optional.
    )
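For context, `sum_str` is not shown above. Assuming it is meant to express the composition constraint, Ax's Service API accepts linear parameter constraints as inequality strings over parameter names, so it could be built like this (illustrative sketch; the variable names follow the snippet above):

# Illustrative sketch: build a linear composition constraint from the
# parameter names; Ax parses strings of the form "x1 + x2 + x3 <= 1.0".
sum_str = " + ".join(solvent_names_minus1) + " <= 1.0"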

from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.models.torch.botorch_modular.sebo import SEBOAcquisition
from ax.models.torch.botorch_modular.surrogate import Surrogate
from botorch.acquisition.multi_objective import qNoisyExpectedHypervolumeImprovement

gs = GenerationStrategy(
    name="SEBO_L0",
    steps=[
        GenerationStep(  # BayesOpt step
            model=Models.BOTORCH_MODULAR,
            # No limit on how many generator runs will be produced
            num_trials=-1,
            model_kwargs={  # Kwargs to pass to `BoTorchModel.__init__`
                "surrogate": Surrogate(botorch_model_class=SURROGATE_CLASS),
                "acquisition_class": SEBOAcquisition,
                "botorch_acqf_class": qNoisyExpectedHypervolumeImprovement,
                "acquisition_options": {
                    "penalty": "L0_norm",  # can be "L0_norm" or "L1_norm"
                    "target_point": target_point,
                    "sparsity_threshold": length,
                },
            },
        )
    ],
)
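A minimal usage sketch, assuming a hypothetical evaluation function `run_reaction` that returns the two metrics named above; note that in the Service API the generation strategy must be passed to `AxClient` before `create_experiment` is called:

from ax.service.ax_client import AxClient

# Sketch: construct the client with the strategy, then create the
# experiment as shown above.
ax_client = AxClient(generation_strategy=gs)
# ... ax_client.create_experiment(...) as above ...

for _ in range(30):  # hypothetical trial budget
    parameterization, trial_index = ax_client.get_next_trial()
    # `run_reaction` is a hypothetical user function returning
    # {"blend_score": <float>, "lnorm": <float>}
    ax_client.complete_trial(
        trial_index=trial_index, raw_data=run_reaction(parameterization)
    )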


sdaulton commented 2 months ago

> in most cases there are only 3 to 4 important solvents

Is it bad if the suggested arms include more than 3-4 solvents, or is this just prior knowledge you want to include? Note: using a SAAS model already encodes the prior that only a few parameters are relevant, so unless you specifically want to avoid generating arms that change many parameters, sparse BO is probably not needed.
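For reference, a plain SAAS setup without SEBO is a small change to the generation strategy from the question and supports parameter constraints out of the box (a minimal sketch using the same modular API):

from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.models.torch.botorch_modular.surrogate import Surrogate
from botorch.models import SaasFullyBayesianSingleTaskGP

# Sketch: standard (non-sparse) BO with a SAAS surrogate. The SAAS prior
# already shrinks irrelevant dimensions toward zero, and this path
# respects parameter constraints.
gs_saas = GenerationStrategy(
    name="SAAS",
    steps=[
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,  # no limit on generator runs
            model_kwargs={
                "surrogate": Surrogate(
                    botorch_model_class=SaasFullyBayesianSingleTaskGP
                ),
            },
        )
    ],
)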

Regarding using sparse BO, it looks like optimizing the L0 objective using homotopy does not support parameter constraints. There isn't a fundamental reason why one couldn't support them, though. Some options would be:

1. Use L1_norm instead of L0_norm. This may not lead to the most sparse results, but it can be used out of the box (see the sketch after this list): https://github.com/facebook/Ax/blob/a144287172e74bb5b10376548d0482d5a0ff3507/ax/models/torch/botorch_modular/sebo.py#L241-L265
2. Implement support for parameter constraints in optimize_with_homotopy: https://github.com/facebook/Ax/blob/a144287172e74bb5b10376548d0482d5a0ff3507/ax/models/torch/botorch_modular/sebo.py#L277
3. Allow setting a fixed parameter value in the differentiable relaxation of the L0 norm, and optimize without homotopy. This would require adding another argument that decouples which norm to use from how to optimize it, since these are currently coupled.
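A minimal sketch of option 1: the only change from the snippet in the question is the penalty name. The L1 penalty is differentiable and goes through the standard constrained acquisition optimization path rather than homotopy, so the parameter constraints are respected.

# Option 1 sketch: same SEBO GenerationStep as in the question, but with
# the L1 penalty, which works with parameter constraints out of the box.
acquisition_options = {
    "penalty": "L1_norm",
    "target_point": target_point,
    "sparsity_threshold": length,
}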

CompRhys commented 1 month ago

https://github.com/pytorch/botorch/pull/2588/files extends _optimize_with_homotopy to include the constraints typically available in optimize_acqf, which I believe addresses suggestion 2 here.

souravdey94 commented 1 month ago


> https://github.com/pytorch/botorch/pull/2588/files extends _optimize_with_homotopy to include the constraints typically available in optimize_acqf, which I believe addresses suggestion 2 here.

Has it been implemented in BoTorch? I am currently using outcome constraints to enforce the composition constraints.