facebook / Ax

Adaptive Experimentation Platform
https://ax.dev
MIT License

Question : Generation 12 trials #2553

Closed Fa20 closed 2 months ago

Fa20 commented 3 months ago

Hello Ax team, I have 12 samples that satisfy the parameter constraints. Can I use them instead of the ones generated by Ax to speed up the process, i.e. attach them and then evaluate them?

Abrikosoff commented 3 months ago

In principle you should attach and evaluate these 12 samples before you initiate generation; in that case you can bypass Sobol sampling and go directly to BoTorch.

Fa20 commented 3 months ago

@Abrikosoff Thank you so much. Could you please show how with an example? E.g., for multi-objective functions as in the Service API tutorial: first we create an experiment, then define the evaluate function, then run a for loop to generate trials. But I need to use my own data, 12 samples generated within the parameter ranges, evaluate them in Ax, and then continue with the BoTorch step.

Fa20 commented 3 months ago

@Abrikosoff By the way, the 12 samples I have were also generated randomly with the Sobol method, but outside Ax, and they satisfy the parameter constraints.
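(For reference, this kind of external Sobol sample can be produced and constraint-filtered with SciPy alone. A minimal sketch, not from the thread: the constraint `x1 + x2 <= 2.0` is the one used in the experiment below, and `scipy.stats.qmc` is assumed to be available.)

```python
import numpy as np
from scipy.stats import qmc

# Draw 2**4 = 16 scrambled Sobol points in the 6-dimensional unit cube.
sampler = qmc.Sobol(d=6, scramble=True, seed=0)
points = sampler.random_base2(m=4)

# Keep only the points satisfying the parameter constraint x1 + x2 <= 2.0.
feasible = points[points[:, 0] + points[:, 1] <= 2.0]

# Convert to the dict-of-parameters form that ax_client.attach_trial expects,
# keeping at most 12 candidates.
more_trials = [{f"x{i + 1}": float(p[i]) for i in range(6)} for p in feasible[:12]]
print(len(more_trials))  # → 12
```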

Fa20 commented 3 months ago

@Abrikosoff Do you mean like this?


```
from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.service.ax_client import AxClient, ObjectiveProperties
from ax.utils.measurement.synthetic_functions import hartmann6

# Skip Sobol entirely: a single BoTorch step running for an unlimited
# number of trials.
gs_subs = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
            max_parallelism=3,
        ),
    ],
)
ax_client = AxClient(generation_strategy=gs_subs)
ax_client.create_experiment(
    name="hartmann_test_experiment",
    parameters=[
        {
            "name": "x1",
            "type": "range",
            "bounds": [0.0, 1.0],
            "value_type": "float",  # Optional, defaults to inference from type of "bounds".
            "log_scale": False,  # Optional, defaults to False.
        },
        {
            "name": "x2",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x3",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x4",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x5",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
        {
            "name": "x6",
            "type": "range",
            "bounds": [0.0, 1.0],
        },
    ],
    objectives={"hartmann6": ObjectiveProperties(minimize=True)},
    parameter_constraints=["x1 + x2 <= 2.0"],  # Optional.
    outcome_constraints=["l2norm <= 1.25"],  # Optional.
)

import numpy as np

def evaluate(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
    # In our case, standard error is 0, since we are computing a synthetic function.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x**2).sum()), 0.0)}

# NOTE: the question mentions 12 samples, but only 9 are listed here.
more_trials = [
    {'x1': 0.737728, 'x2': 0.670887, 'x3': 0.827904, 'x4': 0.500866, 'x5': 0.602531, 'x6': 0.778949},
    {'x1': 0.081486, 'x2': 0.543186, 'x3': 0.371315, 'x4': 0.612374, 'x5': 0.329075, 'x6': 0.539914},
    {'x1': 0.003824, 'x2': 0.409721, 'x3': 0.801538, 'x4': 0.118753, 'x5': 0.574206, 'x6': 0.059005},
    {'x1': 0.054105, 'x2': 0.399132, 'x3': 0.417247, 'x4': 0.530224, 'x5': 0.232824, 'x6': 0.576554},
    {'x1': 0.054432, 'x2': 0.339886, 'x3': 0.428452, 'x4': 0.491073, 'x5': 0.183822, 'x6': 0.611097},
    {'x1': 0.829043, 'x2': 0.127095, 'x3': 0.50696, 'x4': 0.629747, 'x5': 0.59874, 'x6': 0.169405},
    {'x1': 0.958742, 'x2': 0.457417, 'x3': 0.548611, 'x4': 0.780217, 'x5': 0.520459, 'x6': 0.249077},
    {'x1': 0.555118, 'x2': 0.159028, 'x3': 0.77028, 'x4': 0.324063, 'x5': 0.555323, 'x6': 0.996741},
    {'x1': 0.976046, 'x2': 0.120265, 'x3': 0.711264, 'x4': 0.819295, 'x5': 0.220924, 'x6': 0.159485},
]

# Attach and evaluate the externally generated samples first.
for trial_params in more_trials:
    _, trial_index = ax_client.attach_trial(parameters=trial_params)
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(trial_params))

# Then continue with model-based generation.
for _ in range(6):
    parameterization, trial_index = ax_client.get_next_trial()
    # Local evaluation here can be replaced with deployment to an external system.
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(parameterization))
```
Fa20 commented 3 months ago

@danielcohenlive Could you please help here? Thanks!

Abrikosoff commented 3 months ago

> @Abrikosoff Do you mean like this?
>
> […]

I think that should be fine?

Fa20 commented 3 months ago

@Abrikosoff And in case someone needs 5 trials using Sobol before going to BoTorch, do we need to update the `gs_subs` generation strategy above, and how?

Abrikosoff commented 3 months ago

Defining something like this should work:

```
generation_strategy = GenerationStrategy(
    steps=[
        GenerationStep(
            model=Models.SOBOL,
            num_trials=5,
        ),
        GenerationStep(
            model=Models.BOTORCH_MODULAR,
            num_trials=-1,
        ),
    ]
)
```