microsoft / FLAML

A fast library for AutoML and tuning. https://microsoft.github.io/FLAML/

Custom Objective Function for LGBM #1267

Closed iwroteasongforyou closed 5 months ago

iwroteasongforyou commented 5 months ago

I noticed that FLAML currently supports custom objective functions, but when I run tuning with one I see the following warning.

WARNING trial_runner.py:785 -- Trial Runner checkpointing failed: Can't pickle <function custom_loss_with_weights_flaml at 0x7f73ffa2b700>: it's not the same object as __main__.custom_loss_with_weights_flaml

Can you help me figure out why?
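
For what it's worth, my current understanding is that this pickle error shows up when the function object being serialized is no longer the object that the name custom_loss_with_weights_flaml resolves to in __main__, e.g. because the name was redefined or rebound after Ray captured a reference to the original object. A minimal sketch (made-up names, nothing FLAML-specific) that reproduces the same message:

import pickle

def custom_loss():
    pass

ref = custom_loss   # keep a reference to the original function object

def custom_loss():  # rebinding the name makes __main__.custom_loss a new object
    pass

pickle.dumps(ref)
# pickle.PicklingError: Can't pickle <function custom_loss at 0x...>:
# it's not the same object as __main__.custom_loss

If that is the cause here, one workaround I plan to try is moving custom_loss_with_weights_flaml into a separate importable module, so pickle can always resolve the name to the same object.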


(train pid=60750) [LightGBM] [Info] Using self-defined objective function
(train pid=60750) [LightGBM] [Info] Using GOSS
== Status ==
Current time: 2024-01-15 18:58:06 (running for 00:00:06.15)
Using FIFO scheduling algorithm.
Current best trial: e9aa5ebc with val_loss=1.0194935966902965 and parameters={'n_estimators': 100, 'num_leaves': 50, 'min_child_samples': 20, 'learning_rate': 0.1, 'log_max_bin': 8, 'colsample_bytree': 1.0, 'reg_alpha': 0.0009765625, 'reg_lambda': 1.0, 'verbose': -1, 'boosting_type': 'goss', 'force_row_wise': True, 'subsample': 1.0, 'top_rate': 0.2, 'other_rate': 0.1, 'learner': 'custom_lgbm'}
Result logdir: ./ray_results/train_2024-01-15_18-57-59
Number of trials: 2/120 (1 PENDING, 1 RUNNING)

(train pid=60750) [LightGBM] [Info] Using self-defined objective function
(train pid=60750) [LightGBM] [Info] Using GOSS
2024-01-15 18:58:11,506 WARNING trial_runner.py:785 -- Trial Runner checkpointing failed: Can't pickle <function custom_loss_with_weights_flaml at 0x7f73ffa2b700>: it's not the same object as __main__.custom_loss_with_weights_flaml

The custom objective function looks like this:


import numpy as np
# FLAML v2 import path; on older versions: from flaml.model import LGBMEstimator
from flaml.automl.model import LGBMEstimator


def custom_loss_with_weights_flaml(y_true, y_pred, coef):
    # LightGBM's sklearn API passes per-sample weights as the third
    # positional argument of a custom objective, so `coef` presumably
    # receives the weight array here.
    c = 0.5
    residual = y_pred - y_true

    # Fair-loss-style grad and hess
    grad = c * residual / (np.abs(residual) + c)
    hess = c**2 / (np.abs(residual) + c) ** 2

    # rmse grad and hess
    grad_rmse = residual
    hess_rmse = 1.0

    # mae grad and hess (the true MAE hessian is 0; 1.0 keeps
    # LightGBM's Newton step well defined)
    grad_mae = np.array(residual)
    grad_mae[grad_mae > 0] = 1.0
    grad_mae[grad_mae <= 0] = -1.0
    hess_mae = 1.0

    return (
        coef * grad + coef * grad_rmse + coef * grad_mae,
        coef * hess + coef * hess_rmse + coef * hess_mae,
    )


class CustomLGBMEstimator(LGBMEstimator):
    def __init__(self, **config):
        super().__init__(objective=custom_loss_with_weights_flaml, **config)
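
For completeness, this is roughly how I register and run the custom learner; the data below is synthetic just to keep the sketch self-contained, and use_ray=True matches the ray.tune trial_runner.py warning above:

import numpy as np
from flaml import AutoML

# synthetic regression data so the sketch runs end to end
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 10))
y_train = X_train @ rng.normal(size=10) + rng.normal(scale=0.1, size=1000)

automl = AutoML()
automl.add_learner(learner_name="custom_lgbm", learner_class=CustomLGBMEstimator)
automl.fit(
    X_train,
    y_train,
    task="regression",
    estimator_list=["custom_lgbm"],
    time_budget=60,
    use_ray=True,  # Ray backend; the checkpointing warning comes from ray.tune
)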