microsoft / FLAML

A fast library for AutoML and tuning. Join our Discord: https://discord.gg/Cppx2vSPVP.
https://microsoft.github.io/FLAML/
MIT License

Question: "NoneType is not callable" when fitting the model #1193

Closed: ziyangr closed this issue 1 year ago

ziyangr commented 1 year ago

Hi, thank you so much for developing such a useful package. However, I ran into an issue.

When I simply try out the model:

from flaml import AutoML
from sklearn.datasets import load_iris

# Initialize an AutoML instance
automl = AutoML()
# Specify automl goal and constraint
automl_settings = {
    "time_budget": 1,  # in seconds
    "metric": 'accuracy',
    "task": 'classification',
    "log_file_name": "iris.log",
}
X_train, y_train = load_iris(return_X_y=True)
# Train with labeled input data
automl.fit(X_train=X_train, y_train=y_train,
           **automl_settings)

it fails with:

[flaml.automl.logger: 08-21 14:16:32] {1679} INFO - task = classification
[flaml.automl.logger: 08-21 14:16:32] {1690} INFO - Evaluation method: cv
[flaml.automl.logger: 08-21 14:16:32] {1788} INFO - Minimizing error metric: 1-accuracy
[flaml.automl.logger: 08-21 14:16:32] {1900} INFO - List of ML learners in AutoML Run: ['lgbm', 'rf', 'catboost', 'xgboost', 'extra_tree', 'xgb_limitdepth', 'lrl1']
[flaml.automl.logger: 08-21 14:16:32] {2218} INFO - iteration 0, current learner lgbm
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[97], line 15
     13 X_train, y_train = load_iris(return_X_y=True)
     14 # Train with labeled input data
---> 15 automl.fit(X_train=X_train, y_train=y_train,
     16            **automl_settings)
     17 # Predict
     18 print(automl.predict_proba(X_train))

File c:\Users\r00839897\AppData\Local\Programs\Python\Python310\lib\site-packages\flaml\automl\automl.py:1925, in AutoML.fit(self, X_train, y_train, dataframe, label, metric, task, n_jobs, log_file_name, estimator_list, time_budget, max_iter, sample, ensemble, eval_method, log_type, model_history, split_ratio, n_splits, log_training_metric, mem_thres, pred_time_limit, train_time_limit, X_val, y_val, sample_weight_val, groups_val, groups, verbose, retrain_full, split_type, learner_selector, hpo_method, starting_points, seed, n_concurrent_trials, keep_search_state, preserve_checkpoint, early_stop, force_cancel, append_log, auto_augment, min_sample_size, use_ray, use_spark, free_mem_ratio, metric_constraints, custom_hp, time_col, cv_score_agg_func, skip_transform, mlflow_logging, fit_kwargs_by_estimator, **fit_kwargs)
   1923     with training_log_writer(log_file_name, append_log) as save_helper:
   1924         self._training_log = save_helper
-> 1925         self._search()
   1926 else:
   1927     self._training_log = None

File c:\Users\r00839897\AppData\Local\Programs\Python\Python310\lib\site-packages\flaml\automl\automl.py:2482, in AutoML._search(self)
   2480     state.best_config = state.init_config[0] if state.init_config else {}
   2481 elif self._use_ray is False and self._use_spark is False:
-> 2482     self._search_sequential()
   2483 else:
   2484     self._search_parallel()

File c:\Users\r00839897\AppData\Local\Programs\Python\Python310\lib\site-packages\flaml\automl\automl.py:2318, in AutoML._search_sequential(self)
...
    213 if logger.level == logging.DEBUG:
    214     # xgboost 1.6 doesn't display all the params in the model str
    215     logger.debug(f"flaml.model - {model} fit started with params {self.params}")

TypeError: 'NoneType' object is not callable

The same problem occurred when I tried:

settings = {
    "time_budget": 50,
    "metric": 'accuracy',
    "task": 'classificaion',  # sic: typo preserved from the original report
    "estimator_list": ['rf', 'catboost', 'xgboost', 'extra_tree', 'xgb_limitdepth', 'lrl1'],
}
from flaml import AutoML
automl = AutoML()
automl.fit(X_train=X_train, y_train=y_train, **settings)

It fails with:

[flaml.automl.logger: 08-21 14:34:08] {1679} INFO - task = classificaion
[flaml.automl.logger: 08-21 14:34:08] {1690} INFO - Evaluation method: cv
[flaml.automl.logger: 08-21 14:34:08] {1788} INFO - Minimizing error metric: 1-accuracy
[flaml.automl.logger: 08-21 14:34:08] {1900} INFO - List of ML learners in AutoML Run: ['rf', 'catboost', 'xgboost', 'extra_tree', 'xgb_limitdepth', 'lrl1']
[flaml.automl.logger: 08-21 14:34:08] {2218} INFO - iteration 0, current learner rf
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[102], line 1
----> 1 automl.fit(X_train = X_train, y_train = y_train, **settings )

File c:\Users\r00839897\AppData\Local\Programs\Python\Python310\lib\site-packages\flaml\automl\automl.py:1928, in AutoML.fit(self, X_train, y_train, dataframe, label, metric, task, n_jobs, log_file_name, estimator_list, time_budget, max_iter, sample, ensemble, eval_method, log_type, model_history, split_ratio, n_splits, log_training_metric, mem_thres, pred_time_limit, train_time_limit, X_val, y_val, sample_weight_val, groups_val, groups, verbose, retrain_full, split_type, learner_selector, hpo_method, starting_points, seed, n_concurrent_trials, keep_search_state, preserve_checkpoint, early_stop, force_cancel, append_log, auto_augment, min_sample_size, use_ray, use_spark, free_mem_ratio, metric_constraints, custom_hp, time_col, cv_score_agg_func, skip_transform, mlflow_logging, fit_kwargs_by_estimator, **fit_kwargs)
   1926 else:
   1927     self._training_log = None
-> 1928     self._search()
   1929 if self._best_estimator:
   1930     logger.info("fit succeeded")

File c:\Users\r00839897\AppData\Local\Programs\Python\Python310\lib\site-packages\flaml\automl\automl.py:2482, in AutoML._search(self)
   2480     state.best_config = state.init_config[0] if state.init_config else {}
   2481 elif self._use_ray is False and self._use_spark is False:
-> 2482     self._search_sequential()
   2483 else:
   2484     self._search_parallel()

File c:\Users\r00839897\AppData\Local\Programs\Python\Python310\lib\site-packages\flaml\automl\automl.py:2318, in AutoML._search_sequential(self)
   2312         search_state.search_alg.searcher.set_search_properties(
   2313             metric=None,
   2314             mode=None,
   2315             metric_target=self._state.best_loss,
   2316         )
...
--> 665 n = kf.get_n_splits()
    666 rng = np.random.RandomState(2020)
    667 budget_per_train = budget and budget / n

AttributeError: 'NoneType' object has no attribute 'get_n_splits'

Could you please tell me how to fix it?

VslBrs commented 1 year ago

Had the exact same problem this morning. What makes it weird is that I changed nothing (or at least I don't remember making any changes) in code that worked two weeks ago. I would also like a solution to this one.

sonichi commented 1 year ago

Could you run `pip install "flaml[automl]"` to install the [automl] option and retry?

ziyangr commented 1 year ago

> Could you run `pip install "flaml[automl]"` to install the [automl] option and retry?

It worked! Thank you so much!!
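For anyone checking their environment after installing, a quick sanity check might look like the sketch below. The package names are an assumption about what the [automl] extra pulls in; the exact list may differ between FLAML versions.

```python
import importlib.util

# Assumed dependencies installed by `pip install "flaml[automl]"`;
# the exact set may vary between FLAML versions.
packages = ("lightgbm", "xgboost", "sklearn", "pandas", "scipy")

for pkg in packages:
    # find_spec returns None when the package is not importable
    status = "OK" if importlib.util.find_spec(pkg) else "MISSING"
    print(f"{pkg}: {status}")
```

If any line prints MISSING, rerunning the install command above should resolve it.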

domoritz commented 1 year ago

I ran into the same error. There seems to be a required dependency that's missing in flaml alone, no?

sonichi commented 1 year ago

> I ran into the same error. There seems to be a required dependency that's missing in flaml alone, no?

Hey @domoritz! Nice to see you here :) [automl] is required for the AutoML use case. It's not required for the more generic tuning use case, so it was made optional.

domoritz commented 1 year ago

I see, makes sense. Maybe it would be nice to add a check to the automl import that verifies whether the dependencies are available.
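Such a check could look something like this sketch. The helper name, package list, and error message are all hypothetical illustrations, not FLAML's actual code:

```python
def require_optional_deps(packages, extra="automl"):
    """Raise a helpful ImportError if any of `packages` is missing.

    Hypothetical helper illustrating the suggested import-time check;
    not part of FLAML itself.
    """
    missing = []
    for pkg in packages:
        try:
            __import__(pkg)
        except ImportError:
            missing.append(pkg)
    if missing:
        # Point the user at the exact install command instead of
        # failing later with an opaque 'NoneType' error.
        raise ImportError(
            f"Missing optional dependencies {missing}. "
            f'Install them with: pip install "flaml[{extra}]"'
        )

# Example: stdlib modules are always present, so this is a no-op.
require_optional_deps(["json", "math"])
```

Failing fast at import time with an actionable message would have turned the `'NoneType' object is not callable` traceback above into a one-line fix.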

saikot-paul commented 10 months ago

For anyone having this issue: as stated above, the fix is to run `pip install "flaml[automl]"`.

vkhodygo commented 6 months ago

This is still a valid issue: automl is no longer maintained and has been in this state for at least 6 years. It also can't be built with Python 3.12.

amrungwaew commented 4 months ago

I am also having issues (running Python 3.12.1), whereas I previously used FLAML without any problems.