xadrianzetx / optuna-distributed

Distributed hyperparameter optimization made easy
MIT License

AttributeError: Can't pickle local object '_distributable.<locals>._wrapper' #58

Closed bcollazo closed 1 year ago

bcollazo commented 1 year ago

Describe the bug

I can't run the sample code from the README.md. After running it, I get:

python optuna-distributed.py
[I 2023-01-14 19:19:16,896] A new study created in memory with name: no-name-50b57926-30b7-4a96-82d8-35a8a0f13503
Traceback (most recent call last):
  File "~/project/optuna-distributed.py", line 21, in <module>
    study.optimize(objective, n_trials=10)
  File "~/project/venv/lib/python3.9/site-packages/optuna_distributed/study.py", line 184, in optimize
    event_loop.run(terminal, timeout, catch)
  File "~/project/venv/lib/python3.9/site-packages/optuna_distributed/eventloop.py", line 61, in run
    self.manager.create_futures(self.study, self.objective)
  File "project/venv/lib/python3.9/site-packages/optuna_distributed/managers/local.py", line 63, in create_futures
    p.start()
  File "~/miniforge3/lib/python3.9/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "~/miniforge3/lib/python3.9/multiprocessing/context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "~/miniforge3/lib/python3.9/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "~/miniforge3/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "~/miniforge3/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "~/miniforge3/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 47, in _launch
    reduction.dump(process_obj, fp)
  File "~/miniforge3/lib/python3.9/multiprocessing/reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object '_distributable.<locals>._wrapper'

To Reproduce

I have an M1 (Apple silicon) Mac Mini. I created a Python 3.9 virtual environment, created a file called optuna-distributed.py, and added the code:

import random
import time

import optuna
import optuna_distributed
from dask.distributed import Client

def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_categorical("y", [-1, 0, 1])
    # Some expensive model fit happens here...
    time.sleep(random.uniform(1.0, 2.0))
    return x**2 + y

if __name__ == "__main__":
    # client = Client("<your.cluster.scheduler.address>")  # Enables distributed optimization.
    client = None  # Enables local asynchronous optimization.
    study = optuna_distributed.from_study(optuna.create_study(), client=client)
    study.optimize(objective, n_trials=10)
    print(study.best_value)

Expected behavior

Logs and the study.best_value printed to the console.
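For context: the spawn start method (the default on macOS since Python 3.8) must pickle the function it sends to each worker process, and the standard pickle module cannot serialize a function defined inside another function. A minimal sketch (the function names here simply mirror those in the traceback, not the library's actual internals) reproduces the same error in isolation:

```python
import pickle

def _distributable():
    # A nested (local) function, analogous to '_wrapper' in the traceback.
    def _wrapper():
        pass
    return _wrapper

try:
    pickle.dumps(_distributable())
except AttributeError as exc:
    # Prints: Can't pickle local object '_distributable.<locals>._wrapper'
    print(exc)
```

This is why the crash happens inside multiprocessing's ForkingPickler: on Linux the default fork start method inherits the parent's memory and never needs to pickle the wrapper, so the bug only surfaces on platforms that spawn.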

xadrianzetx commented 1 year ago

Hi! Thanks for raising this. I've pushed a patch in #60 and enabled multi-platform testing in #59. I don't have a Mac to test the example on, so could you install the current main branch to confirm that the issue has been resolved? After that I'll publish a new release.
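A typical way to install the current main branch (assuming the repository's default GitHub URL) would be:

```shell
# Install optuna-distributed directly from the main branch to test the unreleased fix.
pip install --upgrade "git+https://github.com/xadrianzetx/optuna-distributed.git"
```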

xadrianzetx commented 1 year ago

Fix released in v0.3.0.