sherpa-ai / sherpa

Hyperparameter optimization that enables researchers to experiment, visualize, and scale quickly.
http://parameter-sherpa.readthedocs.io/
GNU General Public License v3.0
333 stars 54 forks

Dashboard not supported on Windows. Disable the dashboard and save the finalized study instead. How do I tackle this problem? #78

Closed michaelGK92 closed 4 years ago

LarsHH commented 4 years ago

Hi @michaelGK92 ,

Could you post your code? Then I will modify it accordingly.

Thanks, Lars

michaelGK92 commented 4 years ago

The code is from your wiki. I tested it in my Python environment, but when I run it on Windows it does not work. Here is random forest.py:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
import time
import sherpa
import sherpa.algorithms.bayesian_optimization as bayesian_optimization

parameters = [sherpa.Discrete('n_estimators', [2, 50]),
              sherpa.Choice('criterion', ['gini', 'entropy']),
              sherpa.Continuous('max_features', [0.1, 0.9])]

algorithm = bayesian_optimization.GPyOpt(max_concurrent=1,
                                         model_type='GP_MCMC',
                                         acquisition_type='EI_MCMC',
                                         max_num_trials=100)

X, y = load_breast_cancer(return_X_y=True)
study = sherpa.Study(parameters=parameters,
                     algorithm=algorithm,
                     lower_is_better=False)

for trial in study:
    print("Trial ", trial.id, " with parameters ", trial.parameters)
    clf = RandomForestClassifier(criterion=trial.parameters['criterion'],
                                 max_features=trial.parameters['max_features'],
                                 n_estimators=trial.parameters['n_estimators'],
                                 random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("Score: ", scores.mean())
    study.add_observation(trial, iteration=1, objective=scores.mean())
    study.finalize(trial)
    print(study.get_best_result())

LarsHH commented 4 years ago

Hi @michaelGK92 ,

Try this code (inside sherpa.Study I added disable_dashboard=True).

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
import time
import sherpa
import sherpa.algorithms.bayesian_optimization as bayesian_optimization

parameters = [sherpa.Discrete('n_estimators', [2, 50]),
              sherpa.Choice('criterion', ['gini', 'entropy']),
              sherpa.Continuous('max_features', [0.1, 0.9])]

algorithm = bayesian_optimization.GPyOpt(max_concurrent=1,
                                         model_type='GP_MCMC',
                                         acquisition_type='EI_MCMC',
                                         max_num_trials=100)

X, y = load_breast_cancer(return_X_y=True)
study = sherpa.Study(parameters=parameters,
                     algorithm=algorithm,
                     disable_dashboard=True,
                     lower_is_better=False)

for trial in study:
    print("Trial ", trial.id, " with parameters ", trial.parameters)
    clf = RandomForestClassifier(criterion=trial.parameters['criterion'],
                                 max_features=trial.parameters['max_features'],
                                 n_estimators=trial.parameters['n_estimators'],
                                 random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print("Score: ", scores.mean())
    study.add_observation(trial, iteration=1, objective=scores.mean())
    study.finalize(trial)
    print(study.get_best_result())

study.save(".")

Then later you can do

import sherpa
sherpa.Study.load_dashboard(".")

from the same directory.
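
If the dashboard route still gives you trouble, you can also inspect the saved study directly with pandas. Below is a minimal sketch assuming study.save(".") leaves a results.csv with an "Objective" column in that directory; both names are assumptions on my side, so adjust them to whatever files and headers you actually find there.

import pandas as pd

# Read the results table written by study.save(".").
# The 'results.csv' filename is an assumption; check the saved directory.
results = pd.read_csv("results.csv")

# Print the best trial. With lower_is_better=False the best objective is the
# maximum; the 'Objective' column name is an assumption as well.
print(results.sort_values("Objective", ascending=False).head(1))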

michaelGK92 commented 4 years ago

Thank you, it works!

AJster commented 2 years ago

I installed the Ubuntu app on my Windows machine, installed Python, and copied the file:

sudo cp -i /mnt/c/Users/.... /xyz.py

Then I run the code from Ubuntu:

python3 xyz.py

My file includes:

import tempfile

model_dir = tempfile.mkdtemp()
...
study.save(model_dir)
print(model_dir)

Opening the saved study later, e.g.:

import sherpa
sherpa.Study.load_dashboard("/tmp/tmpja_m1w61")

Removing all tmp files:

cd /tmp
find . -type d -name 'tmp*' -exec rm -r {} \; -prune
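
In case it helps, here is a small Python alternative to that find-based cleanup. It is only a sketch: it assumes the leftover study directories live under /tmp and use the default 'tmp*' prefix that tempfile.mkdtemp() produces, and it deletes every matching directory, just like the find command above.

import glob
import os
import shutil

# Delete leftover temporary directories under /tmp
# (same effect as the find command above; assumes the default 'tmp*' prefix).
for path in glob.glob("/tmp/tmp*"):
    if os.path.isdir(path):
        shutil.rmtree(path, ignore_errors=True)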