hyperdashio / hyperdash-sdk-py

Official Python SDK for Hyperdash
https://hyperdash.io

support for sklearn cross-validation #117

Open Casyfill opened 6 years ago

Casyfill commented 6 years ago

How should I "weave" the hyperdash Experiment object together with a cross-validation parameter dictionary/list?

For now I have this cell, but I'd love to pass "clean" parameters to hyperdash:

%%monitor_cell "RF GRIDSEARCH"

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV

tuned_parameters = {'n_estimators': [20, 50, 100],
                    'criterion': ['gini', 'entropy'],
                    'max_features': ['auto', 'sqrt', 0.2, 0.4],
                    'min_samples_leaf': [50],
                    'bootstrap': [True],
                    'oob_score': [True],
                    'n_jobs': [2],
                    'random_state': [2017],
                    'class_weight': ['balanced'],
                    'verbose': [1]}

# `score` (e.g. 'f1') and the train/test splits are defined in earlier cells
clf = GridSearchCV(RandomForestClassifier(), tuned_parameters, cv=5,
                   scoring=f'{score}_macro')
clf.fit(trainX, trainY)
print(clf.best_params_)

means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
    print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))

print("Detailed classification report:\n"
      "The model is trained on the full development set.\n"
      "The scores are computed on the full evaluation set.\n")

y_true, y_pred = testY, clf.predict(testX)
print(classification_report(y_true, y_pred))

(resembles the example from the sklearn readme)

andrewschreiber commented 6 years ago

I’d recommend taking a look at our Experiments API docs (https://github.com/hyperdashio/hyperdash-sdk-py#experiment-instrumentation). It will give you more fine-grained control over the start and end of your experiments.
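For illustration, a minimal sketch of what that could look like wrapped around the grid search above, using the Experiment API from the linked README (exp.param / exp.metric / exp.end); the tuned_parameters dictionary and the train split are assumed to be defined as in the cell above:

from hyperdash import Experiment
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Start the experiment explicitly instead of using %%monitor_cell
exp = Experiment("RF GRIDSEARCH")
exp.param("cv", 5)
exp.param("scoring", "f1_macro")

clf = GridSearchCV(RandomForestClassifier(), tuned_parameters, cv=5,
                   scoring="f1_macro")
clf.fit(trainX, trainY)  # trainX / trainY assumed defined in an earlier cell

# Record the winning configuration and its cross-validated score
for name, value in clf.best_params_.items():
    exp.param(name, value)
exp.metric("best_cv_score", float(clf.best_score_))

exp.end()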

Casyfill commented 6 years ago

Thanks, but it is not immediately clear how to use Experiment, neither with GridSearchCV nor with a simple loop. Trying to override a parameter in an experiment raises an exception. How should I use different sets of params within the same experiment?
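One pattern that avoids setting the same parameter twice, sketched here as an assumption rather than an official recommendation, is to open one Experiment per parameter combination, e.g. by expanding the grid with sklearn's ParameterGrid and scoring each point with cross_val_score:

from hyperdash import Experiment
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import ParameterGrid, cross_val_score

# One Experiment per parameter combination; each parameter is set only once
# per experiment. ParameterGrid expands the same dict GridSearchCV consumes.
for params in ParameterGrid(tuned_parameters):
    exp = Experiment("RF grid point")
    for name, value in params.items():
        exp.param(name, value)

    clf = RandomForestClassifier(**params)
    scores = cross_val_score(clf, trainX, trainY, cv=5, scoring="f1_macro")
    exp.metric("mean_cv_f1_macro", float(scores.mean()))

    exp.end()

Each grid point then appears as its own run in the Hyperdash dashboard, which sidesteps the exception raised by re-setting a parameter within a single experiment.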