-
Hi Daniel
If I understand correctly, the train/validation split is done once for the whole training run, using only the indices from the first fold. Once the indices are stored, for each epoch, all the batch…
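A minimal sketch of the pattern being described, assuming a scikit-learn-style `KFold` (the data and batch size here are illustrative): the first fold's indices are taken once, stored, and reused every epoch, with only the batch order changing.

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)

# Split once: keep only the first fold's train/validation indices.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
train_idx, val_idx = next(iter(kf.split(X)))

# Every epoch reuses the same stored indices; only the batching changes.
for epoch in range(3):
    rng = np.random.default_rng(epoch)
    shuffled = rng.permutation(train_idx)
    batches = [shuffled[i:i + 4] for i in range(0, len(shuffled), 4)]
```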
-
Here is the top of a [nose-timer](https://pypi.python.org/pypi/nose-timer) run on the current master:
```
sklearn.ensemble.tests.test_weight_boosting.test_sparse_classification: 6.2676s
sklearn.exter…
-
This object needs an `__iter__` method and `.n_folds` and `.shuffle` attributes; essentially the same structure as a `sklearn.cross_validation.KFold` object.
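A minimal sketch of such an object, mirroring the old `sklearn.cross_validation.KFold` interface (the class name is illustrative, not part of scikit-learn):

```python
import numpy as np

class ContiguousKFold:
    """Yields (train_indices, test_indices) pairs, like cross_validation.KFold."""

    def __init__(self, n, n_folds=3, shuffle=False, random_state=None):
        self.n = n
        self.n_folds = n_folds
        self.shuffle = shuffle
        self.random_state = random_state

    def __iter__(self):
        indices = np.arange(self.n)
        if self.shuffle:
            np.random.RandomState(self.random_state).shuffle(indices)
        folds = np.array_split(indices, self.n_folds)
        for i in range(self.n_folds):
            test = folds[i]
            train = np.concatenate(folds[:i] + folds[i + 1:])
            yield train, test

    def __len__(self):
        return self.n_folds
```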
-
I reinstalled my sklearn to the latest 0.15 and re-ran the same code with the same input and the same settings (k-fold splitting, random seed) using SVC, but the classification results are quite different (…
-
Does anybody know what a negative cross-validation accuracy means in a linear regression model? We are fitting our data to the sklearn linear regression model and getting a negative accuracy, which really make…
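One likely explanation, assuming the "accuracy" here is the default regression scorer: scikit-learn scores regressors with R², and R² is negative whenever the model predicts worse than a constant mean baseline. A small illustration:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([1.0, 2.0, 3.0, 4.0])

# Predicting the mean of y for every sample gives R^2 = 0.
mean_pred = np.full(4, y_true.mean())

# Predictions worse than the mean baseline give a negative R^2.
bad_pred = np.array([4.0, 3.0, 2.0, 1.0])
print(r2_score(y_true, bad_pred))  # prints -3.0
```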
-
The current implementation of the semi-supervised classifier in label_propagation.py assumes that the label for unlabeled data is -1 and that unlabeled samples are mixed together with labeled data.
If cross-validation is use…
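For reference, the -1 convention looks like this (a minimal sketch with illustrative data; the point of the report above is that generic cross-validation utilities would split the -1 entries along with the labeled ones):

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
# -1 marks unlabeled samples, mixed in with the labeled ones.
y = np.array([0, -1, -1, 1, -1, -1])

model = LabelPropagation()
model.fit(X, y)
print(model.transduction_)  # inferred labels for all six points
```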
-
Great work with MLBase; it proves very helpful!
Are there any thoughts on implementing stratified sampling in the future?
svs14 updated
10 years ago
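For comparison, here is what stratified sampling looks like in scikit-learn (an illustration of the idea, not MLBase code): each class is sampled in proportion, so every fold preserves the class ratios of the full dataset.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.zeros((12, 1))
y = np.array([0] * 8 + [1] * 4)  # 2:1 class imbalance

skf = StratifiedKFold(n_splits=4)
for train_idx, test_idx in skf.split(X, y):
    # Every test fold keeps the 2:1 ratio: two 0s and one 1.
    print(np.bincount(y[test_idx]))
```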
-
The two following snippets should be equivalent, but somehow cross-validation (over a linear regression model) is giving negative values for the MSE (note, the absolute value of the negative values is corr…
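This is expected behavior, assuming the scorer in use is `neg_mean_squared_error`: scikit-learn scorers follow a greater-is-better convention, so error metrics come back negated, and taking the absolute value recovers the usual MSE. A sketch:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=50)

# Scorers are "greater is better", so MSE is returned negated.
scores = cross_val_score(LinearRegression(), X, y,
                         scoring="neg_mean_squared_error", cv=5)
mse = -scores  # flip the sign to recover the usual MSE
```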
-
To my knowledge, sklearn does not currently support rigorous cross-validation of time-dependent problems. All out-of-the-box cross-validation routines will construct training folds that include fut…
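A sketch of the kind of forward-chaining split the post is asking for, where each training fold contains only observations that precede the test fold. (Later scikit-learn versions added `TimeSeriesSplit` for exactly this; the standalone generator below just illustrates the idea.)

```python
import numpy as np

def forward_chaining_splits(n_samples, n_folds):
    """Yield (train_idx, test_idx) where training data always precedes the test fold in time."""
    fold_edges = np.linspace(0, n_samples, n_folds + 1, dtype=int)
    for i in range(1, n_folds):
        train = np.arange(0, fold_edges[i])
        test = np.arange(fold_edges[i], fold_edges[i + 1])
        yield train, test
```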
-
Is there a rationale for having the default behavior of KFold be to return folds without random shuffling? This kind of threw me off...
See: https://github.com/scikit-learn/scikit-learn/blob/master/s…
wgyn updated
10 years ago
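For context, the default keeps samples in their original order, which matters whenever the data has any ordering (e.g. sorted by class or by time). A sketch of the difference, using the current `model_selection.KFold`:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.zeros((6, 1))

# Default: contiguous test folds, no shuffling.
plain = [test for _, test in KFold(n_splits=3).split(X)]
print(plain)  # [0 1], [2 3], [4 5]

# Shuffling must be requested explicitly.
shuffled = [test for _, test in
            KFold(n_splits=3, shuffle=True, random_state=0).split(X)]
```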