-
First off, thanks for building this - this is awesome and it'll definitely help my workflow.
Currently, for cross-validation, I can only pass in the number of folds that I want to repeat my `cv` fo…
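The request is cut off, but it reads like asking for more control than a bare fold count. For reference, scikit-learn's `cv` arguments already accept either an integer or a splitter object; a minimal sketch (the model and dataset here are just placeholders):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# cv as an integer: scikit-learn picks a default splitter with that many folds
scores_int = cross_val_score(model, X, y, cv=5)

# cv as a splitter object: full control over shuffling, seeding, etc.
splitter = KFold(n_splits=5, shuffle=True, random_state=0)
scores_obj = cross_val_score(model, X, y, cv=splitter)

print(len(scores_int), len(scores_obj))  # 5 5
```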
-
Running crossValidate.py with the latest version of sklearn gave this error:
> Traceback (most recent call last):
> File "crossValidate.py", line 29, in
> main()
> File "crossValidate.py", lin…
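The traceback is truncated, but old scripts run against a recent scikit-learn most commonly fail because the `sklearn.cross_validation` module was removed in 0.20. If that is the cause here, updating the import fixes it:

```python
# sklearn.cross_validation was deprecated in 0.18 and removed in 0.20;
# the same splitters now live in sklearn.model_selection.
from sklearn.model_selection import KFold  # was: from sklearn.cross_validation import KFold
import numpy as np

X = np.arange(18).reshape(9, 2)
kf = KFold(n_splits=3)  # note: the old API took the sample count, KFold(n, n_folds=3)
for train_idx, test_idx in kf.split(X):
    print(len(train_idx), len(test_idx))  # 6 3 for each of the three folds
```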
-
Hi there,
Wonderful idea you've implemented here. I was wondering whether it would be possible to perform K-fold cross-validation?
Thanks
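For what it's worth, K-fold is straightforward to layer on top of scikit-learn's splitter; a generic sketch with toy data (the per-fold training step is left as a comment):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)   # toy feature matrix
y = np.arange(10) % 2              # toy labels

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(X)):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # fit and evaluate the model on this fold here
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```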
-
**Describe the issue**:
`KFold.split` doesn't support dask dataframes. With the recent integration of dask into libraries such as xgboost and optuna, it would be very useful if it did. The error message acknowle…
-
The previous version of turicreate (graphlab-create-2.1) had a [cross validation module](https://turi.com/products/create/docs/graphlab.toolkits.cross_validation.html) that included cross_validati…
-
from sklearn.**cross_validation** import KFold
↓
from sklearn.**model_selection** import KFold
-
Currently, we hardcode `model.cross_validation.CrossValidationScore` to use `sklearn.cross_validation.KFold`. We should be able to wrap arbitrary sklearn cross-validators.
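A rough sketch of the idea: accept any object implementing the scikit-learn splitter protocol (`split`/`get_n_splits`) instead of a hardcoded `KFold`. The class and parameter names below are illustrative, not the project's real internals:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import KFold, StratifiedKFold

class CrossValidationScore:
    """Illustrative wrapper that accepts any scikit-learn-style splitter."""

    def __init__(self, model_factory, splitter=None):
        self.model_factory = model_factory
        # default to 5-fold, but any splitter (StratifiedKFold, GroupKFold,
        # TimeSeriesSplit, ...) can be passed in
        self.splitter = splitter if splitter is not None else KFold(n_splits=5)

    def score(self, X, y):
        scores = []
        for train_idx, test_idx in self.splitter.split(X, y):
            model = self.model_factory()          # fresh model per fold
            model.fit(X[train_idx], y[train_idx])
            scores.append(model.score(X[test_idx], y[test_idx]))
        return float(np.mean(scores))

# demo with a trivial model on balanced toy data
X = np.arange(40).reshape(20, 2)
y = np.array([0, 1] * 10)
cv = CrossValidationScore(lambda: DummyClassifier(strategy="most_frequent"),
                          splitter=StratifiedKFold(n_splits=5))
mean_score = cv.score(X, y)
print(mean_score)  # 0.5: the dummy model gets half of each balanced test fold
```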
-
**Is your feature request related to a problem? Please describe.**
SKLearn has easy KFold functionality to create folds across a dataset: https://scikit-learn.org/stable/modules/generated/sklearn.mod…
-
When I run your file, I get the error "For Kfold cross-validation, the third input must be a positive integer." I have tried everything, but the error remains the same. Can you please help?
-
The current SER evaluation only splits the data into five folds using KFold. It should instead split by session, use each session as the test set, and report the average scores (UA, WA) for benchmarking.
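For session-wise evaluation, scikit-learn's `LeaveOneGroupOut` does exactly this when the session id is passed as the group label; a sketch with toy stand-ins for the SER features:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# toy stand-ins: 10 utterances across 5 recording sessions
X = np.random.rand(10, 3)                  # acoustic features (placeholder)
y = np.array([0, 1] * 5)                   # emotion labels (placeholder)
sessions = np.repeat([1, 2, 3, 4, 5], 2)   # session id per utterance

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=sessions):
    # train on the four remaining sessions, evaluate UA/WA on the
    # held-out session, and average the per-session scores at the end
    print("held-out session:", np.unique(sessions[test_idx]))
```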