-
In many situations you don't have a separate test set, so you would like to use CV for _both_ evaluation and hyper-parameter tuning. In that case you need to do nested cross-validation:
``` python
for train, te…
```
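The snippet above is cut off; a minimal sketch of the same idea, assuming the current `sklearn.model_selection` API and using an SVC on the iris data purely for illustration:

``` python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

inner_cv = KFold(n_splits=3, shuffle=True, random_state=0)  # tuning
outer_cv = KFold(n_splits=5, shuffle=True, random_state=1)  # evaluation

scores = []
for train, test in outer_cv.split(X):
    # Inner loop: pick hyper-parameters using the training portion only.
    search = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=inner_cv)
    search.fit(X[train], y[train])
    # Outer loop: score the tuned model on data it has never seen.
    scores.append(search.score(X[test], y[test]))

print(np.mean(scores))
```

The key point is that the grid search only ever sees the outer training portion, so the outer score remains an unbiased estimate of the tuned model's performance.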
-
According to @ogrisel, this needs investigation.
-
``` julia
julia> (c, v, inds) = cross_validate(
inds -> compute_center(data[:, inds]), # training function
(c, inds) -> compute_rmse(c, data[:, inds]), # evaluation functio…
```
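For reference, the same higher-order pattern (a training function and an evaluation function driven by a fold generator) can be sketched in Python; `cross_validate`, `kfold`, and the two lambdas below are hand-rolled stand-ins mirroring the Julia example, not library functions:

``` python
import numpy as np

def cross_validate(estfun, evalfun, folds):
    """Fit with estfun on each training index set, score with evalfun."""
    return [evalfun(estfun(train), test) for train, test in folds]

def kfold(n, k, seed=0):
    """Yield (train_indices, test_indices) pairs for k folds."""
    perm = np.random.RandomState(seed).permutation(n)
    for test in np.array_split(perm, k):
        yield np.setdiff1d(perm, test), test

data = np.random.RandomState(1).randn(3, 100)  # 3 features x 100 samples

scores = cross_validate(
    lambda inds: data[:, inds].mean(axis=1),                              # training: fit a center
    lambda c, inds: np.sqrt(((data[:, inds] - c[:, None]) ** 2).mean()),  # evaluation: RMSE
    kfold(100, 5))
print(scores)
```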
-
The `grid_search` module now supports a list of grids and a random-sampled parameter space, and may in the future support other search algorithms. The shared purpose is tuning (or exploring) hyper-p…
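A minimal sketch of both interfaces, assuming the names these took in `sklearn.model_selection` (`GridSearchCV` accepting a list of grids, `RandomizedSearchCV` sampling distributions) and using an SVC on iris purely as a placeholder estimator:

``` python
from scipy.stats import uniform
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# A list of grids: each dict is searched independently.
param_grids = [
    {"kernel": ["linear"], "C": [0.1, 1, 10]},
    {"kernel": ["rbf"], "C": [0.1, 1, 10], "gamma": [0.01, 0.1]},
]
grid = GridSearchCV(SVC(), param_grids).fit(X, y)

# A random-sampled parameter space: distributions instead of lists.
param_space = {"C": uniform(0.1, 10), "gamma": uniform(0.01, 1)}
rand = RandomizedSearchCV(SVC(), param_space, n_iter=20, random_state=0).fit(X, y)

print(grid.best_params_, rand.best_params_)
```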
-
Hello,
My name is Iván. I've been stuck for several days on the problem I'm going to describe. I'm following Daniel Nouri's tutorial about deep learning: http://danielnouri.org/notes/category/dee…
-
I have made a simple example to check the ridge regression output weight vectors for polynomial features created from a simple concept, and the result gives wrong coefficients.
The code proposed is simple. X…
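The reporter's code is cut off above; a minimal sketch of the check being described might look like the following. Note that ridge regularization deliberately shrinks coefficients, so with a non-trivial `alpha` the recovered weights will not match the generating polynomial exactly:

``` python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 0] ** 2  # known concept: 1 + 2x + 3x^2

Xp = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

# With alpha ~ 0 ridge behaves like OLS and recovers the true weights.
model = Ridge(alpha=1e-8).fit(Xp, y)
print(model.intercept_, model.coef_)  # ~1.0, [~2.0, ~3.0]

# With a larger alpha the coefficients are shrunk toward zero by design.
print(Ridge(alpha=10.0).fit(Xp, y).coef_)
```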
-
transform.split.KFold should be able to support any class that provides the same interface as sklearn.cross_validation.KFold. We should be able both to use all of the cross-validation functions built in to s…
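Since the excerpt is cut off, here is a minimal sketch of what "same interface" means, assuming the old `sklearn.cross_validation.KFold` protocol: an iterable of `(train_indices, test_indices)` pairs with a `__len__`. `EveryOtherFold` is a made-up toy splitter:

``` python
import numpy as np

class EveryOtherFold(object):
    """Toy splitter exposing the sklearn.cross_validation.KFold interface:
    iterating yields (train_indices, test_indices) index arrays."""

    def __init__(self, n, n_folds=2):
        self.n, self.n_folds = n, n_folds

    def __iter__(self):
        indices = np.arange(self.n)
        for k in range(self.n_folds):
            test = indices[k::self.n_folds]  # every n_folds-th sample
            yield np.setdiff1d(indices, test), test

    def __len__(self):
        return self.n_folds

for train, test in EveryOtherFold(6, n_folds=2):
    print(train, test)
```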
-
``` python
>>> import numpy as np; import eights as e; from sklearn.ensemble import RandomForestClassifier
>>> e.operate.simple_clf(np.array([[1,1,1],[1,2,3]]), np.array([1,1,1]), RandomForestClassifier())
>…
```
-
Hi,
I am running a Random Forest script using sklearn. In version 0.14.1 I ran my code and got ~0.85 accuracy, but after updating to v0.16.1 I get ~0.37 accuracy.
I didn't alter the code,…
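Without the script it is hard to diagnose, but one way to rule out run-to-run randomness before blaming the upgrade is to pin every seed and compare identical splits under each version; a minimal sketch with placeholder data:

``` python
import sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
# Fix both the split and the forest's randomness so runs are comparable.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(sklearn.__version__, clf.score(X_te, y_te))
```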
-
Is there a built-in or otherwise simple way of cross-validating the net while training?
I.e.:
Split the dataset into `train-dataset` and `test-dataset`, let's say 70% / 30% respectively.
Train the network on the…
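The question is cut off, but a minimal sketch of that loop in plain scikit-learn, assuming an estimator with `partial_fit` (the question concerns a neural net, so `MLPClassifier` stands in purely for illustration):

``` python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
# 70% / 30% split, as described above.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
for epoch in range(20):
    # One pass of training; classes must be supplied on the first call.
    net.partial_fit(X_tr, y_tr, classes=np.unique(y))
    # Evaluate on the held-out 30% after every epoch.
    print(epoch, net.score(X_te, y_te))
```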