Dear caifeng, you said that "we perform a 10-fold cross validation to evaluate the classification accuracy across all experiments. The training, testing, and validating sets are randomly partitioned following proportion 8/1/1. The total classification accuracy is calculated as the average of 10-folds cross-validations." in your paper.
What is 10-fold cross-validation? I googled it: cross-validation is a technique for evaluating predictive models by partitioning the original sample into a training set, used to train the model, and a test set, used to evaluate it. In k-fold cross-validation, the original sample is randomly partitioned into k equal-sized subsamples. So there is actually no validation set in 10-fold cross-validation.
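For reference, here is a minimal sketch of standard 10-fold cross-validation in scikit-learn (the dataset and classifier are placeholders I picked, not the ones from your paper). Note that each fold only produces a train/test split; no third validation partition appears anywhere:

```python
# Minimal sketch of standard 10-fold cross-validation (scikit-learn).
# Dataset and classifier are placeholders, not the ones from the paper.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = load_iris(return_X_y=True)
kf = KFold(n_splits=10, shuffle=True, random_state=0)

accuracies = []
for train_idx, test_idx in kf.split(X):
    # Each fold splits the data 9/10 training vs. 1/10 testing --
    # there is no separate "validation" set in this loop.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

# The total accuracy is reported as the average over the 10 folds.
print(f"mean accuracy over 10 folds: {np.mean(accuracies):.3f}")
```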
I am wondering how you can partition "the training, testing, and validating sets ... following proportion 8/1/1" within 10-fold cross-validation. Is there something wrong with your experiments?
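The only reading I can come up with is something like the sketch below, where each round holds out one fold for testing, one fold for validating, and trains on the remaining eight. This is purely my guess at what the paper might mean, not your actual procedure:

```python
# A guess at how an 8/1/1 proportion might coexist with 10 folds:
# per round, 8 folds train, 1 fold validates, 1 fold tests.
# This is only my interpretation, not the procedure from the paper.
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(100).reshape(50, 2)   # toy data, stand-in for the real features
y = np.arange(50) % 2               # toy labels

kf = KFold(n_splits=10, shuffle=True, random_state=0)
folds = [test_idx for _, test_idx in kf.split(X)]

for i in range(10):
    test_idx = folds[i]               # 1 fold for testing
    val_idx = folds[(i + 1) % 10]     # 1 fold for validating
    train_idx = np.concatenate(
        [folds[j] for j in range(10) if j not in (i, (i + 1) % 10)]
    )                                 # remaining 8 folds for training
    # ... fit on train_idx, tune on val_idx, score on test_idx ...
```

Is this what you actually did, or did you mean something else by "10-fold cross validation"?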