Closed: timm closed this issue 9 years ago
To make space for that text you could trim elsewhere, but let's see how much space you need. Maybe you can squeeze it into the end of p10.
The main point of Arcuri and Fraser's guideline is to use basic ML techniques for tuning: partition the data into training and test sets, and if it's a small data set, use k-fold cross-validation for tuning, and so forth.
In our paper, section 3.1 explains how we split the data sets and do the tuning. Also, those basic ML techniques are standard practice for folks in the defect prediction field who use data mining; there is nothing new there, and they should all know that material.
If we do need to add some guidelines, I should come up with something new.
So what is your call on why we use historical versioning rather than k-fold?
K-fold cross-validation mixes up older and newer data, so data from the future may be used to tune models that are then tested on past data. Also, we're not sure how much improvement k-fold would bring compared with our current versioning scheme, and one shortcoming of k-fold is that it needs more time.
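A minimal sketch of the contrast being made here, using made-up release labels (the release names, fold count, and split helpers are illustrative, not from the paper): shuffled-style k-fold lets later releases end up in the training portion for a fold tested on earlier releases, while a historical (temporal) split guarantees training data strictly precedes the test data.

```python
# Illustrative sketch: k-fold mixes eras; a temporal split does not.
# Release labels and helpers are hypothetical examples, not the paper's code.

def kfold_splits(items, k):
    """Naive k-fold: fold i takes every k-th item, so folds mix old and new."""
    folds = [items[i::k] for i in range(k)]
    return [(sum(folds[:i] + folds[i + 1:], []), folds[i]) for i in range(k)]

def temporal_split(items):
    """Historical versioning: tune on older releases, test on the newest."""
    return items[:-1], items[-1:]

releases = ["v1.0", "v1.1", "v2.0", "v2.1", "v3.0", "v3.1"]

# Under k-fold, the fold containing v1.0 is trained with v3.x data:
# "future" releases leak into the training set for a past test set.
train, test = kfold_splits(releases, 3)[0]

# Under the temporal split, everything in the training set predates the test set.
old, new = temporal_split(releases)
```

The point is not that k-fold is wrong in general, only that with time-ordered release data it can evaluate a tuning on data older than what it was trained on, which the historical split avoids by construction.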
Added; still needs revision.
I will revise.
Like section 5 of http://www.evosuite.org/wp-content/papercite-data/pdf/ssbse11_tuning.pdf
http://dl.acm.org/citation.cfm?id=2042252