ai-se / Caret

compare Caret with DE

while waiting for canada... #20

Open timm opened 8 years ago

timm commented 8 years ago

you could go back to the rig that you know (the one we used for the journal submission) and check two things:

  1. try your clustering idea
    • check whether clustering the training data leads to different tunings for different clusters
    • if yes, then cluster the test data (but only AFTER clustering and tuning the training data), and use the tunings from the training cluster nearest each test cluster
      • see if that improves the baseline we reported in the journal paper
  2. compare DE with grid search on your old rig
    • run multiple tunings, using a coarse-grained grid, then a finer grain, then finer still
    • note that as the grid gets finer, grid search gets slower
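A minimal sketch of the protocol in step (1), in pure Python with a toy k-means. `tune_for` is a hypothetical stand-in for whatever tuner (DE, grid search, ...) the rig actually runs per cluster, and the data is made up. The point is the ordering: the training data is clustered and tuned first, and the test data only ever borrows tunings from its nearest training cluster.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid(rows):
    """Mean point of a list of equal-length numeric rows."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def kmeans(rows, k, iters=10):
    """Minimal k-means: returns (clusters, centroids)."""
    cents = rows[:k]                      # naive seeding, fine for a sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for r in rows:
            i = min(range(k), key=lambda j: dist(r, cents[j]))
            clusters[i].append(r)
        cents = [centroid(c) if c else cents[i]
                 for i, c in enumerate(clusters)]
    return clusters, cents

def tune_for(cluster):
    """Hypothetical: run the tuner (DE, grid search, ...) on one cluster."""
    return {"cluster_size": len(cluster)}   # placeholder for real tunings

train = [[0, 0], [0, 1], [1, 0], [9, 9], [9, 8], [8, 9]]
test  = [[0.5, 0.5], [8.5, 8.5]]

clusters, cents = kmeans(train, k=2)
tunings = [tune_for(c) for c in clusters]   # tune training clusters FIRST

for row in test:                            # only now touch the test data
    i = min(range(len(cents)), key=lambda j: dist(row, cents[j]))
    print(row, "-> tuning of training cluster", i, tunings[i])
```

In the real rig, `tune_for` would return the learner's tuned parameters, and the comparison in 1 is whether those per-cluster tunings actually differ from one another and from the global tuning.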
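For step (2), here is a sketch of coarse-to-fine grid search on a toy two-parameter objective. The `evaluate` function and the zoom-around-the-best rule are my assumptions, not the rig's; the sketch just shows why cost climbs as the grid gets finer (9, then 25, then 81 evaluations here, versus DE's fixed evaluation budget).

```python
import itertools

def grid(lo, hi, steps):
    """`steps` evenly spaced values across [lo, hi]."""
    return [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]

def evaluate(x, y):
    # Hypothetical objective; in the real rig this would train and
    # score the learner with parameters (x, y).
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2)

lo, hi = [0.0, 0.0], [1.0, 1.0]
best = None
for steps in (3, 5, 9):                     # coarse, finer, finer still
    axes = [grid(l, h, steps) for l, h in zip(lo, hi)]
    pts = list(itertools.product(*axes))
    print(f"{steps}x{steps} grid -> {len(pts)} evaluations")
    best = max(pts, key=lambda p: evaluate(*p))
    # zoom: halve each axis range, centered on the current best point
    span = [(h - l) / 2 for l, h in zip(lo, hi)]
    lo = [max(0.0, b - s / 2) for b, s in zip(best, span)]
    hi = [min(1.0, b + s / 2) for b, s in zip(best, span)]
print("best parameters:", best)
```

The evaluation count grows as (steps per axis)^(number of parameters), so with more than two tuned parameters the finer grids get expensive fast.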

now you've got some results for (2) from somewhere else, right? are they the same as, or different from, my suggestion above?

WeiFoo commented 8 years ago

A:

B:

timm commented 8 years ago

Your decision, of course, but just make sure of one thing

if your improvements via tuning are very small, then the whole experiment suffers from "meh" (i.e. who cares). i think that approach A (above) falls into this camp, right?

Now, if you've got a rig where the improvement is large (and you do... you used it for the journal), then that is the rig where it is useful to report the relative value of different tuning methods.

up to you

timm commented 8 years ago

progress?

WeiFoo commented 8 years ago

Experiments are running. The R + Python setup generates a lot of issues; I am fixing them one by one.

> On Mar 4, 2016, at 5:24 PM, Tim Menzies notifications@github.com wrote:
>
> progress?


WeiFoo commented 8 years ago

if no new errors are thrown, I think I can get the results by 22:00 @timm