daxiongshu / Grasp-and-Lift


Per subject model #5

Open daxiongshu opened 9 years ago

daxiongshu commented 9 years ago

Hi, do you think it is a good idea that we start improving on each subject separately? per_subject_auc

For example, we could start with subject 5, HandStart?

SudalaiRajkumar commented 9 years ago

Hey Carl,

I just noticed that the competition admin is fine with "Per subject model".

Yes, then we can go for it. We need to make sure that the probability calibrations are similar across the different subjects, since the eval metric is a global AUC.
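
One simple sanity check is to put each subject's scores on a comparable scale before pooling them for the global AUC; a per-subject rank transform is one option. A minimal sketch (the arrays and helper name here are illustrative, not code we already have):

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import roc_auc_score

def per_subject_rank_normalize(preds, subjects):
    """Map each subject's raw scores to ranks in [0, 1] so the per-subject
    score distributions are comparable when pooled for the global AUC
    (illustrative only; other calibrations would work too)."""
    out = np.empty_like(preds, dtype=float)
    for s in np.unique(subjects):
        mask = subjects == s
        out[mask] = rankdata(preds[mask]) / mask.sum()
    return out

# hypothetical arrays: raw scores, binary labels, subject id per sample
# print("global AUC before:", roc_auc_score(labels, preds))
# print("global AUC after :", roc_auc_score(labels, per_subject_rank_normalize(preds, subjects)))
```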

daxiongshu commented 9 years ago

Hi, I just found that the per-subject model works very well with per-subject, per-event blending. For each subject and each event, I tried filtering on each of 4 electrodes separately, and for each electrode I trained 6 classifiers as in your best base model. Blending them, which is 12 (subjects) × 6 (events) × 4 (electrodes) × 6 (classifiers) = 1728 models, I now get a 0.934 CV base model without stacking. Your previous best base model, vali1_3_new_cv.csv, has 0.925 CV.

Now I am generating a submission file for this base model. I think we can simply blend in more electrodes and we are good to go!
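
To make the bookkeeping concrete, here is a rough sketch of the grid and of the per-pair blend (the helper name is illustrative; the actual code is what I'll push to git):

```python
import numpy as np

N_SUBJECTS, N_EVENTS, N_ELECTRODES, N_CLASSIFIERS = 12, 6, 4, 6
print(N_SUBJECTS * N_EVENTS * N_ELECTRODES * N_CLASSIFIERS)   # 1728 base models

def blend_pair(pair_preds, weights=None):
    """Blend the 4 * 6 = 24 base-model score vectors for one (subject, event)
    pair. pair_preds: array of shape (24, n_samples). With weights=None this is
    the plain average (what vali1_3_new_cv.py effectively did); the per-pair
    weight search replaces the uniform weights."""
    pair_preds = np.asarray(pair_preds, dtype=float)
    if weights is None:
        weights = np.ones(pair_preds.shape[0]) / pair_preds.shape[0]
    return np.asarray(weights) @ pair_preds
```

The CV number is then the global AUC over the concatenated blended scores of all (subject, event) pairs.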

SudalaiRajkumar commented 9 years ago

1728 models!! That is really awesome. Great job. Please put the code in git when you are free; I will learn from it.

Which four electrodes did you use??


daxiongshu commented 9 years ago

Sure, I have pushed the CV code to the bag folder. I will push the submission code if the result is good.

Actually, this 1728-model count is an overstatement. Your previous approach also used 6 different classifiers trained for each subject and each event with a bunch of electrodes, say 4 electrodes, so the previous approach literally also trained 1728 models. The difference is that the previous approach, vali1_3_new_cv.py, just took the average of these models. What I did is grid search the blending weights for each subject and each event. Specifically, for each (subject, event) pair (we have 12 × 6 = 72 pairs) there are 4 (electrodes) × 6 (classifiers) = 24 models per pair. I grid searched the weights to blend these 24 models for each pair. In the end, we have 72 different sets of weights, where each set has 24 elements.

I arbitrarily chose electrodes [i-1 for i in [1,3,25,32]]. I will test other electrodes soon.
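
Roughly, the per-pair weight search looks like the sketch below: a coarse coordinate-wise grid over the 24 weights, scored by that pair's AUC. The exact grid and search order in the bag folder may differ:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def search_blend_weights(model_preds, labels, grid=np.arange(0.0, 1.01, 0.1), n_passes=3):
    """Coordinate-wise grid search over per-model weights, maximizing AUC for one
    (subject, event) pair. model_preds: (24, n_samples) base-model scores,
    labels: binary targets for that pair. Returns a normalized weight vector."""
    model_preds = np.asarray(model_preds, dtype=float)
    weights = np.ones(model_preds.shape[0])
    for _ in range(n_passes):                      # a few sweeps are usually enough
        for i in range(model_preds.shape[0]):      # tune one weight at a time
            best_w, best_auc = weights[i], -1.0
            for w in grid:
                weights[i] = w
                auc = roc_auc_score(labels, weights @ model_preds)
                if auc > best_auc:
                    best_auc, best_w = auc, w
            weights[i] = best_w
    return weights / max(weights.sum(), 1e-12)

# per pair: 4 electrodes * 6 classifiers = 24 score vectors, 72 pairs in total
# weights_by_pair[(subj, event)] = search_blend_weights(pair_preds, pair_labels)
```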

daxiongshu commented 9 years ago

The drawback of this approach is that it requires a lot of disk space, 40 GB per electrode for all the subjects... I am trying to make space on my 2 TB machine... But I think we can manage it. :)

daxiongshu commented 9 years ago

Actually it is OK; we can delete the data after we finish all base models for all events of that (electrode, subject) pair. So I guess maybe 10 GB of disk space is enough for everything!

SudalaiRajkumar commented 9 years ago

Hmm, that's nice. I am trying a few things but nothing is working out. Probably we can combine this with our old best model and see how the average of the two performs.

SudalaiRajkumar commented 9 years ago

I couldn't get any other ideas working or get a better CV score :( I will try and let you know if something works out.

Could you please let me know our best legitimate score so far? Of the ones I stacked, 0.96559 (stack1_3_30_nn3_new_sub.7z) is by far the best score.

You are trying out a lot of new things I know. Thanks for all the hard work :)

daxiongshu commented 9 years ago

Hi, our best legitimate one is ave8.csv, 0.96789.

And I figured out why stacking is bad with my new bagged model: I overfit the validation data by using its labels both when bagging the base models and when stacking the bagged models. So I will just use one series for bagging and another for stacking.
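
A minimal sketch of the split, with synthetic stand-ins for the two held-out series (the real pipeline uses the actual per-subject, per-event base-model scores):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-ins so the sketch runs; in practice these would be the
# base-model scores and labels of two different held-out series.
rng = np.random.default_rng(0)
preds_bag_series    = rng.random((24, 5000))                 # scores on the series used for bagging
labels_bag_series   = (rng.random(5000) < 0.3).astype(int)
preds_stack_series  = rng.random((24, 5000))                 # scores on the series used for stacking
labels_stack_series = (rng.random(5000) < 0.3).astype(int)

# Level 1: search the blend weights using ONLY the bagging series' labels
# (e.g. with the weight search above -- uniform weights as a stand-in here).
weights = np.ones(24) / 24

# Level 2: fit the stacker using ONLY the stacking series' labels, on features
# that were produced without ever touching those labels.
stack_features = (weights @ preds_stack_series).reshape(-1, 1)
stacker = LogisticRegression().fit(stack_features, labels_stack_series)
```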

Don't worry about it. We are not EEG masters after all. After this I can focus on coupon, and I think we can do much better on that one!

SudalaiRajkumar commented 9 years ago

Yeah, true, it seems we need to have some understanding of the concepts as well. :(

Once EEG is done, let me take a break for two or three days, and after that I will concentrate only on coupon. I am not planning to do any other competition as of now. Let us plan to win a prize there (hopefully first). I will give my best.

Thanks, Sudalai


daxiongshu commented 9 years ago

Hey, I sincerely suggest that you take the break starting now. Since all we have left is brute-force bagging, let my machine worry about it. I have finished a run using 33 electrodes separately for each subject and each event, and it looks good so far. I think I will tune a few of the bad subjects a bit and we are done :-D

But I am still positive that we will get a top 10 with an honest score. Indeed, take some rest, my friend.

SudalaiRajkumar commented 9 years ago

Thanks a lot Carl :)

Please let me know if I can help you in some way on this. Thank you.

Thanks, Sudalai
