KIST-Iceberg / Iceberg

Kaggle Iceberg Challenge

Applying multiple models #16

Open kairos03 opened 6 years ago

kairos03 commented 6 years ago

LGBM implementation reference: https://www.kaggle.com/warpri81/diving-for-features

mike2ox commented 6 years ago

On image data augmentation: https://github.com/aleju/imgaug
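imgaug chains configurable augmenters; a numpy-only sketch of the same idea, using flips and 90-degree rotations (which are label-preserving for square radar patches), might look like this. The 75x75x2 shape is taken from the experiments below; the `augment` helper is illustrative:

```python
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly flip and rotate one (H, W, C) image; label-preserving."""
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)   # horizontal flip
    if rng.random() < 0.5:
        img = np.flip(img, axis=0)   # vertical flip
    k = int(rng.integers(0, 4))      # 0-3 quarter turns
    return np.rot90(img, k=k, axes=(0, 1))

rng = np.random.default_rng(220)
x = np.arange(75 * 75 * 2, dtype=np.float32).reshape(75, 75, 2)
x_aug = augment(x, rng)
print(x_aug.shape)  # augmentation never changes the shape
```

imgaug adds many more transforms (crops, elastic deformations, noise), but for a small dataset these geometric ones are the safest starting point.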

mike2ox commented 6 years ago

LGBM is said to overfit easily on small datasets, so it is not a good fit for the iceberg data. https://medium.com/@pushkarmandot/https-medium-com-pushkarmandot-what-is-lightgbm-how-to-implement-it-how-to-fine-tune-the-parameters-60347819b7fc

mike2ox commented 6 years ago

The Conv2D layers presumably fail to train because of GPU overheating (based on experience).

mike2ox commented 6 years ago

SVM

Input shape

shape | X_train: (14436, 75, 75, 9), X_test: (4812, 75, 75, 9)
shape | Y_train: (14436, 2), Y_test: (4812, 2)
shape | angle: (19248, 5)
[reshape_train] | (14436, 50625)
[reshape_test] | (4812, 50625)
[add_data_train] | (14436, 50630)
[add_data_test] | (4812, 50630)

Result

/usr/local/lib/python3.5/dist-packages/sklearn/svm/base.py:218: ConvergenceWarning: Solver terminated early (max_iter=30). Consider pre-processing your data with StandardScaler or MinMaxScaler. % self.max_iter, ConvergenceWarning)
SVC fit complete
SVC Result
[max_itr : 1.000000 | logloss : 0.69266 | train_acc : 0.53394 | test_acc : 0.52057]
[max_itr : 3.000000 | logloss : 0.69084 | train_acc : 0.52951 | test_acc : 0.53429]
[max_itr : 10.000000 | logloss : 0.69259 | train_acc : 0.53443 | test_acc : 0.52099]
[max_itr : 30.000000 | logloss : 0.69861 | train_acc : 0.53374]
[max_itr : 60.000000 | logloss : 0.69900 | train_acc : 0.53221]
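The ConvergenceWarning above recommends scaling the inputs before the SVM. A minimal sketch of how that could be wired up with an sklearn pipeline, using small synthetic data in place of the 50,630-dimensional reshaped features (which would be far too large for a toy run):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import log_loss

rng = np.random.default_rng(220)
X = rng.normal(size=(800, 20))
y = (X[:, 0] > 0).astype(int)

for max_itr in (1, 3, 10, 30, 60):
    # StandardScaler before SVC, as the warning recommends.
    clf = make_pipeline(
        StandardScaler(),
        SVC(max_iter=max_itr, probability=True, random_state=220))
    clf.fit(X, y)
    print(f"max_itr : {max_itr} | logloss : "
          f"{log_loss(y, clf.predict_proba(X)):.5f}")

# With no iteration cap the same pipeline converges cleanly.
best = make_pipeline(StandardScaler(), SVC()).fit(X, y)
print("train acc (converged):", best.score(X, y))
```

The near-0.69 logloss across every max_itr in the log suggests the SVM never got past a coin-flip prediction; capping iterations that aggressively on unscaled pixel data makes that outcome likely.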

mike2ox commented 6 years ago

Logistic Regression

Input shape

shape | X_train: (14436, 75, 75, 9), X_test: (4812, 75, 75, 9)
shape | Y_train: (14436, 2), Y_test: (4812, 2)
shape | angle: (19248, 5)
[reshape_train] | (14436, 50625)
[reshape_test] | (4812, 50625)
[add_data_train] | (14436, 50630)
[add_data_test] | (4812, 50630)

Result

[max_itr : 1.000000 | logloss : 0.64408 | train_acc : 0.77480 | test_acc : 0.64485]
[max_itr : 2.000000 | logloss : 0.66920 | train_acc : 0.82405 | test_acc : 0.66999]
[max_itr : 3.000000 | logloss : 0.67043 | train_acc : 0.81920 | test_acc : 0.66334]
[max_itr : 6.000000 | logloss : 0.68245 | train_acc : 0.85647 | test_acc : 0.66854]
[max_itr : 10.000000 | logloss : 0.89548 | train_acc : 0.90898 | test_acc : 0.68994]
[max_itr : 30.000000 | logloss : 2.59881 | train_acc : 0.92560]
[max_itr : 60.000000 | logloss : 1.71673 | train_acc : 0.96149 | test_acc : 0.67332]
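The pattern in the log (train_acc climbing toward 0.96 while logloss blows up past max_itr 10) is classic overfitting: 50,630 raw-pixel features against 14,436 samples gives the linear model far more capacity than data. A small sketch reproducing the same gap with synthetic high-dimensional data (dimensions chosen for illustration, not taken from the experiment):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(220)
# Many more features than samples, like 50,630 pixels vs 14,436 images.
X_tr = rng.normal(size=(100, 2000))
y_tr = rng.integers(0, 2, size=100)   # random labels: nothing to learn
X_te = rng.normal(size=(100, 2000))
y_te = rng.integers(0, 2, size=100)

for max_itr in (1, 3, 10, 60):
    # Weak regularization (large C) to let the model interpolate.
    clf = LogisticRegression(max_iter=max_itr, C=100.0, solver="lbfgs")
    clf.fit(X_tr, y_tr)
    print(f"max_itr : {max_itr:2d} | train_acc : {clf.score(X_tr, y_tr):.3f}"
          f" | test logloss : {log_loss(y_te, clf.predict_proba(X_te)):.3f}")
```

Training accuracy approaches 1.0 even though the labels are pure noise, while test logloss stays poor, mirroring the divergence in the log above. Stronger regularization (smaller C) or fewer features would be the standard remedies.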

mike2ox commented 6 years ago

FC Layer(only Dense)

Layer shape

[reshape] | (?, 50625)
[add_data] | (?, 50630)
[dense1] | (?, 1000)
[dense2] | (?, 200)
[dense3] | (?, 2)

Hyper params

Learning Rate: 1e-05
Batch Size: 50
Dropout Rate: 0.7
Random Seed: 220

Result

Train Start
[8.654] TRAIN EP: 00000 | loss: 1.61250 | acc: 0.53044
[9.775] VALID EP: 00000 | loss: 0.34389 | acc: 0.53406 | logloss: 0.69315
[114.134] TRAIN EP: 00019 | loss: 0.69315 | acc: 0.53052
[115.278] VALID EP: 00019 | loss: 0.34389 | acc: 0.53406 | logloss: 0.69315
[221.734] TRAIN EP: 00039 | loss: 0.69315 | acc: 0.53056
[222.918] VALID EP: 00039 | loss: 0.34389 | acc: 0.53031 | logloss: 0.69315
[329.127] TRAIN EP: 00059 | loss: 0.69315 | acc: 0.53056
[330.331] VALID EP: 00059 | loss: 0.34389 | acc: 0.53406 | logloss: 0.69315
[436.951] TRAIN EP: 00079 | loss: 0.69315 | acc: 0.53057
[438.165] VALID EP: 00079 | loss: 0.34389 | acc: 0.52656 | logloss: 0.69315
[544.588] TRAIN EP: 00099 | loss: 0.69315 | acc: 0.53058
[545.788] VALID EP: 00099 | loss: 0.34389 | acc: 0.52656 | logloss: 0.69315
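The TRAIN loss sits at exactly 0.69315 from epoch 19 onward, and the accuracy matches the class prior (~0.53). That value is ln 2, the two-class cross-entropy of a network that outputs a constant 50/50 prediction, so the dense-only model has collapsed to a constant rather than learning. A quick check of the number (ln 2 is a mathematical fact, not taken from the log):

```python
import math

# Cross-entropy of a constant (0.5, 0.5) softmax output over 2 classes:
# -log(0.5) = ln 2, regardless of the true label.
constant_loss = -math.log(0.5)
print(f"{constant_loss:.5f}")  # → 0.69315, the plateaued TRAIN loss above
```

With a learning rate of 1e-05 on 50,630 unnormalized inputs, getting stuck at this trivial solution is plausible; normalizing the inputs and raising the learning rate would be the first things to try.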

mike2ox commented 6 years ago

Conclusion