lccasagrande / Deep-Knowledge-Tracing

An implementation of Deep Knowledge Tracing (DKT) using TensorFlow 2.0
MIT License

AUC test #1

Closed · dahouanesrine closed this issue 5 years ago

dahouanesrine commented 6 years ago

Hi, I tried to execute your code but I didn't get the same result (test AUC 0.85). I got:

```
======== Data Summary ========
Data size: 4163
Training data size: 2665
Validation data size: 666
Testing data size: 832
Number of skills: 123

C:\Users\nesri\Anaconda3\envs\deeptens\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.

==== Training Started ====
Epoch 1/10
1/1 [==============================] - 27s 27s/step - loss: 0.6924 - val_loss: 0.6720

Epoch 00001: val_loss improved from inf to 0.67200, saving model to saved_models/ASSISTments.best.model.weights.hdf5
Epoch 2/10
1/1 [==============================] - 17s 17s/step - loss: 0.6274 - val_loss: 0.7233 - val_auc: 0.5728 - val_acc: 0.6659 - val_pre: 0.8103

Epoch 00002: val_loss did not improve from 0.67200
Epoch 3/10
1/1 [==============================] - 12s 12s/step - loss: 0.6143 - val_loss: 0.7112 - val_auc: 0.5124 - val_acc: 0.5180 - val_pre: 0.7895

Epoch 00003: val_loss did not improve from 0.67200
Epoch 4/10
1/1 [==============================] - 12s 12s/step - loss: 0.8007 - val_loss: 0.7096 - val_auc: 0.4878 - val_acc: 0.5618 - val_pre: 0.7132

Epoch 00004: val_loss did not improve from 0.67200
Epoch 5/10
1/1 [==============================] - 12s 12s/step - loss: 0.6123 - val_loss: 0.7175 - val_auc: 0.5361 - val_acc: 0.5119 - val_pre: 0.7680

Epoch 00005: val_loss did not improve from 0.67200
Epoch 6/10
1/1 [==============================] - 15s 15s/step - loss: 0.5901 - val_loss: 0.7050 - val_auc: 0.5799 - val_acc: 0.5316 - val_pre: 0.7890

Epoch 00006: val_loss did not improve from 0.67200
Epoch 7/10
1/1 [==============================] - 21s 21s/step - loss: 0.5678 - val_loss: 0.6871 - val_auc: 0.5698 - val_acc: 0.6002 - val_pre: 0.8045

Epoch 00007: val_loss did not improve from 0.67200
Epoch 8/10
1/1 [==============================] - 18s 18s/step - loss: 0.5614 - val_loss: 0.6999 - val_auc: 0.5930 - val_acc: 0.5912 - val_pre: 0.8069

Epoch 00008: val_loss did not improve from 0.67200
Epoch 9/10
1/1 [==============================] - 15s 15s/step - loss: 0.5558 - val_loss: 0.6991 - val_auc: 0.5879 - val_acc: 0.5212 - val_pre: 0.7704

Epoch 00009: val_loss did not improve from 0.67200
Epoch 10/10
1/1 [==============================] - 15s 15s/step - loss: 0.5477 - val_loss: 0.6910 - val_auc: 0.5659 - val_acc: 0.5392 - val_pre: 0.7811

Epoch 00010: val_loss did not improve from 0.67200
==== Training Done ====
==== Evaluation Started ====
1/1 [==============================] - 6s 6s/step - auc: 0.6729 - acc: 0.7172 - pre: 0.9235
==== Evaluation Done ====
```

I used `optimizer = "adagrad"`, `lstm_units = 250`, `batch_size = 20`, `epochs = 10`, `dropout_rate = 0.6`, `verbose = 1`, `validation_rate = 0.2` (portion of training data used for validation), and `testing_rate = 0.2` (portion of data used for testing). I also tried all the parameter values you provided, but I didn't get the same result. Why?
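(For context: hyperparameters like these would typically be wired into a DKT-style Keras model roughly as below. This is a generic sketch of the standard DKT architecture under the usual one-hot encoding, not necessarily this repository's exact code; the repository also tracks AUC/accuracy/precision and its own loss masking, which are omitted here.)

```python
import tensorflow as tf

num_skills = 123    # from the data summary above
lstm_units = 250
dropout_rate = 0.6

# DKT takes one-hot (skill, correctness) pairs per time step, so the input
# width is 2 * num_skills, and it predicts per-skill correctness probabilities.
model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(None, 2 * num_skills)),
    tf.keras.layers.LSTM(lstm_units, return_sequences=True, dropout=dropout_rate),
    tf.keras.layers.Dense(num_skills, activation="sigmoid"),
])
model.compile(optimizer="adagrad", loss="binary_crossentropy")
```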

lccasagrande commented 6 years ago

Did you manage to solve this?

If not, did you remove the limitation I put on the dataset in "Part 3: Building the model"?

I limited the dataset to the first 10 rows so I could test it faster. If you want to replicate my results, you will have to use the entire dataset.
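(For anyone hitting the same issue: a debugging limit like the one described above is usually a single slicing line in the data-loading step. A minimal sketch, assuming a pandas-based loader; the file path and variable names are placeholders, not the notebook's actual code:)

```python
import pandas as pd

# Load the full ASSISTments interaction log.
# The path below is a placeholder for illustration.
df = pd.read_csv("data/ASSISTments_skill_builder_data.csv")

# A debugging limit keeps only the first few rows, e.g.:
#   df = df.head(10)
# which makes the pipeline fast but trains on almost no data,
# hence a much lower test AUC.

# To replicate the reported ~0.85 AUC, leave the limit out
# and train on the entire dataset.
print(f"Rows used for training: {len(df)}")
```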