guillaume-chevalier / LSTM-Human-Activity-Recognition

Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM RNN. Classifying the type of movement amongst six activity categories - Guillaume Chevalier
MIT License

Different Performance Using the Current Version of lstm.py with TensorFlow r1.0 #8

Closed zhaowenyi94 closed 7 years ago

zhaowenyi94 commented 7 years ago

To fit the current code to the newly released TensorFlow r1.0, I made several modifications to the code:

In the Loading Function

#line 25:
file = open(signal_type_path, 'rb')    ===>>>     file = open(signal_type_path, 'r')

#line 40:
file = open(y_path, 'rb')     ===>>>    file = open(y_path, 'r')
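
For reference, a minimal sketch of what the file loading could look like with this change applied, assuming the UCI HAR dataset's plain-text signal files (one row of space-separated floats per line); `load_signal` is only an illustrative helper name, and the parsing mirrors the `row.replace(...).split(...)` pattern used in the repo's load_X:

```python
import numpy as np

def load_signal(signal_type_path):
    # Text mode ('r') yields str rows in both Python 2 and 3, which is what
    # the string-based parsing below expects (binary mode yields bytes in Python 3).
    with open(signal_type_path, 'r') as file:
        rows = [
            np.array(row.replace('  ', ' ').strip().split(' '), dtype=np.float32)
            for row in file
        ]
    return np.array(rows)
```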

In the LSTM_NETWORK() Function

#line 110:     
hidden = tf.split(0, config.n_steps, hidden)     ===>>>    hidden = tf.split(hidden, config.n_steps, 0)

#line 114    
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(config.n_hidden, forget_bias=1.0)    ===>>>    lstm_cell = tf.contrib.rnn.BasicLSTMCell(config.n_hidden, forget_bias=1.0)

#line 117 
lsmt_layers = tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * 2)    ===>>>    lsmt_layers = tf.contrib.rnn.MultiRNNCell([lstm_cell] * 2)

#line 120
outputs, _ = tf.nn.rnn(lsmt_layers, hidden, dtype=tf.float32)    ===>>>    outputs, _ = tf.contrib.rnn.static_rnn(lsmt_layers, hidden, dtype=tf.float32)
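
Put together, a minimal sketch of the LSTM part of the graph under TensorFlow r1.0 with the replacements above (`hidden` and `config` are assumed to be the same objects as in the repo's lstm.py, i.e. the flattened hidden-layer tensor of shape (n_steps * batch_size, n_hidden) and the configuration object; the wrapper function name is only for illustration):

```python
import tensorflow as tf  # TensorFlow r1.0

def lstm_layers_r1(hidden, config):
    # r1.0 signature: tf.split(value, num_or_size_splits, axis)
    # -> a Python list of n_steps tensors, one per time step, as static_rnn expects.
    hidden = tf.split(hidden, config.n_steps, 0)

    # The RNN cell classes moved from tf.nn.rnn_cell to tf.contrib.rnn in r1.0.
    lstm_cell = tf.contrib.rnn.BasicLSTMCell(config.n_hidden, forget_bias=1.0)
    lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell] * 2)

    # tf.nn.rnn was renamed to tf.contrib.rnn.static_rnn in r1.0.
    outputs, _ = tf.contrib.rnn.static_rnn(lstm_cells, hidden, dtype=tf.float32)
    return outputs
```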

In main()

#line 216         
tf.nn.softmax_cross_entropy_with_logits(pred_Y, Y)) + l2    ===>>>    tf.nn.softmax_cross_entropy_with_logits(labels=pred_Y,logits= Y)) + l2

#line 228
tf.initialize_all_variables().run()    ===>>>    tf.global_variables_initializer().run()

However, when I ran the code, the performance was not as good as shown in the README, and I want to know whether my modifications contain mistakes. The results are shown below:

traing iter: 0, test accuracy : 0.34781134128570557, loss : 1.3058252334594727
traing iter: 1, test accuracy : 0.3338988721370697, loss : 1.5186371803283691
traing iter: 2, test accuracy : 0.287750244140625, loss : 1.7945531606674194
traing iter: 3, test accuracy : 0.2789277136325836, loss : 2.190826416015625
traing iter: 4, test accuracy : 0.36274176836013794, loss : 2.607555866241455
traing iter: 5, test accuracy : 0.3366135060787201, loss : 2.898186206817627
traing iter: 6, test accuracy : 0.235154390335083, loss : 3.007314443588257
traing iter: 7, test accuracy : 0.18154054880142212, loss : 3.0111827850341797
traing iter: 8, test accuracy : 0.18052256107330322, loss : 2.9800398349761963
traing iter: 9, test accuracy : 0.18052256107330322, loss : 2.953343391418457
traing iter: 10, test accuracy : 0.18052256107330322, loss : 2.934436559677124
traing iter: 11, test accuracy : 0.18052256107330322, loss : 2.927518844604492
traing iter: 12, test accuracy : 0.18052256107330322, loss : 2.9316229820251465
traing iter: 13, test accuracy : 0.18052256107330322, loss : 2.935426712036133
traing iter: 14, test accuracy : 0.18052256107330322, loss : 2.9258742332458496
traing iter: 15, test accuracy : 0.18052256107330322, loss : 2.9044976234436035
traing iter: 16, test accuracy : 0.18052256107330322, loss : 2.878373622894287
traing iter: 17, test accuracy : 0.18052256107330322, loss : 2.850264310836792
traing iter: 18, test accuracy : 0.18052256107330322, loss : 2.820138454437256
traing iter: 19, test accuracy : 0.18052256107330322, loss : 2.787750244140625
traing iter: 20, test accuracy : 0.18052256107330322, loss : 2.753265380859375
traing iter: 21, test accuracy : 0.18052256107330322, loss : 2.717087507247925
traing iter: 22, test accuracy : 0.18052256107330322, loss : 2.6796491146087646
traing iter: 23, test accuracy : 0.18052256107330322, loss : 2.6416709423065186
traing iter: 24, test accuracy : 0.18052256107330322, loss : 2.6035842895507812
traing iter: 25, test accuracy : 0.18052256107330322, loss : 2.5656495094299316
traing iter: 26, test accuracy : 0.18052256107330322, loss : 2.5279884338378906
traing iter: 27, test accuracy : 0.18052256107330322, loss : 2.4905736446380615
traing iter: 28, test accuracy : 0.18052256107330322, loss : 2.453395366668701
traing iter: 29, test accuracy : 0.18052256107330322, loss : 2.416445732116699
traing iter: 30, test accuracy : 0.18052256107330322, loss : 2.3797318935394287
traing iter: 31, test accuracy : 0.18052256107330322, loss : 2.3432376384735107
traing iter: 32, test accuracy : 0.18052256107330322, loss : 2.3069679737091064
traing iter: 33, test accuracy : 0.18052256107330322, loss : 2.27091646194458
traing iter: 34, test accuracy : 0.18052256107330322, loss : 2.235081911087036
traing iter: 35, test accuracy : 0.18052256107330322, loss : 2.1994683742523193
traing iter: 36, test accuracy : 0.18052256107330322, loss : 2.164074182510376
traing iter: 37, test accuracy : 0.18052256107330322, loss : 2.1289024353027344
traing iter: 38, test accuracy : 0.18052256107330322, loss : 2.0939483642578125
traing iter: 39, test accuracy : 0.18052256107330322, loss : 2.059211492538452
traing iter: 40, test accuracy : 0.18052256107330322, loss : 2.0247159004211426
traing iter: 41, test accuracy : 0.18052256107330322, loss : 1.9904437065124512
traing iter: 42, test accuracy : 0.18052256107330322, loss : 1.9563994407653809
traing iter: 43, test accuracy : 0.18052256107330322, loss : 1.9225943088531494
traing iter: 44, test accuracy : 0.18052256107330322, loss : 1.889019250869751
traing iter: 45, test accuracy : 0.18052256107330322, loss : 1.8556859493255615
traing iter: 46, test accuracy : 0.18052256107330322, loss : 1.8225984573364258
traing iter: 47, test accuracy : 0.18052256107330322, loss : 1.7897469997406006
traing iter: 48, test accuracy : 0.18052256107330322, loss : 1.757143259048462
traing iter: 49, test accuracy : 0.18052256107330322, loss : 1.7247881889343262
traing iter: 50, test accuracy : 0.18052256107330322, loss : 1.6926804780960083
traing iter: 51, test accuracy : 0.18052256107330322, loss : 1.6608327627182007
traing iter: 52, test accuracy : 0.18052256107330322, loss : 1.6292425394058228
traing iter: 53, test accuracy : 0.18052256107330322, loss : 1.5979050397872925
traing iter: 54, test accuracy : 0.18052256107330322, loss : 1.566849946975708
traing iter: 55, test accuracy : 0.18052256107330322, loss : 1.536041498184204
traing iter: 56, test accuracy : 0.18052256107330322, loss : 1.5055114030838013
traing iter: 57, test accuracy : 0.18052256107330322, loss : 1.4752501249313354
traing iter: 58, test accuracy : 0.18052256107330322, loss : 1.4452615976333618
traing iter: 59, test accuracy : 0.18052256107330322, loss : 1.4155560731887817
traing iter: 60, test accuracy : 0.18052256107330322, loss : 1.386133074760437
traing iter: 61, test accuracy : 0.18052256107330322, loss : 1.3569962978363037
traing iter: 62, test accuracy : 0.18052256107330322, loss : 1.3281437158584595
traing iter: 63, test accuracy : 0.18052256107330322, loss : 1.299586534500122
traing iter: 64, test accuracy : 0.18052256107330322, loss : 1.2713215351104736
traing iter: 65, test accuracy : 0.18052256107330322, loss : 1.2433592081069946
traing iter: 66, test accuracy : 0.18052256107330322, loss : 1.2156893014907837
traing iter: 67, test accuracy : 0.18052256107330322, loss : 1.1883275508880615
traing iter: 68, test accuracy : 0.18052256107330322, loss : 1.1612651348114014
traing iter: 69, test accuracy : 0.18052256107330322, loss : 1.134517788887024
traing iter: 70, test accuracy : 0.18052256107330322, loss : 1.108081340789795
traing iter: 71, test accuracy : 0.18052256107330322, loss : 1.0819562673568726
traing iter: 72, test accuracy : 0.18052256107330322, loss : 1.0561437606811523
traing iter: 73, test accuracy : 0.18052256107330322, loss : 1.030653953552246
traing iter: 74, test accuracy : 0.18052256107330322, loss : 1.0054810047149658
traing iter: 75, test accuracy : 0.18052256107330322, loss : 0.9806308746337891
traing iter: 76, test accuracy : 0.18052256107330322, loss : 0.9561023712158203
traing iter: 77, test accuracy : 0.18052256107330322, loss : 0.9319024085998535
traing iter: 78, test accuracy : 0.18052256107330322, loss : 0.9080308079719543
traing iter: 79, test accuracy : 0.18052256107330322, loss : 0.8844877481460571
traing iter: 80, test accuracy : 0.18052256107330322, loss : 0.8612725138664246
traing iter: 81, test accuracy : 0.18052256107330322, loss : 0.8383844494819641
traing iter: 82, test accuracy : 0.18052256107330322, loss : 0.8158326148986816
traing iter: 83, test accuracy : 0.18052256107330322, loss : 0.7936134934425354
traing iter: 84, test accuracy : 0.18052256107330322, loss : 0.7717282772064209
traing iter: 85, test accuracy : 0.18052256107330322, loss : 0.750174880027771
traing iter: 86, test accuracy : 0.18052256107330322, loss : 0.7289565801620483
traing iter: 87, test accuracy : 0.18052256107330322, loss : 0.7080764770507812
traing iter: 88, test accuracy : 0.18052256107330322, loss : 0.6875315308570862
traing iter: 89, test accuracy : 0.18052256107330322, loss : 0.667317271232605
traing iter: 90, test accuracy : 0.18052256107330322, loss : 0.6474432945251465
traing iter: 91, test accuracy : 0.18052256107330322, loss : 0.6279003024101257
traing iter: 92, test accuracy : 0.18052256107330322, loss : 0.6086910367012024
traing iter: 93, test accuracy : 0.18052256107330322, loss : 0.5898177623748779
traing iter: 94, test accuracy : 0.18052256107330322, loss : 0.5712740421295166
traing iter: 95, test accuracy : 0.18052256107330322, loss : 0.5530636310577393
traing iter: 96, test accuracy : 0.18052256107330322, loss : 0.5351837277412415
traing iter: 97, test accuracy : 0.18052256107330322, loss : 0.517633318901062
traing iter: 98, test accuracy : 0.18052256107330322, loss : 0.5004111528396606
traing iter: 99, test accuracy : 0.18052256107330322, loss : 0.48351573944091797
traing iter: 100, test accuracy : 0.18052256107330322, loss : 0.46694350242614746
traing iter: 101, test accuracy : 0.18052256107330322, loss : 0.45069605112075806
traing iter: 102, test accuracy : 0.18052256107330322, loss : 0.4347696900367737
traing iter: 103, test accuracy : 0.18052256107330322, loss : 0.4191637635231018
traing iter: 104, test accuracy : 0.18052256107330322, loss : 0.403874009847641
traing iter: 105, test accuracy : 0.18052256107330322, loss : 0.3889009356498718
traing iter: 106, test accuracy : 0.18052256107330322, loss : 0.37423935532569885
traing iter: 107, test accuracy : 0.18052256107330322, loss : 0.35988837480545044
traing iter: 108, test accuracy : 0.18052256107330322, loss : 0.34584617614746094
traing iter: 109, test accuracy : 0.18052256107330322, loss : 0.33210957050323486
traing iter: 110, test accuracy : 0.18052256107330322, loss : 0.31867480278015137
traing iter: 111, test accuracy : 0.18052256107330322, loss : 0.3055408000946045
traing iter: 112, test accuracy : 0.18052256107330322, loss : 0.2927030920982361
traing iter: 113, test accuracy : 0.18052256107330322, loss : 0.28015977144241333
traing iter: 114, test accuracy : 0.18052256107330322, loss : 0.26790836453437805
traing iter: 115, test accuracy : 0.18052256107330322, loss : 0.2559434473514557
traing iter: 116, test accuracy : 0.18052256107330322, loss : 0.24426409602165222
traing iter: 117, test accuracy : 0.18052256107330322, loss : 0.2328660935163498
traing iter: 118, test accuracy : 0.18052256107330322, loss : 0.2217465490102768
traing iter: 119, test accuracy : 0.18052256107330322, loss : 0.21090266108512878
traing iter: 120, test accuracy : 0.18052256107330322, loss : 0.20032905042171478
traing iter: 121, test accuracy : 0.18052256107330322, loss : 0.1900242269039154
traing iter: 122, test accuracy : 0.18052256107330322, loss : 0.17998453974723816
traing iter: 123, test accuracy : 0.18052256107330322, loss : 0.17020505666732788
traing iter: 124, test accuracy : 0.18052256107330322, loss : 0.16068293154239655
traing iter: 125, test accuracy : 0.18052256107330322, loss : 0.15141479671001434
traing iter: 126, test accuracy : 0.18052256107330322, loss : 0.14239707589149475
traing iter: 127, test accuracy : 0.18052256107330322, loss : 0.13362593948841095
traing iter: 128, test accuracy : 0.18052256107330322, loss : 0.12509757280349731
traing iter: 129, test accuracy : 0.18052256107330322, loss : 0.11680810153484344
traing iter: 130, test accuracy : 0.18052256107330322, loss : 0.10875467956066132
traing iter: 131, test accuracy : 0.18052256107330322, loss : 0.10093227028846741
traing iter: 132, test accuracy : 0.18052256107330322, loss : 0.09333805739879608
traing iter: 133, test accuracy : 0.18052256107330322, loss : 0.08596782386302948
traing iter: 134, test accuracy : 0.18052256107330322, loss : 0.07881791889667511
traing iter: 135, test accuracy : 0.18052256107330322, loss : 0.07188472896814346
traing iter: 136, test accuracy : 0.18052256107330322, loss : 0.06516419351100922
traing iter: 137, test accuracy : 0.18052256107330322, loss : 0.058652739971876144
traing iter: 138, test accuracy : 0.18052256107330322, loss : 0.05234657600522041
traing iter: 139, test accuracy : 0.18052256107330322, loss : 0.04624189808964729
traing iter: 140, test accuracy : 0.18052256107330322, loss : 0.04033491760492325
traing iter: 141, test accuracy : 0.18052256107330322, loss : 0.034621983766555786
traing iter: 142, test accuracy : 0.18052256107330322, loss : 0.029099291190505028
traing iter: 143, test accuracy : 0.18052256107330322, loss : 0.023763025179505348
traing iter: 144, test accuracy : 0.18052256107330322, loss : 0.018609726801514626
traing iter: 145, test accuracy : 0.18052256107330322, loss : 0.013635683804750443
traing iter: 146, test accuracy : 0.18052256107330322, loss : 0.0088372603058815
traing iter: 147, test accuracy : 0.18052256107330322, loss : 0.004210382699966431
traing iter: 148, test accuracy : 0.18052256107330322, loss : -0.0002478770911693573
traing iter: 149, test accuracy : 0.18052256107330322, loss : -0.004541546106338501
traing iter: 150, test accuracy : 0.18052256107330322, loss : -0.008673999458551407
traing iter: 151, test accuracy : 0.18052256107330322, loss : -0.012648768723011017
traing iter: 152, test accuracy : 0.18052256107330322, loss : -0.01646951586008072
traing iter: 153, test accuracy : 0.18052256107330322, loss : -0.020139258354902267
traing iter: 154, test accuracy : 0.18052256107330322, loss : -0.02366192266345024
traing iter: 155, test accuracy : 0.18052256107330322, loss : -0.027040652930736542
traing iter: 156, test accuracy : 0.18052256107330322, loss : -0.03027883544564247
traing iter: 157, test accuracy : 0.18052256107330322, loss : -0.03337998315691948
traing iter: 158, test accuracy : 0.18052256107330322, loss : -0.036346666514873505
traing iter: 159, test accuracy : 0.18052256107330322, loss : -0.03918309509754181
traing iter: 160, test accuracy : 0.18052256107330322, loss : -0.04189173877239227
traing iter: 161, test accuracy : 0.18052256107330322, loss : -0.04447639361023903
traing iter: 162, test accuracy : 0.18052256107330322, loss : -0.04693935066461563
traing iter: 163, test accuracy : 0.18052256107330322, loss : -0.049284275621175766
traing iter: 164, test accuracy : 0.18052256107330322, loss : -0.051514316350221634
traing iter: 165, test accuracy : 0.18052256107330322, loss : -0.05363213270902634
traing iter: 166, test accuracy : 0.18052256107330322, loss : -0.055640846490859985
traing iter: 167, test accuracy : 0.18052256107330322, loss : -0.05754372850060463
traing iter: 168, test accuracy : 0.18052256107330322, loss : -0.059342704713344574
traing iter: 169, test accuracy : 0.18052256107330322, loss : -0.0610412135720253
traing iter: 170, test accuracy : 0.18052256107330322, loss : -0.06264205276966095
traing iter: 171, test accuracy : 0.18052256107330322, loss : -0.06414808332920074
traing iter: 172, test accuracy : 0.18052256107330322, loss : -0.06556138396263123
traing iter: 173, test accuracy : 0.18052256107330322, loss : -0.06688489019870758
traing iter: 174, test accuracy : 0.18052256107330322, loss : -0.06812205910682678
traing iter: 175, test accuracy : 0.18052256107330322, loss : -0.06927430629730225
traing iter: 176, test accuracy : 0.18052256107330322, loss : -0.07034479826688766
traing iter: 177, test accuracy : 0.18052256107330322, loss : -0.07133537530899048
traing iter: 178, test accuracy : 0.18052256107330322, loss : -0.07224904000759125
traing iter: 179, test accuracy : 0.18052256107330322, loss : -0.07308772951364517
traing iter: 180, test accuracy : 0.18052256107330322, loss : -0.07385437935590744
traing iter: 181, test accuracy : 0.18052256107330322, loss : -0.07455061376094818
traing iter: 182, test accuracy : 0.18052256107330322, loss : -0.07517953217029572
traing iter: 183, test accuracy : 0.18052256107330322, loss : -0.07574253529310226
traing iter: 184, test accuracy : 0.18052256107330322, loss : -0.07624218612909317
traing iter: 185, test accuracy : 0.18052256107330322, loss : -0.07668038457632065
traing iter: 186, test accuracy : 0.18052256107330322, loss : -0.07705892622470856
traing iter: 187, test accuracy : 0.18052256107330322, loss : -0.07738093286752701
traing iter: 188, test accuracy : 0.18052256107330322, loss : -0.07764744758605957
traing iter: 189, test accuracy : 0.18052256107330322, loss : -0.07786049693822861
traing iter: 190, test accuracy : 0.18052256107330322, loss : -0.078022301197052
traing iter: 191, test accuracy : 0.18052256107330322, loss : -0.07813508808612823
traing iter: 192, test accuracy : 0.18052256107330322, loss : -0.07819987088441849
traing iter: 193, test accuracy : 0.18052256107330322, loss : -0.07821857929229736
traing iter: 194, test accuracy : 0.18052256107330322, loss : -0.07819265872240067
traing iter: 195, test accuracy : 0.18052256107330322, loss : -0.07812502235174179
traing iter: 196, test accuracy : 0.18052256107330322, loss : -0.07801615446805954
traing iter: 197, test accuracy : 0.18052256107330322, loss : -0.07786814868450165
traing iter: 198, test accuracy : 0.18052256107330322, loss : -0.07768278568983078
traing iter: 199, test accuracy : 0.18052256107330322, loss : -0.07746139913797379
traing iter: 200, test accuracy : 0.18052256107330322, loss : -0.07720571756362915
traing iter: 201, test accuracy : 0.18052256107330322, loss : -0.07691645622253418
traing iter: 202, test accuracy : 0.18052256107330322, loss : -0.07659582793712616
traing iter: 203, test accuracy : 0.18052256107330322, loss : -0.07624495029449463
traing iter: 204, test accuracy : 0.18052256107330322, loss : -0.07586495578289032
traing iter: 205, test accuracy : 0.18052256107330322, loss : -0.0754581168293953
traing iter: 206, test accuracy : 0.18052256107330322, loss : -0.07502477616071701
traing iter: 207, test accuracy : 0.18052256107330322, loss : -0.0745663046836853
traing iter: 208, test accuracy : 0.18052256107330322, loss : -0.07408446073532104
traing iter: 209, test accuracy : 0.18052256107330322, loss : -0.07357922941446304
traing iter: 210, test accuracy : 0.18052256107330322, loss : -0.07305324822664261
traing iter: 211, test accuracy : 0.18052256107330322, loss : -0.07250723987817764
traing iter: 212, test accuracy : 0.18052256107330322, loss : -0.0719418153166771
traing iter: 213, test accuracy : 0.18052256107330322, loss : -0.07135853171348572
traing iter: 214, test accuracy : 0.18052256107330322, loss : -0.07075759023427963
traing iter: 215, test accuracy : 0.18052256107330322, loss : -0.07014109939336777
traing iter: 216, test accuracy : 0.18052256107330322, loss : -0.06950978189706802
traing iter: 217, test accuracy : 0.18052256107330322, loss : -0.06886371970176697
traing iter: 218, test accuracy : 0.18052256107330322, loss : -0.06820454448461533
traing iter: 219, test accuracy : 0.18052256107330322, loss : -0.06753383576869965
traing iter: 220, test accuracy : 0.18052256107330322, loss : -0.0668511614203453
traing iter: 221, test accuracy : 0.18052256107330322, loss : -0.06615811586380005
traing iter: 222, test accuracy : 0.18052256107330322, loss : -0.06545504927635193
traing iter: 223, test accuracy : 0.18052256107330322, loss : -0.06474266946315765
traing iter: 224, test accuracy : 0.18052256107330322, loss : -0.06402260065078735
traing iter: 225, test accuracy : 0.18052256107330322, loss : -0.06329485028982162
traing iter: 226, test accuracy : 0.18052256107330322, loss : -0.06256052106618881
traing iter: 227, test accuracy : 0.18052256107330322, loss : -0.06181925907731056
traing iter: 228, test accuracy : 0.18052256107330322, loss : -0.06107352674007416
traing iter: 229, test accuracy : 0.18052256107330322, loss : -0.06032247841358185
traing iter: 230, test accuracy : 0.18052256107330322, loss : -0.05956796929240227
traing iter: 231, test accuracy : 0.18052256107330322, loss : -0.05880892276763916
traing iter: 232, test accuracy : 0.18052256107330322, loss : -0.0580473430454731
traing iter: 233, test accuracy : 0.18052256107330322, loss : -0.057283416390419006
traing iter: 234, test accuracy : 0.18052256107330322, loss : -0.05651719868183136
traing iter: 235, test accuracy : 0.18052256107330322, loss : -0.05574985221028328
traing iter: 236, test accuracy : 0.18052256107330322, loss : -0.05498150736093521
traing iter: 237, test accuracy : 0.18052256107330322, loss : -0.05421300232410431
traing iter: 238, test accuracy : 0.18052256107330322, loss : -0.05344397947192192
traing iter: 239, test accuracy : 0.18052256107330322, loss : -0.05267596244812012
traing iter: 240, test accuracy : 0.18052256107330322, loss : -0.051908738911151886
traing iter: 241, test accuracy : 0.18052256107330322, loss : -0.05114242434501648
traing iter: 242, test accuracy : 0.18052256107330322, loss : -0.05037837475538254
traing iter: 243, test accuracy : 0.18052256107330322, loss : -0.04961588233709335
traing iter: 244, test accuracy : 0.18052256107330322, loss : -0.048856236040592194
traing iter: 245, test accuracy : 0.18052256107330322, loss : -0.04809919744729996
traing iter: 246, test accuracy : 0.18052256107330322, loss : -0.04734491556882858
traing iter: 247, test accuracy : 0.18052256107330322, loss : -0.046594373881816864
traing iter: 248, test accuracy : 0.18052256107330322, loss : -0.045847661793231964
traing iter: 249, test accuracy : 0.18052256107330322, loss : -0.04510471224784851
traing iter: 250, test accuracy : 0.18052256107330322, loss : -0.04436592012643814
traing iter: 251, test accuracy : 0.18052256107330322, loss : -0.04363199323415756
traing iter: 252, test accuracy : 0.18052256107330322, loss : -0.04290255159139633
traing iter: 253, test accuracy : 0.18052256107330322, loss : -0.04217810183763504
traing iter: 254, test accuracy : 0.18052256107330322, loss : -0.04145902022719383
traing iter: 255, test accuracy : 0.18052256107330322, loss : -0.040745168924331665
traing iter: 256, test accuracy : 0.18052256107330322, loss : -0.04003699868917465
traing iter: 257, test accuracy : 0.18052256107330322, loss : -0.03933443874120712
traing iter: 258, test accuracy : 0.18052256107330322, loss : -0.038638122379779816
traing iter: 259, test accuracy : 0.18052256107330322, loss : -0.03794777765870094
traing iter: 260, test accuracy : 0.18052256107330322, loss : -0.03726353123784065
traing iter: 261, test accuracy : 0.18052256107330322, loss : -0.036586061120033264
traing iter: 262, test accuracy : 0.18052256107330322, loss : -0.035915084183216095
traing iter: 263, test accuracy : 0.18052256107330322, loss : -0.035250455141067505
traing iter: 264, test accuracy : 0.18052256107330322, loss : -0.03459298610687256
traing iter: 265, test accuracy : 0.18052256107330322, loss : -0.03394236043095589
traing iter: 266, test accuracy : 0.18052256107330322, loss : -0.033298444002866745
traing iter: 267, test accuracy : 0.18052256107330322, loss : -0.03266187384724617
traing iter: 268, test accuracy : 0.18052256107330322, loss : -0.03203270584344864
traing iter: 269, test accuracy : 0.18052256107330322, loss : -0.031410589814186096
traing iter: 270, test accuracy : 0.18052256107330322, loss : -0.030795607715845108
traing iter: 271, test accuracy : 0.18052256107330322, loss : -0.030188273638486862
traing iter: 272, test accuracy : 0.18052256107330322, loss : -0.02958841621875763
traing iter: 273, test accuracy : 0.18052256107330322, loss : -0.028995685279369354
traing iter: 274, test accuracy : 0.18052256107330322, loss : -0.028410688042640686
traing iter: 275, test accuracy : 0.18052256107330322, loss : -0.027833130210638046
traing iter: 276, test accuracy : 0.18052256107330322, loss : -0.02726338803768158
traing iter: 277, test accuracy : 0.18052256107330322, loss : -0.02670123055577278
traing iter: 278, test accuracy : 0.18052256107330322, loss : -0.02614673227071762
traing iter: 279, test accuracy : 0.18052256107330322, loss : -0.025599848479032516
traing iter: 280, test accuracy : 0.18052256107330322, loss : -0.025060418993234634
traing iter: 281, test accuracy : 0.18052256107330322, loss : -0.024528808891773224
traing iter: 282, test accuracy : 0.18052256107330322, loss : -0.02400490641593933
traing iter: 283, test accuracy : 0.18052256107330322, loss : -0.023488491773605347
traing iter: 284, test accuracy : 0.18052256107330322, loss : -0.022979963570833206
traing iter: 285, test accuracy : 0.18052256107330322, loss : -0.02247888222336769
traing iter: 286, test accuracy : 0.18052256107330322, loss : -0.021985376253724098
traing iter: 287, test accuracy : 0.18052256107330322, loss : -0.02149956300854683
traing iter: 288, test accuracy : 0.18052256107330322, loss : -0.02102125622332096
traing iter: 289, test accuracy : 0.18052256107330322, loss : -0.02055053971707821
traing iter: 290, test accuracy : 0.18052256107330322, loss : -0.020087242126464844
traing iter: 291, test accuracy : 0.18052256107330322, loss : -0.01963147521018982
traing iter: 292, test accuracy : 0.18052256107330322, loss : -0.019183173775672913
traing iter: 293, test accuracy : 0.18052256107330322, loss : -0.018742157146334648
traing iter: 294, test accuracy : 0.18052256107330322, loss : -0.018308615311980247
traing iter: 295, test accuracy : 0.18052256107330322, loss : -0.017882268875837326
traing iter: 296, test accuracy : 0.18052256107330322, loss : -0.017463278025388718
traing iter: 297, test accuracy : 0.18052256107330322, loss : -0.017051348462700844
traing iter: 298, test accuracy : 0.18052256107330322, loss : -0.016646670177578926
traing iter: 299, test accuracy : 0.18052256107330322, loss : -0.01624903827905655

final test accuracy: 0.18052256107330322
best epoch's test accuracy: 0.36274176836013794
sitmo commented 7 years ago

Instead of reading the data in binary mode ('rb'), you are now reading it in text mode ('r'). On my machine the files are in binary format (I ran the download script in the data directory), and reading them in text mode would fail. I bet that's what's wrong: you are reading corrupt data.

btw I'm on an OSX machine.

I get the same performance as the README:

traing iter: 598, test accuracy : 0.897862255573, loss : 0.568764030933
traing iter: 599, test accuracy : 0.899219572544, loss : 0.574430882931

final test accuracy: 0.899219572544
best epoch's test accuracy: 0.91177469492
zhaowenyi94 commented 7 years ago

I also use OSX, but I got an error when I used 'rb'. Did you run into this kind of error about bytes and string types?

  File "/Users/zhaowenichi/Downloads/LSTM-Human-Activity-Recognition-master/lstm.py", line 205, in <module>
    X_train = load_X(X_train_signals_paths)
  File "/Users/zhaowenichi/Downloads/LSTM-Human-Activity-Recognition-master/lstm.py", line 41, in load_X
    row.replace('  ', ' ').strip().split(' ') for row in file
  File "/Users/zhaowenichi/Downloads/LSTM-Human-Activity-Recognition-master/lstm.py", line 41, in <listcomp>
    row.replace('  ', ' ').strip().split(' ') for row in file
TypeError: a bytes-like object is required, not 'str'
sitmo commented 7 years ago

It turns out the data files are in text format and not binary, so your "r" solution should work.

I'm using Python 2.7... are you using Python 3.5/3.6?

zhaowenyi94 commented 7 years ago

I use Python 3.5 :( I don't know how to fix it.

sitmo commented 7 years ago

Strange, I have no problem with file = open(signal_type_path, 'r') and Python 3.5.2.
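
The TypeError in the traceback above comes from Python 3's file handling, not from OS X. A minimal, project-independent illustration (the file name is just a placeholder):

```python
# Python 3: iterating a file opened in binary mode ('rb') yields bytes, and
# bytes.replace() requires bytes arguments, so passing the str '  ' raises
# "TypeError: a bytes-like object is required, not 'str'".
with open('some_text_file.txt', 'rb') as f:
    for row in f:
        print(type(row))          # <class 'bytes'> in Python 3 (str in Python 2)
        # row.replace('  ', ' ')  # TypeError in Python 3

# Text mode ('r') yields str rows in Python 3 as well, so the string-based
# parsing in load_X works as intended.
with open('some_text_file.txt', 'r') as f:
    for row in f:
        print(type(row))          # <class 'str'>
```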

guillaume-chevalier commented 7 years ago

The loss should not be negative for the cross-entropy function, since under the hood it is normally the label multiplied by the log of the prediction. That is a big hint.

The bug is in the line where you redefined the loss:

tf.nn.softmax_cross_entropy_with_logits(pred_Y, Y)) + l2    ===>>>    tf.nn.softmax_cross_entropy_with_logits(labels=pred_Y,logits= Y)) + l2

Indeed, the API changed, and the order of the arguments in your new call is wrong: you inverted them. The logits are the predictions, not the other way around; compare with the old TensorFlow r0.11 API: https://www.tensorflow.org/versions/r0.11/api_docs/python/nn/classification#softmax_cross_entropy_with_logits

Inverting those arguments seems to be a common mistake: https://github.com/carpedm20/NTM-tensorflow/issues/17

You might still have other problems hidden somewhere; I only took a quick look. I would not have expected the loss function to yield negative numbers either, but I am not sure about that, since I have not re-validated the math with the arguments inverted.
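
For clarity, a minimal sketch of the corrected loss line under TensorFlow r1.0, reusing the names from the snippet above (pred_Y are the network's outputs, i.e. the logits, Y are the one-hot ground-truth labels, l2 is the regularization term, and the tf.reduce_mean wrapper is assumed from the extra closing parenthesis in the quoted line):

```python
# Cross entropy H(y, p) = -sum_i y_i * log(p_i) is non-negative whenever y is a
# valid one-hot / probability vector, which is why a negative loss is a red flag.
# In the r1.0 keyword form: labels = ground truth, logits = predictions.
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=Y, logits=pred_Y)
) + l2
```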

zhaowenyi94 commented 7 years ago

Yes, I modified it and it works! Thanks!

AnSharypov commented 7 years ago

@zhaowenyi94 Hi! How did you modify it?