hughperkins / cltorch

An OpenCL backend for torch.

Getting same performance with or without cltorch? #25

Closed. aletote closed this issue 8 years ago.

aletote commented 8 years ago

Hi, I get the same speed either way. Everything is working fine, no errors, but there is no speed improvement whatsoever when using the GPU. I'm on a brand-new MacBook Pro. Can you give me some advice?

ales-MacBook-Pro:char-rnn ale$ th train.lua -opencl 1 -gpuid 0
using OpenCL on GPU 0...
loading data files...
cutting off end of data so that the batches/sequences divide evenly
reshaping tensor...
data load done. Number of data batches in train: 423, val: 23, test: 0
vocab size: 65
creating an lstm with 2 layers
Using Apple , OpenCL platform: Apple
Using OpenCL device: Iris Pro
setting forget gate biases to 1 in LSTM layer 1
setting forget gate biases to 1 in LSTM layer 2
number of parameters in the model: 240321
cloning rnn
cloning criterion
THClReduceAll.cl build log:

:9:10: warning: unused variable 'in1'
  float *in1 = &_in1;
         ^
:10:10: warning: unused variable 'out'
  float *out = &_out;
         ^
1/21150 (epoch 0.002), train_loss = 4.19803724, grad/param norm = 5.1721e-01, time/batch = 0.57s
2/21150 (epoch 0.005), train_loss = 3.93712094, grad/param norm = 1.4679e+00, time/batch = 0.27s
3/21150 (epoch 0.007), train_loss = 3.43751774, grad/param norm = 9.5793e-01, time/batch = 0.25s
4/21150 (epoch 0.009), train_loss = 3.41289299, grad/param norm = 7.5153e-01, time/batch = 0.23s
5/21150 (epoch 0.012), train_loss = 3.33699639, grad/param norm = 6.9269e-01, time/batch = 0.24s
6/21150 (epoch 0.014), train_loss = 3.37105611, grad/param norm = 5.2300e-01, time/batch = 0.24s
7/21150 (epoch 0.017), train_loss = 3.36710169, grad/param norm = 4.3214e-01, time/batch = 0.24s
8/21150 (epoch 0.019), train_loss = 3.33051407, grad/param norm = 3.9960e-01, time/batch = 0.25s
9/21150 (epoch 0.021), train_loss = 3.29338821, grad/param norm = 3.8692e-01, time/batch = 0.25s
10/21150 (epoch 0.024), train_loss = 3.38265349, grad/param norm = 3.5570e-01, time/batch = 0.24s
11/21150 (epoch 0.026), train_loss = 3.30180837, grad/param norm = 3.5802e-01, time/batch = 0.21s
12/21150 (epoch 0.028), train_loss = 3.32234028, grad/param norm = 2.7511e-01, time/batch = 0.22s
13/21150 (epoch 0.031), train_loss = 3.30897652, grad/param norm = 2.4441e-01, time/batch = 0.21s
14/21150 (epoch 0.033), train_loss = 3.28692209, grad/param norm = 3.4632e-01, time/batch = 0.21s
15/21150 (epoch 0.035), train_loss = 3.36003187, grad/param norm = 3.9644e-01, time/batch = 0.21s
16/21150 (epoch 0.038), train_loss = 3.33848420, grad/param norm = 3.4806e-01, time/batch = 0.20s
17/21150 (epoch 0.040), train_loss = 3.29889108, grad/param norm = 3.9853e-01, time/batch = 0.20s
18/21150 (epoch 0.043), train_loss = 3.31901478, grad/param norm = 2.5557e-01, time/batch = 0.20s
19/21150 (epoch 0.045), train_loss = 3.30151819, grad/param norm = 2.5695e-01, time/batch = 0.20s
20/21150 (epoch 0.047), train_loss = 3.27959471, grad/param norm = 3.9650e-01, time/batch = 0.21s
21/21150 (epoch 0.050), train_loss = 3.32289038, grad/param norm = 4.0551e-01, time/batch = 0.21s
22/21150 (epoch 0.052), train_loss = 3.34279904, grad/param norm = 4.2532e-01, time/batch = 0.21s
23/21150 (epoch 0.054), train_loss = 3.34371620, grad/param norm = 3.1156e-01, time/batch = 0.20s
24/21150 (epoch 0.057), train_loss = 3.34361337, grad/param norm = 2.6665e-01, time/batch = 0.20s
25/21150 (epoch 0.059), train_loss = 3.38630153, grad/param norm = 2.8602e-01, time/batch = 0.20s
26/21150 (epoch 0.061), train_loss = 3.34342098, grad/param norm = 3.1997e-01, time/batch = 0.20s
27/21150 (epoch 0.064), train_loss = 3.29437441, grad/param norm = 3.1243e-01, time/batch = 0.20s
28/21150 (epoch 0.066), train_loss = 3.28385709, grad/param norm = 3.0503e-01, time/batch = 0.20s
29/21150 (epoch 0.069), train_loss = 3.27431360, grad/param norm = 2.9510e-01, time/batch = 0.21s
30/21150 (epoch 0.071), train_loss = 3.28938587, grad/param norm = 2.8415e-01, time/batch = 0.21s
31/21150 (epoch 0.073), train_loss = 3.33446110, grad/param norm = 3.1308e-01, time/batch = 0.21s
32/21150 (epoch 0.076), train_loss = 3.36726267, grad/param norm = 3.4115e-01, time/batch = 0.20s
33/21150 (epoch 0.078), train_loss = 3.29241379, grad/param norm = 3.8644e-01, time/batch = 0.21s
34/21150 (epoch 0.080), train_loss = 3.31882062, grad/param norm = 3.4167e-01, time/batch = 0.21s
35/21150 (epoch 0.083), train_loss = 3.30913388, grad/param norm = 2.8830e-01, time/batch = 0.20s
36/21150 (epoch 0.085), train_loss = 3.30367194, grad/param norm = 2.9479e-01, time/batch = 0.21s
37/21150 (epoch 0.087), train_loss = 3.30589630, grad/param norm = 3.3605e-01, time/batch = 0.21s
38/21150 (epoch 0.090), train_loss = 3.27835790, grad/param norm = 3.6932e-01, time/batch = 0.21s
39/21150 (epoch 0.092), train_loss = 3.33376216, grad/param norm = 4.5782e-01, time/batch = 0.21s
40/21150 (epoch 0.095), train_loss = 3.33414196, grad/param norm = 3.2741e-01, time/batch = 0.22s
41/21150 (epoch 0.097), train_loss = 3.33889602, grad/param norm = 2.2643e-01, time/batch = 0.20s
42/21150 (epoch 0.099), train_loss = 3.25347007, grad/param norm = 4.2083e-01, time/batch = 0.21s
43/21150 (epoch 0.102), train_loss = 3.30742478, grad/param norm = 7.4356e-01, time/batch = 0.21s
44/21150 (epoch 0.104), train_loss = 3.25784314, grad/param norm = 4.9779e-01, time/batch = 0.21s
45/21150 (epoch 0.106), train_loss = 3.33075873, grad/param norm = 3.2093e-01, time/batch = 0.21s
46/21150 (epoch 0.109), train_loss = 3.30681471, grad/param norm = 2.6679e-01, time/batch = 0.21s
47/21150 (epoch 0.111), train_loss = 3.32420676, grad/param norm = 2.7067e-01, time/batch = 0.21s
48/21150 (epoch 0.113), train_loss = 3.36881501, grad/param norm = 2.6440e-01, time/batch = 0.21s
49/21150 (epoch 0.116), train_loss = 3.30183160, grad/param norm = 3.3810e-01, time/batch = 0.21s
^Z
[15]+  Stopped                 th train.lua -opencl 1 -gpuid 0

ales-MacBook-Pro:char-rnn ale$ th train.lua -opencl 1 -gpuid 1
using OpenCL on GPU 1...
loading data files...
cutting off end of data so that the batches/sequences divide evenly
reshaping tensor...
data load done. Number of data batches in train: 423, val: 23, test: 0
vocab size: 65
creating an lstm with 2 layers
Using Apple , OpenCL platform: Apple
Using OpenCL device: AMD Radeon R9 M370X Compute Engine
setting forget gate biases to 1 in LSTM layer 1
setting forget gate biases to 1 in LSTM layer 2
number of parameters in the model: 240321
cloning rnn
cloning criterion
THClReduceAll.cl build log:
:11:10: warning: unused variable 'in1'
  float *in1 = &_in1;
         ^
:12:10: warning: unused variable 'out'
  float *out = &_out;
         ^
1/21150 (epoch 0.002), train_loss = 4.19803708, grad/param norm = 5.1721e-01, time/batch = 0.89s
2/21150 (epoch 0.005), train_loss = 3.93712081, grad/param norm = 1.4679e+00, time/batch = 0.20s
3/21150 (epoch 0.007), train_loss = 3.43751758, grad/param norm = 9.5793e-01, time/batch = 0.21s
4/21150 (epoch 0.009), train_loss = 3.41289288, grad/param norm = 7.5153e-01, time/batch = 0.21s
5/21150 (epoch 0.012), train_loss = 3.33699626, grad/param norm = 6.9269e-01, time/batch = 0.22s
6/21150 (epoch 0.014), train_loss = 3.37105595, grad/param norm = 5.2300e-01, time/batch = 0.22s
7/21150 (epoch 0.017), train_loss = 3.36710159, grad/param norm = 4.3214e-01, time/batch = 0.21s
8/21150 (epoch 0.019), train_loss = 3.33051396, grad/param norm = 3.9960e-01, time/batch = 0.22s
9/21150 (epoch 0.021), train_loss = 3.29338806, grad/param norm = 3.8692e-01, time/batch = 0.22s
10/21150 (epoch 0.024), train_loss = 3.38265328, grad/param norm = 3.5570e-01, time/batch = 0.22s
11/21150 (epoch 0.026), train_loss = 3.30180834, grad/param norm = 3.5802e-01, time/batch = 0.21s
12/21150 (epoch 0.028), train_loss = 3.32234014, grad/param norm = 2.7511e-01, time/batch = 0.21s
13/21150 (epoch 0.031), train_loss = 3.30897638, grad/param norm = 2.4441e-01, time/batch = 0.20s
14/21150 (epoch 0.033), train_loss = 3.28692197, grad/param norm = 3.4632e-01, time/batch = 0.20s
15/21150 (epoch 0.035), train_loss = 3.36003173, grad/param norm = 3.9644e-01, time/batch = 0.20s
16/21150 (epoch 0.038), train_loss = 3.33848409, grad/param norm = 3.4806e-01, time/batch = 0.21s
17/21150 (epoch 0.040), train_loss = 3.29889087, grad/param norm = 3.9853e-01, time/batch = 0.22s
18/21150 (epoch 0.043), train_loss = 3.31901459, grad/param norm = 2.5557e-01, time/batch = 0.20s
19/21150 (epoch 0.045), train_loss = 3.30151813, grad/param norm = 2.5695e-01, time/batch = 0.22s
20/21150 (epoch 0.047), train_loss = 3.27959461, grad/param norm = 3.9650e-01, time/batch = 0.21s
21/21150 (epoch 0.050), train_loss = 3.32289033, grad/param norm = 4.0551e-01, time/batch = 0.21s
22/21150 (epoch 0.052), train_loss = 3.34279893, grad/param norm = 4.2532e-01, time/batch = 0.21s
23/21150 (epoch 0.054), train_loss = 3.34371612, grad/param norm = 3.1156e-01, time/batch = 0.21s
24/21150 (epoch 0.057), train_loss = 3.34361327, grad/param norm = 2.6665e-01, time/batch = 0.22s
25/21150 (epoch 0.059), train_loss = 3.38630135, grad/param norm = 2.8602e-01, time/batch = 0.21s
26/21150 (epoch 0.061), train_loss = 3.34342085, grad/param norm = 3.1997e-01, time/batch = 0.21s
27/21150 (epoch 0.064), train_loss = 3.29437426, grad/param norm = 3.1243e-01, time/batch = 0.21s
28/21150 (epoch 0.066), train_loss = 3.28385698, grad/param norm = 3.0503e-01, time/batch = 0.21s
29/21150 (epoch 0.069), train_loss = 3.27431346, grad/param norm = 2.9510e-01, time/batch = 0.21s
30/21150 (epoch 0.071), train_loss = 3.28938577, grad/param norm = 2.8415e-01, time/batch = 0.21s
31/21150 (epoch 0.073), train_loss = 3.33446099, grad/param norm = 3.1308e-01, time/batch = 0.20s
32/21150 (epoch 0.076), train_loss = 3.36726261, grad/param norm = 3.4115e-01, time/batch = 0.21s
33/21150 (epoch 0.078), train_loss = 3.29241365, grad/param norm = 3.8644e-01, time/batch = 0.21s
34/21150 (epoch 0.080), train_loss = 3.31882050, grad/param norm = 3.4167e-01, time/batch = 0.21s
35/21150 (epoch 0.083), train_loss = 3.30913375, grad/param norm = 2.8830e-01, time/batch = 0.21s
36/21150 (epoch 0.085), train_loss = 3.30367175, grad/param norm = 2.9479e-01, time/batch = 0.22s
37/21150 (epoch 0.087), train_loss = 3.30589622, grad/param norm = 3.3605e-01, time/batch = 0.22s
38/21150 (epoch 0.090), train_loss = 3.27835778, grad/param norm = 3.6931e-01, time/batch = 0.20s
39/21150 (epoch 0.092), train_loss = 3.33376205, grad/param norm = 4.5782e-01, time/batch = 0.20s
40/21150 (epoch 0.095), train_loss = 3.33414185, grad/param norm = 3.2741e-01, time/batch = 0.21s
^Z
[16]+  Stopped                 th train.lua -opencl 1 -gpuid 1
hughperkins commented 8 years ago

Yes, unfortunately it's not very fast on this particular model. There are lots of kernel launches, each handling only a few thousand floats, so the per-launch overhead dominates. I think the way forward here is kernel fusion, which I started on but have put to one side for now. You can look at the code I have so far at https://github.com/hughperkins/clnn/tree/fused-modules , https://github.com/hughperkins/clnn/tree/fusibles , and/or https://github.com/hughperkins/clnn/tree/connectors , and a description of approximately how it works at https://github.com/torch/nngraph/issues/60#issuecomment-126917549 .
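Roughly, the idea looks like the following sketch. This is illustrative only, not the actual clnn/cltorch kernels; the kernel names (sigmoid_k, mul_k, fused_sigmoid_mul) are made up for the example:

// Illustrative OpenCL C sketch of kernel fusion (not the real clnn code).
// Unfused: each pointwise op is a separate kernel launch, so for a tensor
// of only a few thousand floats the launch/driver overhead dominates.
__kernel void sigmoid_k(__global float *out, __global const float *in, int n) {
    int i = get_global_id(0);
    if (i < n) out[i] = 1.0f / (1.0f + exp(-in[i]));
}
__kernel void mul_k(__global float *out, __global const float *a,
                    __global const float *b, int n) {
    int i = get_global_id(0);
    if (i < n) out[i] = a[i] * b[i];
}

// Fused: one launch does both ops, the intermediate stays in a register
// instead of a global-memory temporary, and the launch overhead is paid once.
__kernel void fused_sigmoid_mul(__global float *out, __global const float *gate,
                                __global const float *val, int n) {
    int i = get_global_id(0);
    if (i < n) {
        float g = 1.0f / (1.0f + exp(-gate[i]));  // sigmoid, kept in a register
        out[i] = g * val[i];
    }
}

An LSTM step involves many such small pointwise ops per timestep, so collapsing chains of them into a handful of fused kernels is where the speed-up would come from.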

hughperkins commented 8 years ago

Closing this, since it's a question really, right?