crazylyf opened this issue 8 years ago
In SConstruct, comment out this line:
env.Append(CPPDEFINES={'CLSTM_ALL_TENSOR': '1'})
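For reference, a minimal sketch of what that change might look like. This is not the repository's actual SConstruct (the real file configures the build environment in much more detail); it only illustrates disabling the define, which in versions that still ship both implementations switches the build to the matrix code paths.

```python
# Sketch only, not clstm's actual SConstruct; the Environment setup here
# stands in for the real configuration.
env = Environment()

# Commenting out this define builds without CLSTM_ALL_TENSOR, i.e. the
# matrix code paths instead of the tensor ones (assuming the version
# being built still contains both implementations).
# env.Append(CPPDEFINES={'CLSTM_ALL_TENSOR': '1'})
```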
In earlier versions of Eigen, the matrix code was better optimized than the tensor code, but that is no longer true, so the matrix version isn't needed anymore.
Tom, so why is a588c8 (matrix only, no tensor) more than 2.5× faster in training compared to the tip of the repo? Tested on my four-year-old PC: Intel i5, 4 cores, 8 GB RAM, no dedicated GPU.
The speed of matrix vs tensor depends crucially on the version of Eigen; the version you have installed may be too old. Can you install Eigen from the GitHub source and then run the benchmarks again?
(Most of the work of optimizing Eigen these days goes into the tensor code, since that's what other deep learning frameworks use.)
I forgot to mention that I tested this with Eigen 3.3 rc1.
On Ubuntu 16.04 (gcc 5.1 5.4).
That sounds odd. I benchmarked both versions against each other before switching. Recent versions of Eigen really shouldn't have big differences in linear algebra performance between tensor and matrix; after all, the tensor code drives much of Google's TensorFlow.
Is it possible your matrix code is running multicore? Checking with htop while it's running should give you some idea.
I checked with htop. Only one CPU runs at 100%; the others are idle or below 2% most of the time.
Hmmm... I'm not sure. At this point, all I can say is that when I made the switch, the two performed pretty much identically to each other, and basically gave the same performance as a good BLAS implementation. Whatever the cause is, it ought to be fixable without switching back to the Eigen matrix backend. I'll leave the bug open and see whether I can reproduce it and find a quick fix.
Make sure you are testing cpu only without gpu involvement.
Weird, with the latest code from master I get step times of ~0.3s with the uw3-500 dataset, while with the a588c8 version they are between 6.5s and 10.5s, i.e. the matrix code is significantly slower. Both were compiled with eigen 3.3beta2 on Debian unstable.
edit: Using the latest eigen checkout, the performance difference still remains.
> with the latest code from master I get step times of ~0.3s with the uw3-500 dataset

I get ~0.8s with the uw3-500 dataset.

> while with the a588c8 version they are between 6.5s and 10.5s

I get ~0.3s...
The slowness of the matrix code was due to me using debug=-1 (which enables, among other things, -Ofast). With the default settings the matrix version runs at ~0.3s and the tensor version is between 0.5s and 0.9s, i.e. the same as on your machine (I'm running this on an i5 clocked at 3.1 GHz).
So it seems that to get comparable performance, the compiler flags have to be tuned for the tensor version.
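For readers following along, here is a rough sketch of the pattern being described. It is not a copy of clstm's SConstruct; only -Ofast is actually named in this thread, and the other flags and the option handling are assumptions for illustration.

```python
# Illustrative sketch, not clstm's actual SConstruct: a negative debug
# level selects aggressive optimization, which appears to be what the
# tensor code needs to match the matrix code's speed.
debug = int(ARGUMENTS.get('debug', 0))

env = Environment()
if debug < 0:
    # scons debug=-1: aggressive optimization (-Ofast is named in this
    # thread; -march=native is an assumption for illustration)
    env.Append(CCFLAGS=['-g', '-Ofast', '-march=native'])
elif debug == 0:
    # default build: a more conservative optimization level (assumed)
    env.Append(CCFLAGS=['-g', '-O2'])
else:
    # explicit debug build: no optimization
    env.Append(CCFLAGS=['-g', '-O0'])
```

Under a layout like this, `scons debug=-1` produces the fast tensor build while a plain `scons` does not, which would match the timings reported above.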
What's the CPU in each case?
Here's an excerpt of my /proc/cpuinfo:
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
stepping : 9
microcode : 0x1c
cpu MHz : 2899.902
cache size : 3072 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 2
apicid : 3
initial apicid : 3
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm epb tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts
bugs :
bogomips : 4989.61
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
Note that there's a turbo mode where the CPU clock goes up to 3.1 GHz, which was active when I benchmarked clstm.
Compiler version:
g++ (Debian 6.2.0-5) 6.2.0 20160927
I can confirm that when installing the latest code with scons debug=-1, I get the same performance as the matrix-based code.
My cpu:
model name : Intel(R) Core(TM) i5-3450 CPU @ 3.10GHz
stepping : 9
cpu MHz : 2462.683
cache size : 6144 KB
cpu cores : 4
gcc version 5.4.0
This issue has not been fixed yet. Every user of the master branch will suffer from the slowness unless they build with debug=-1. A fix in SConstruct is needed.
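One possible shape for such a fix, sketched under the same assumptions as the snippet above (this is only an illustration, not a patch that has been applied): make the optimized flags the default, so that a plain scons build behaves like scons debug=-1.

```python
# Sketch of a possible SConstruct fix (not an applied patch): default to
# the optimization level that debug=-1 currently selects, so the tensor
# backend is fast out of the box; a debug build can still be requested
# explicitly, e.g. with scons debug=1.
debug = int(ARGUMENTS.get('debug', -1))  # hypothetical: default changed from 0 to -1
```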
In the previous version, one could accelerate the computation by installing with "usemat=1", which used the matrix code instead of the tensor code. But in the current version, the matrix alternative is not supported. Does this mean the tensor version is just as fast now? Why was the matrix code removed?
Best