FredLunnon opened this issue 7 years ago
I think rkern wrote this module for pedagogical purposes, so I am kind of abusing it with numeric applications. I don't have needs for high-dimensional algebras (...yet), but this is an important problem to point out.
Fred,

if you run this,

```python
import clifford as cf
import cProfile
cProfile.run('cf.Cl(7)', sort='cumtime')
```

it shows that the big slowdowns occur in `_genTables`, which calls `_gmtElement`. I think the culprit is `_genTables`, which you may have insight about. It looks like it is written using for loops, evaluating each element to generate the table.
I don't have the actual numbers to hand here, but it might be worth mentioning that I have since run further tests comparing speeds excluding initialisation. For dimension n = 4, clifford runs ~4x faster than ClifFred, which seems entirely reasonable given that Kern uses numpy whereas I'm doing everything in Python.
But for n = 6, clifford runs only ~3x faster; for n = 8, incredibly, it runs ~100x slower! I cannot even begin to speculate what is going on there.
WFL
On Sat, Oct 1, 2016 at 2:54 PM, alex arsenovic notifications@github.com wrote:
[quoted text of the previous comment elided]
@FredLunnon We have recently made a lot of changes to the performance of this project. High-dimension algebra initialisation times are probably still pretty big, but everything else should be a lot faster. It would be really interesting to see a scaling benchmark comparing the latest release of this library with ClifFred.
Dear Hugo Hadfield (?)
I did spend a good deal of time a year or two back in an attempt to comparatively benchmark existing GA systems: but it proved difficult to produce any meaningful results, largely because both functionalities and performance in practice diverge so wildly.
I will keep your package in mind if and when I return to this project.
Regards, Fred Lunnon
On 7/16/18, hugohadfield notifications@github.com wrote:
[quoted text of the previous comment elided]
Hello @arsenovic @hugohadfield
We have high-dimensional applications in mind where we only make use of a subspace of grades. For example, the full Cl(n) Clifford algebra grows in dimensionality as O(2^n), while using only the blades of grades k = 1 and k = n - 1 grows linearly as O(2n).
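These growth rates are easy to sanity-check by counting basis blades per grade (a quick sketch, assuming the usual binomial blade counts for Cl(n); the function name is ours, not part of clifford):

```python
from math import comb

# Number of basis blades in full Cl(n) versus a {1, n-1} grade subset.
def blade_counts(n):
    full = sum(comb(n, k) for k in range(n + 1))  # equals 2**n
    sparse = comb(n, 1) + comb(n, n - 1)          # equals 2n for n >= 2
    return full, sparse

blade_counts(10)  # -> (1024, 20)
```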
I wrote a quick naive implementation just to test initialisation times; the kr argument is used to specify a list of the desired grades.
https://github.com/Stefan-Endres/clifford/commit/81c78c3cd36b3d952ec652072167af786d874116
>>> cf.Cl(4, kr=[1, 3])
(Layout([1, 1, 1, 1], [(), (1,), (2,), (3,), (4,), (1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)],
firstIdx=1,
names=['', 'e1', 'e2', 'e3', 'e4', 'e123', 'e124', 'e134', 'e234']),
{'e1': (1^e1), 'e2': (1^e2), 'e3': (1^e3), 'e4': (1^e4),
'e123': (1^e123), 'e124': (1^e124), 'e134': (1^e134), 'e234':
(1^e234)})
>>> cf.Cl(4)
(Layout([1, 1, 1, 1], [(), (1,), (2,), (3,), (4,), (1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4), (1, 2, 3, 4)],
firstIdx=1,
names=['', 'e1', 'e2', 'e3', 'e4', 'e12', 'e13', 'e14', 'e23', 'e24', 'e34', 'e123', 'e124', 'e134', 'e234', 'e1234']),
{'e1': (1^e1), 'e2': (1^e2), 'e3': (1^e3), 'e4': (1^e4),
'e12': (1^e12), 'e13': (1^e13), 'e14': (1^e14), 'e23': (1^e23), 'e24': (1^e24), 'e34': (1^e34),
'e123': (1^e123), 'e124': (1^e124), 'e134': (1^e134), 'e234': (1^e234), 'e1234':
(1^e1234)})
Some basic operations, such as multivector addition and grade projection, are preserved nicely. Others, such as the dual (in cases where the result is an initialised grade) and reflection, are not.
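To illustrate why addition and grade projection survive so easily, here is a minimal sparse-multivector sketch (hypothetical, not the clifford API): coefficients are keyed by basis-blade bitmap, so operations only ever touch the grades that are actually stored.

```python
# Sparse multivector as {blade_bitmap: coefficient}; bit i set means e_{i+1}
# is a factor of the blade, so a blade's grade is its bitmap's popcount.
def mv_add(a, b):
    out = dict(a)
    for blade, coeff in b.items():
        out[blade] = out.get(blade, 0.0) + coeff
    return out

def grade_project(a, k):
    return {blade: c for blade, c in a.items() if bin(blade).count("1") == k}

v = mv_add({0b0001: 1.0}, {0b0001: 2.0, 0b0111: 3.0})  # 3*e1 + 3*e123
grade_project(v, 1)                                     # {0b0001: 3.0}
```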
The initialisation time growth isn't as promising as I'd hoped for either, but it could possibly be drastically improved with a less naive implementation. In particular, 99.1% of the initialisation is spent in this loop in _getEvenOdd: https://github.com/pygae/clifford/blob/8793d3e4faf758770ae80d8d9e111574677772bc/clifford/__init__.py#L567
There is a comment in the code stating that the loop runs for grade! permutations; however, at the moment it is unclear to me how the rest of the algebra and code will be affected if I change the range of these permutations, or whether it is even still looping over all the canonical grades, especially when supplied with more arbitrary grade ranges.
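One plausible direction (a sketch only, under the assumption that the layout can be built from an arbitrary blade list) is to enumerate just the basis blades of the requested grades up front, so the table-building loops never see the other 2^n - 2n blades:

```python
from itertools import combinations

# Enumerate basis blades (as index tuples) for the requested grades only;
# `kr` here mirrors the experimental argument in the commit linked above.
def blades_of_grades(n, kr):
    blades = []
    for k in kr:
        blades.extend(combinations(range(1, n + 1), k))
    return blades

blades_of_grades(4, [1, 3])
# [(1,), (2,), (3,), (4,), (1, 2, 3), (1, 2, 4), (1, 3, 4), (2, 3, 4)]
```

This reproduces exactly the blade tuples shown in the Layout output earlier in the thread.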
Initialisation times:

Call | Initialisation time (s) |
---|---|
cf.Cl(4) | 0.2250 |
cf.Cl(4, kr=[1, 3]) | 0.0010 |
cf.Cl(5) | 0.0098 |
cf.Cl(5, kr=[1, 4]) | 0.0021 |
cf.Cl(6) | 0.0555 |
cf.Cl(6, kr=[1, 5]) | 0.0104 |
cf.Cl(7) | 0.4103 |
cf.Cl(7, kr=[1, 6]) | 0.0871 |
cf.Cl(8) | 4.2429 |
cf.Cl(8, kr=[1, 7]) | 1.0087 |
cf.Cl(9) | 61.9748 |
cf.Cl(9, kr=[1, 8]) | 15.8755 |
cf.Cl(10) | did not initialise after several minutes |
cf.Cl(10, kr=[1, 9]) | 329.2088 |
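Numbers like these can be reproduced with simple wall-clock measurement; a small reusable sketch (the cf.Cl / kr calls belong to the experimental fork linked above, so a stand-in workload is used here):

```python
import timeit

def init_time(setup_fn, repeat=3):
    """Best wall-clock time (s) over `repeat` single runs of setup_fn."""
    return min(timeit.repeat(setup_fn, number=1, repeat=repeat))

# Substitute e.g. `lambda: cf.Cl(9)` or `lambda: cf.Cl(9, kr=[1, 8])`
# to reproduce the numbers above.
t = init_time(lambda: sum(i * i for i in range(10_000)))
```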
cProfile for cf.Cl(9, kr=[1, 8]):
Name | Call Count | Time (ms) | Own Time (ms) |
---|---|---|---|
init | 1 | 23157 | 0 |
Cl | 1 | 23157 | 0 |
_genEvenOdd | 1 | 23155 | 973 |
modify_idx | 362880 | 20229 | 11265 |
typeof | 3265940 | 8964 | 1721 |
wrapper | 3265942 | 6188 | 1311 |
_typeof_int | 3265937 | 3374 | 1064 |
bit_length | 3265937 | 2310 | 1436 |
I realize this is a bit outside the scope of the developers' interest for this module, but I was hoping someone could at least point us in the right direction, or alternatively comment on whether it is even theoretically possible to implement something like this for the module.
Thank you for reading in any case.
Regards, Stefan Endres
I think this feature is a good idea; you might copy your comment into a new issue called something like 'Support for Sparse Algebras'. I will defer to hugo for the advice on this.
Thank you for the reply. I made a new issue here #40 .
Initialisation timings: Cl(6) 0.195 sec, Cl(7) 3.7 sec, Cl(8) 2.5 min. Building the multiplication table going from n to n+1 costs 20x, then 40x longer, instead of the expected ~5.3x.
Why is it necessary to search an (evidently very large) table at all?
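On that last question: for basis blades encoded as bitmaps, the geometric product of two blades can indeed be computed on the fly with no table lookup at all. A sketch for a purely Euclidean signature follows (the function name is ours; the sign comes from counting the transpositions needed to sort the generators into canonical order):

```python
def blade_product(a, b):
    """Geometric product of two basis blades given as bitmaps (bit i set
    means e_{i+1} is present), Euclidean signature. Returns (sign, bitmap);
    repeated generators square to +1 and cancel via XOR."""
    swaps = 0
    x = a >> 1
    while x:  # count generator transpositions between the two blades
        swaps += bin(x & b).count("1")
        x >>= 1
    sign = -1 if swaps % 2 else 1
    return sign, a ^ b

blade_product(0b01, 0b10)  # e1 * e2 -> (1, 0b11), i.e. e12
blade_product(0b10, 0b01)  # e2 * e1 -> (-1, 0b11), i.e. -e12
```

This is the standard bitmap representation trick (described e.g. by Dorst, Fontijne & Mann); signatures where some e_i^2 is -1 or 0 need an extra sign/zero factor for the bits set in a & b.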