Closed: Co-eus closed this issue 4 years ago
We should also look into Python's multiprocessing module and use pipes to exchange data between the applications (Main-Controller with GUI, Send-Module, Visualizer, Sound-To-Light, ...) to get rid of runtime dependencies. This might also speed things up in general.
For testing the generators and effects I will write a small test suite that simply calls all the effects with different parameters and logs their CPU time. Maybe we can identify the bottlenecks of the generators and effects this way.
Output of the test script:
Starting test run to study the performance
Time used by g_corner_grow per frame in milliseconds: 0.8263
Time used by g_growing_sphere per frame in milliseconds: 283.615
Time used by g_planes per frame in milliseconds: 0.7372
Time used by g_shooting_star per frame in milliseconds: 334.2029
Time used by g_corner per frame in milliseconds: 1.1129
Time used by g_orbiter per frame in milliseconds: 386.8811
Time used by g_randomlines per frame in milliseconds: 1.2152
Time used by g_snake per frame in milliseconds: 1.1734
Time used by g_cube per frame in milliseconds: 12.3328
Time used by g_planes_falling per frame in milliseconds: 8.1245
Time used by g_random per frame in milliseconds: 2.6302
Time used by g_sphere per frame in milliseconds: 215.9358
Okay, all of these problems can be solved easily by moving the 10x10x10 for-loops of the slow routines into a Fortran routine, which can easily be imported via f2py. I just checked for g_sphere; these are the times I got:
Starting test run to study the performance
Time used by fortran_test_sphere per frame in milliseconds: 0.0096
Time used by g_sphere per frame in milliseconds: 1.1528
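For reference, the f2py route could look roughly like this (routine and module names are made up for illustration): the hot triple loop goes into a small fixed-form Fortran file and f2py wraps it as an importable extension module.

```fortran
c sphere.f -- hypothetical sketch: fill voxels inside a sphere
      subroutine fill_sphere(vox, n, cx, cy, cz, r)
      integer n
      real vox(n, n, n)
      real cx, cy, cz, r
Cf2py intent(in, out) vox
      integer i, j, k
      real d2
      do k = 1, n
        do j = 1, n
          do i = 1, n
            d2 = (i - cx)**2 + (j - cy)**2 + (k - cz)**2
            if (d2 .le. r**2) vox(i, j, k) = 1.0
          end do
        end do
      end do
      end
```

Built with something like `f2py -c -m fortran_effects sphere.f`, it would then be callable from Python as `from fortran_effects import fill_sphere`.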
Some other ideas for speed improvements:
The Fortran implementation of world2vox is done, but not tested yet!
-> works
At some point we should use the timeit function to track down the slow routines and make them faster!
See for example: https://scipy-cookbook.readthedocs.io/items/PerformancePython.html
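For example, timeit can time a pure-Python voxel loop of the kind that showed up as slow in the run above (the generator here is a stand-in, not one of the actual routines):

```python
import timeit

# Hypothetical slow routine: a plain 10x10x10 triple loop.
setup = """
def g_loop(size=10):
    vox = [[[0] * size for _ in range(size)] for _ in range(size)]
    for x in range(size):
        for y in range(size):
            for z in range(size):
                vox[x][y][z] = x + y + z
    return vox
"""

# Average seconds per call over 1000 calls.
per_call = timeit.timeit("g_loop()", setup=setup, number=1000) / 1000
print("milliseconds per frame:", per_call * 1000.0)
```

Running each candidate routine through the same loop count makes the per-frame numbers directly comparable.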