Just pushing the updates I've been working on to optimize pdaggerq. Here's the quick rundown:
Speeding Things Up:
Swapped out std::map for std::unordered_map to speed up lookups.
Changed the loops over the maps to use range-based for ("foreach") syntax. The integral_types and amplitude_types are no longer used.
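To illustrate the two changes above, here's a minimal sketch (the map and function below are hypothetical stand-ins, not pdaggerq's actual tables): std::unordered_map hashes its keys for average O(1) lookup, versus O(log n) tree traversal in std::map, at the cost of losing sorted iteration order.

```cpp
#include <string>
#include <unordered_map>

// Hypothetical stand-in for one of pdaggerq's internal lookup tables.
double sum_coefficients(const std::unordered_map<std::string, double> &coeffs) {
    double total = 0.0;
    // Range-based for ("foreach") over the map: each element is a key/value pair.
    for (const auto &entry : coeffs) {
        total += entry.second;
    }
    return total;
}
```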
Changed the signatures of most functions to pass-by-reference instead of pass-by-value. I also added const qualifiers to those functions to make it explicit whether a function may modify the value of an argument or the members of a class (the printing functions, for example).
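A toy class showing the signature style in question (illustrative only, not pdaggerq's actual classes): const references avoid copies, and trailing const marks read-only member functions like the printing helpers.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

class Term {
public:
    explicit Term(std::vector<std::string> factors) : factors_(std::move(factors)) {}

    // Pass-by-const-reference: no copy of the argument is made, and the
    // const promises the function won't modify it.
    bool contains(const std::string &factor) const {
        for (const auto &f : factors_) {
            if (f == factor) return true;
        }
        return false;
    }

    // Trailing const marks a read-only member function: it cannot
    // modify class members, which the compiler now enforces.
    std::size_t size() const { return factors_.size(); }

private:
    std::vector<std::string> factors_;
};
```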
Modified the assignment and copy operators to better utilize the constructors already available in pdaggerq.
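One common way to reuse a class's constructors in its assignment operator is the copy-and-swap idiom, sketched below on a hypothetical class (I'm not claiming this is exactly how pdaggerq does it): the assignment operator takes its argument by value, which invokes the copy constructor, and then swaps, so the copying logic lives in one place.

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

class StringList {
public:
    StringList() = default;
    explicit StringList(std::vector<std::string> strings)
        : strings_(std::move(strings)) {}

    // The copy constructor does the real copying work once...
    StringList(const StringList &other) = default;

    // ...and assignment reuses it: the by-value parameter is built via the
    // copy constructor, then its contents are swapped into *this.
    StringList &operator=(StringList other) {
        std::swap(strings_, other.strings_);
        return *this;
    }

    std::size_t size() const { return strings_.size(); }

private:
    std::vector<std::string> strings_;
};
```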
I added a pytest in test/test_pq.py to verify that the generated equations are unchanged. The script orders the tensors within each string and then orders the strings alphabetically, so a test isn't marked as failed when only the ordering differs. I will still need to add the capability to compare using generic labels as well.
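The normalization idea behind that comparison, sketched here in C++ for illustration (the actual test is a pytest in Python): treat an equation as a list of strings and each string as a list of tensor factors, sort the factors within each string, then sort the strings, so two equations compare equal regardless of the order they were generated in.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Canonicalize an equation: sort tensors within each string, then sort the
// strings alphabetically. Equal canonical forms mean equal equations up to
// ordering.
std::vector<std::string> normalize(std::vector<std::vector<std::string>> equation) {
    std::vector<std::string> canonical;
    for (auto &term : equation) {
        std::sort(term.begin(), term.end());       // order tensors within a string
        std::string joined;
        for (const auto &t : term) joined += t + " ";
        canonical.push_back(joined);
    }
    std::sort(canonical.begin(), canonical.end()); // order the strings themselves
    return canonical;
}
```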
I've also added a few notes on suggested changes for the future. Let me know what you think or if there's anything that needs a second look.
Thank you, Marcus