Open DeadParrot opened 8 years ago
Take advantage of AVX
AVX widens the SIMD pipeline (256-bit registers versus 128-bit for SSE), but the way EnergyPlus code is currently structured makes it difficult for the compiler to automatically and fully exploit the newer instructions and wider vector lanes. This may change if optimized linear algebra libraries are adopted in EnergyPlus and the code is refactored.
Migrated from UserVoice
Is the number of supporters from the EnergyPlus UserVoice forum somehow passed to this issues thread?
Auto-vectorization can improve performance for some loops. Because it reorders floating-point operations, it violates strict IEEE floating-point semantics, so compilers require "fast math" options to be specified before they will auto-vectorize floating-point loops. Recently, effort has gone into making EnergyPlus more "vectorization-friendly". Auto-vectorization makes it practical to get a large number of loops to vectorize, so validating the use of "fast math" options with EnergyPlus is of interest.
With "fast math", precision changes are probably neutral on average. Of more concern is that floating-point error handling, such as NaN detection/propagation, may be affected. It may not be enough to have floating-point error handling only in debug builds.

If there are indeed scenarios where "fast math" interferes with important floating-point handling, one solution may be a separate "fast" build that uses "fast math" options together with performance switches such as setting the target architecture, so that wider vectorization pipelines like AVX can be exploited. In that case it is recommended that Intel C++ be used for the Windows "fast" build, as it offers significant performance/vectorization benefits.