Motivation and Context
In order to inline cached functions, we must store the native- and LLVM-layer intermediates, along with the complexities. This should reduce cache load time and increase compiled-function speed and predictability.
Approach
The optimal approach would be to regenerate the necessary intermediates from the information available at the native layer, but as it stands I don't think we have enough information to do so. Instead, we track the necessary intermediates in the BinarySharedObject, which is read from and written to disk.
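As a rough illustration of the approach, the sketch below shows one way a cache entry could carry the compiled artifact alongside the intermediates and complexities, and round-trip them to disk. The field names, the pickle-based serialization, and the `write`/`read` methods are all hypothetical, not the actual BinarySharedObject API:

```python
import pickle
from dataclasses import dataclass, field
from pathlib import Path


@dataclass
class BinarySharedObject:
    """Hypothetical sketch of a cache entry that stores the compiled
    binary together with the intermediates needed for later inlining."""
    binary: bytes                                   # compiled shared-object contents
    native_ir: str = ""                             # native-layer intermediate
    llvm_ir: str = ""                               # LLVM-layer intermediate
    complexities: dict = field(default_factory=dict)  # per-function complexities

    def write(self, path: Path) -> None:
        # Serialize the whole entry, intermediates included, in one pass.
        path.write_bytes(pickle.dumps(self))

    @classmethod
    def read(cls, path: Path) -> "BinarySharedObject":
        # Loading the entry recovers the intermediates without recompiling.
        return pickle.loads(path.read_bytes())
```

The point of the sketch is that the intermediates travel with the binary in a single on-disk object, so a cache hit hands the inliner everything it needs.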
How Has This Been Tested?
Ran the full pytest suite, and used temporary debugging statements (since removed) to confirm that the correct code path is hit.
Types of changes
[ ] Bug fix (non-breaking change which fixes an issue)
[x] New feature (non-breaking change which adds functionality)
[ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
Checklist:
[x] My code follows the code style of this project.
[ ] My change requires a change to the documentation.
NB: depends on #440.