**Closed** · pauldhein closed this 4 years ago
I'd like to preserve the option to generate the lambdas as they are right now as well (perhaps with a `torch=False` kwarg on `GrFNGenerator` class instantiation); the current version is extremely easy to convert to LaTeX via SymPy.
@pauldhein I'm going to mark this as closed by #428.
## Background

Currently we have a GrFN specified by a set of lambdas and a JSON file that is created via my `GroundedFunctionNetwork` class. This GrFN is capable of processing a single set of inputs at a time. Currently our system takes about one second to evaluate ~1,000 samples through a GrFN.

## Problem
Unfortunately for model analysis we need to be able to evaluate many more samples much faster.
## Solution
Update the lambdas to allow for batched processing of samples through the generated GrFN.
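To make the idea concrete, here is a minimal sketch (with a hypothetical lambda name and formula, not the actual generated code) of how a generated lambda can accept either a single sample or a whole batch of samples unchanged:

```python
import torch

# Hypothetical lambda in the style of the generated GrFN lambdas;
# the formula here is illustrative, not the real PETPT code.
def petpt__td__assign(tmax, tmin):
    # Daily mean temperature from max and min temperature.
    return 0.5 * (tmax + tmin)

# Scalar evaluation: one sample at a time, as the GrFN works today.
print(petpt__td__assign(30.0, 20.0))  # 25.0

# Batched evaluation: the same lambda accepts tensors of samples,
# because arithmetic ops broadcast elementwise over torch tensors.
tmax = torch.tensor([30.0, 32.0, 28.0])
tmin = torch.tensor([20.0, 21.0, 19.0])
print(petpt__td__assign(tmax, tmin))  # tensor([25.0000, 26.5000, 23.5000])
```

Because PyTorch broadcasts elementwise arithmetic over tensors, the lambda body itself does not change; only the inputs do.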
## Method

Convert the generated lambdas to use PyTorch tensor operations, demonstrated here on the `PETPT` module. When using a CPU we can expect to be able to process ~20,000,000 samples per second, and when using a GPU (as long as we have access to a GPU with enough memory) we can expect to process ~30,000,000,000 samples per second on `PETPT`.
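Actual throughput will vary with hardware and batch size; a rough sketch for measuring it yourself on a single lambda (hypothetical function name and coefficients, not the real PETPT code):

```python
import time
import torch

# Hypothetical stand-in for one generated lambda; the constants are
# illustrative, not the real PETPT coefficients.
def petpt__eeq__assign(td, albedo, srad):
    return srad * (0.004876 - 0.004374 * albedo) * (td + 29.0)

n = 1_000_000  # number of samples in the batch
td = torch.rand(n) * 15 + 15     # mean temperatures in [15, 30)
albedo = torch.full((n,), 0.23)  # constant albedo for all samples
srad = torch.rand(n) * 20 + 5    # solar radiation in [5, 25)

start = time.perf_counter()
eeq = petpt__eeq__assign(td, albedo, srad)
elapsed = time.perf_counter() - start
print(f"{n / elapsed:,.0f} samples/sec through one lambda")
```

To benchmark on a GPU, move the tensors with `.to('cuda')` and call `torch.cuda.synchronize()` before reading the timer, since CUDA kernels launch asynchronously.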
## Implementation
Below are the original lambdas file for `PETPT` as well as the updated lambdas file I made that uses PyTorch to implement the functions in `PETPT`. As you can see, the changes required are minimal thanks to the intelligent broadcasting semantics of PyTorch.

### Original PETPT Lambdas

*(attached in the original issue; not reproduced here)*

### Torch variant of PETPT Lambdas

*(attached in the original issue; not reproduced here)*
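As an illustration of the kind of edit involved (a hypothetical lambda, not the real PETPT code): elementwise arithmetic carries over unchanged via broadcasting, while a scalar conditional must become `torch.where` so the branch is evaluated per sample:

```python
import torch

# Original scalar lambda: works on one sample at a time.
# (Hypothetical example, not taken from the attached files.)
def petpt__eo__assign(eeq, tmax):
    return eeq * 1.1 if tmax > 35.0 else eeq

# Torch variant: torch.where evaluates the condition elementwise,
# so the same logic applies across a whole batch of samples.
def petpt__eo__assign_torch(eeq, tmax):
    return torch.where(tmax > 35.0, eeq * 1.1, eeq)

eeq = torch.tensor([4.0, 5.0])
tmax = torch.tensor([30.0, 40.0])
print(petpt__eo__assign_torch(eeq, tmax))  # tensor([4.0000, 5.5000])
```

`torch.where` selects from `eeq * 1.1` or `eeq` elementwise, so one call replaces the per-sample `if`; this is the main pattern that cannot be handled by broadcasting alone.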
@pratikbhd, @skdebray, @cl4yton let's be sure to discuss this at the PA meeting today.