This PR fixes a performance issue I encountered when uncapping the PyTorch version. The issue seems to manifest mostly in the inception examples, where there are a large number of high-dimensional tensors. Model evaluation in the execution engine spends over 90% of its time formatting tensors to strings even though verbose is set to false, because the tensors are converted to strings regardless of whether they are actually printed. Not sure why this started manifesting on newer versions of PyTorch, but it made these tests very slow. This PR just puts all prints behind an `if verbose` conditional. We should probably write a better logging facility going forward. Also, this code should probably be moved over to the MDF branch, I think.
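To illustrate the pattern of the fix (a minimal sketch, not the actual execution-engine code; `FakeTensor` and `evaluate` are hypothetical names standing in for the real tensors and evaluation loop): the cost comes from the implicit `str()` conversion inside the f-string, so moving the whole print behind the flag avoids formatting entirely when verbose is off.

```python
class FakeTensor:
    """Stands in for a large tensor; counts how often it is formatted."""
    str_calls = 0

    def __str__(self):
        FakeTensor.str_calls += 1
        return "tensor([...])"


def evaluate(tensors, verbose=False):
    for t in tensors:
        # Before: the string was built unconditionally, even when it was
        # never printed:
        #   message = f"output = {t}"   # expensive str() on every tensor
        #   if verbose:
        #       print(message)
        # After: the print (and hence the str() conversion) only happens
        # when verbose is actually set.
        if verbose:
            print(f"output = {t}")


evaluate([FakeTensor() for _ in range(1000)], verbose=False)
print(FakeTensor.str_calls)  # 0 -- no tensor was formatted
```

A proper logging facility (e.g. the stdlib `logging` module with lazy `%`-style arguments) would give the same deferred-formatting behavior without hand-written conditionals.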