Closed — scarrazza closed this issue 4 years ago
I'm not sure I understand the comparison you are making. Do you mean you tried Lepage's Vegas integrator with both a @tf.function integrand and a numpy one and got the same performance? It might be that Lepage's integrator has so much overhead of its own that the integrand doesn't matter.
No, I mean that if we take the lepage_tf example, re-write the integrand in pure python, and limit BLAS and OMP to a single thread, we get the same performance as the tf call. So I think tf translates the python instructions into tf instructions before executing the code.
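For reference, this is roughly how the thread pinning can be done before launching the benchmark (the environment variables are the usual OpenMP/OpenBLAS/MKL ones; the script name is hypothetical):

```shell
# Pin BLAS and OpenMP to a single thread so the pure-python/numpy
# integrand cannot parallelise behind our back during the comparison.
export OMP_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1
export MKL_NUM_THREADS=1
# python lepage_tf.py   # hypothetical name for the Lepage example script
```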
Ahhh, that's what you meant by the Lepage example. From what I have read, that is what I would expect, yes: tensorflow will translate anything it knows how to translate into tf ops. This also applies to input arguments; as far as I understood from the documentation, a call with a python integer is compiled as a call with a tensorflow constant for that given integer.
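A small sketch of that documented behaviour: python scalars are baked in as constants, so each distinct value forces @tf.function to trace (compile) a new concrete function, while a Tensor argument is traced once and reused (the integrand here is a toy stand-in, not the Lepage one):

```python
import tensorflow as tf

trace_count = 0

@tf.function
def integrand(x):
    # Python-level side effects only run while tracing, so this counter
    # records how many concrete functions TensorFlow has compiled.
    global trace_count
    trace_count += 1
    return x * x

integrand(1.0)                # python scalar -> traced with 1.0 as a constant
integrand(2.0)                # new scalar value -> retraced
integrand(tf.constant(1.0))   # Tensor argument -> traced once per dtype/shape
integrand(tf.constant(2.0))   # same TensorSpec -> the trace is reused
print(trace_count)            # two scalar traces + one tensor trace
```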
Ok, then we can close this issue; it was not 100% clear to me.
Another point I am trying to understand clearly is the relationship between a pure numpy integrand (without @tf.function) and the tensorflow integrand. I did some tests with the lepage example and the performance of the two looks identical; in particular, the CPU usage of both runs is quite similar.
I have also tried limiting the numpy threads to 1 and observed no performance deterioration, so I have the suspicion that tf converts the objects to tensors when numpy is mixed with @tf.function calls.
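That suspicion matches what a quick check shows: a numpy array passed to a @tf.function is converted to a Tensor before the traced code runs, so the arithmetic executes as tf ops and the result comes back as a Tensor, not an ndarray (the integrand below is a toy example, not the Lepage one):

```python
import numpy as np
import tensorflow as tf

@tf.function
def integrand(x):
    # By the time the trace runs, the numpy input has become a Tensor
    # (float64, numpy's default dtype), so everything here is a tf op.
    tf.debugging.assert_type(x, tf.float64)
    return tf.reduce_sum(x * x)

x_np = np.linspace(0.0, 1.0, 5)   # a plain numpy array
result = integrand(x_np)
# result is a tf.Tensor, not a numpy ndarray
```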