**Open** · eserie opened 3 years ago
Well, I would say it's mostly a bug in TensorFlow, because it doesn't support `ndim` in compiled functions:
Calling this fails:

```python
@tf.function
def f(a):
    return a.ndim
```
Calling this works:

```python
@tf.function
def f(a):
    return len(a.shape)
```
We run into this problem because our `matmul` implementation does an additional dimensionality check using `ndim`.
@eserie I filed a bug in the TensorFlow repository. Let's see what they think. If you need a temporary workaround, you can comment out the shape checks that use `ndim`, or replace them with calls to `shape`.
Thank you very much for posting the issue in the TensorFlow repository! Do you think it could be worth having the `shape`-based implementation in eagerpy, in order to be compatible with more versions of TensorFlow? We could come back to the canonical `ndim` implementation later, once it's fixed in TF.
Another remark: if compilation makes sense in eagerpy, we could make it available in a universal way through a `compile=True` argument in the `eager_function` proposed in #34. What do you think about that?
I have to say, I haven't really thought enough about compilation and I am not sure it can be abstracted away enough to unify it between TensorFlow, PyTorch, and JAX. I think it could be interesting, but it requires careful testing of all the special cases and limitations.
Thanks to #40 this is resolved, but I'll leave this issue open for now, while the TensorFlow project discusses what to do about it.
Let's consider a simple compiled function in TensorFlow.
The following snippet works.
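The original snippet is not reproduced here; the following is a hypothetical sketch of what such a plain-TensorFlow compiled function might look like (the function name `f` and the shapes are our assumptions):

```python
import tensorflow as tf

# A simple matmul wrapped in tf.function: this compiles and runs fine
# because it only uses native TensorFlow ops.
@tf.function
def f(x, y):
    return tf.matmul(x, y)

x = tf.ones((2, 3))
y = tf.ones((3, 4))
print(f(x, y).shape)  # (2, 4)
```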
However, its "universal" version:
does not work and raises the error:
(It works, however, if we comment out the `@tf.function` decorator.) Note that the equivalent code with JAX seems to work:
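The JAX snippet is not reproduced either; as a hedged illustration of why JAX does not hit this problem, here is a hypothetical jitted function (the name `matmul_checked` is ours) that reads `ndim` inside `jax.jit`, which JAX tracers support:

```python
import jax
import jax.numpy as jnp

@jax.jit
def matmul_checked(x, y):
    # Under jit, JAX tracers still expose .ndim as a static Python int,
    # so this dimensionality check is evaluated at trace time.
    assert x.ndim == 2 and y.ndim == 2
    return x @ y

a = jnp.ones((2, 3))
b = jnp.ones((3, 4))
print(matmul_checked(a, b).shape)  # (2, 4)
```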
Is this a problem with the integration of eagerpy with TensorFlow?