turian closed this issue 3 years ago
You can always access the underlying native tensor using the .raw
attribute (see the last line of the example):
# x should be a native tensor (see above)
# for example:
import torch
x = torch.tensor([1., 2., 3., 4., 5., 6.])
# Any native tensor can easily be turned into an EagerPy tensor
import eagerpy as ep
x = ep.astensor(x)
# Now we can perform any EagerPy operation
x = x.square()
# And convert the EagerPy tensor back into a native tensor
x = x.raw
# x will now again be a native tensor (e.g. a PyTorch tensor)
@jonasrauber Does that allow me to convert from tf1 tensors to pytorch tensors?
Sorry, it seems I have misunderstood your question. No, it doesn't. EagerPy's primary purpose is to give you the ability to write code that works with both PyTorch and TensorFlow, not to convert data from one to the other.
Having said that, your request is actually something I might be able to add.
@jonasrauber It seems to me that this kind of feature, even though it's not EagerPy's primary purpose, would give the package a lot of visibility and attract users. I only learned about EagerPy because I was googling for an answer to my question.
True, EagerPy was born at a time when this was technically impossible. Now it should be possible, I think, but I don't have a use case for it myself right now, and no one who would pay me to implement it, so it's not really at the top of my agenda at the moment.
EagerPy seems like a great project, and after an upcoming deadline we might try porting our PyTorch code to it.
Reading here about converting native PyTorch and TensorFlow GPU tensors to EagerPy tensors, could you please implement an ep.totensor method that creates a PyTorch or TensorFlow GPU tensor?
We currently have to use a mix of PyTorch and TensorFlow models to evaluate our code :(
And yes, I can get them all running on the same GPU. I just want to avoid shuttling pytorch tensors to CPU so I can evaluate them in TF models.