rentruewang / koila

Prevent PyTorch's `CUDA error: out of memory` in just 1 line of code.
https://koila.rentruewang.com
MIT License

Hi! We are building LazyTensor too! #8

Closed · Krovatkin closed this issue 2 years ago

Krovatkin commented 2 years ago

Hi! I'm with the PyTorch team, and it looks like we are also building something similar to koila here: https://github.com/pytorch/pytorch/tree/lazy_tensor_staging. We would love to connect and learn more about your work! If you are interested, could you please reply to this issue and drop me a line at k o r o v a i k o n AT gmail.com (no spaces, obviously)?

rentruewang commented 2 years ago

Hi, thanks for reaching out! I would love to share anything I know about this work! It would be awesome if your team implemented lazy tensors directly in the library itself.

I saw shape inference in your source code. Since I haven't had time to read through all of it, I'm wondering: do our LazyTensors work the same way, that is, by storing unevaluated functions rather than evaluated data?
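
For reference, here is a minimal sketch of the "store an unevaluated function" approach mentioned above. The `LazyTensor` class and its `wrap`/`run` methods are illustrative assumptions, not koila's or PyTorch's actual API: the wrapper keeps a thunk plus an eagerly inferred shape, and only materializes the underlying tensor when asked.

```python
# Hypothetical sketch of a lazy tensor that stores an unevaluated function
# and infers its shape up front; not koila's or PyTorch's real implementation.
from __future__ import annotations

from typing import Callable

import torch


class LazyTensor:
    def __init__(self, thunk: Callable[[], torch.Tensor], shape: torch.Size) -> None:
        self._thunk = thunk                        # unevaluated computation
        self.shape = shape                         # shape known without running the thunk
        self._value: torch.Tensor | None = None    # cache for the materialized result

    @classmethod
    def wrap(cls, tensor: torch.Tensor) -> "LazyTensor":
        # Wrap an already-evaluated tensor in a trivial thunk.
        return cls(lambda: tensor, tensor.shape)

    def run(self) -> torch.Tensor:
        # Evaluate (and cache) the underlying computation only on demand.
        if self._value is None:
            self._value = self._thunk()
        return self._value

    def __matmul__(self, other: "LazyTensor") -> "LazyTensor":
        # Infer the result shape eagerly; defer the actual matmul.
        shape = torch.Size((self.shape[0], other.shape[1]))
        return LazyTensor(lambda: self.run() @ other.run(), shape)


if __name__ == "__main__":
    a = LazyTensor.wrap(torch.randn(4, 3))
    b = LazyTensor.wrap(torch.randn(3, 5))
    c = a @ b                # no matmul has run yet
    print(c.shape)           # torch.Size([4, 5]), known before evaluation
    print(c.run().shape)     # materializes the result now
```

The key design point under discussion is visible here: shape information flows through the graph eagerly, while the data itself stays as an unevaluated closure until `run()` is called.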