rentruewang / koila

Prevent PyTorch's `CUDA error: out of memory` in just 1 line of code.
https://koila.rentruewang.com
MIT License

Major overhaul #18

Open rentruewang opened 2 years ago

rentruewang commented 2 years ago

I'm planning a major overhaul to simplify the code and make it more scalable.

Currently, this project relies heavily on runtime checks to determine whether an object is a LazyTensor or a torch.Tensor. This is not only difficult to maintain, but can also negatively affect performance.
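To illustrate the maintenance cost, here is a minimal, hypothetical sketch of that check-heavy pattern. The names (`add`, `_to_tensor`) and the simplified `LazyTensor` are illustrative assumptions, not koila's actual code:

```python
# Hypothetical sketch of the pattern being replaced: every operation has to
# branch on the runtime type of its inputs before doing anything useful.
from __future__ import annotations

import torch


class LazyTensor:
    """Simplified stand-in for a lazy wrapper; stores a thunk instead of a value."""

    def __init__(self, run) -> None:
        self._run = run

    def run(self) -> torch.Tensor:
        return self._run()


def _to_tensor(x) -> torch.Tensor:
    # Unwrap lazily wrapped values, pass plain tensors through.
    return x.run() if isinstance(x, LazyTensor) else x


def add(a, b):
    # Each op needs isinstance checks to decide whether to defer or execute,
    # which is brittle to maintain and adds overhead on every call.
    if isinstance(a, LazyTensor) or isinstance(b, LazyTensor):
        return LazyTensor(lambda: _to_tensor(a) + _to_tensor(b))
    return a + b
```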

I'm working on a new wrapper for torch.Tensor that matches LazyTensor's API but executes immediately, for internal use.

I'm also modifying LazyTensor's API to match torch.Tensor's, as sketched below.
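Roughly, the direction could look like the following sketch: a hypothetical eager wrapper exposes the same torch.Tensor-like interface as LazyTensor, so internal code dispatches through a shared protocol instead of isinstance checks. All names here (`TensorLike`, `EagerTensor`, the `add`/`run` methods) are illustrative assumptions, not the actual koila API:

```python
# Hedged sketch of the overhaul's direction, using hypothetical names:
# both wrappers implement the same torch.Tensor-like interface, so callers
# no longer need to branch on the wrapper type.
from __future__ import annotations

from typing import Protocol

import torch


class TensorLike(Protocol):
    """Common interface mirroring a slice of torch.Tensor's API."""

    def add(self, other: "TensorLike") -> "TensorLike": ...
    def run(self) -> torch.Tensor: ...


class EagerTensor:
    """Wraps a torch.Tensor and executes operations immediately (internal use)."""

    def __init__(self, data: torch.Tensor) -> None:
        self.data = data

    def add(self, other: "EagerTensor") -> "EagerTensor":
        return EagerTensor(self.data + other.data)

    def run(self) -> torch.Tensor:
        return self.data


class LazyTensor:
    """Records the operation and only materializes when run() is called."""

    def __init__(self, thunk) -> None:
        self._thunk = thunk

    def add(self, other: "LazyTensor") -> "LazyTensor":
        return LazyTensor(lambda: self.run() + other.run())

    def run(self) -> torch.Tensor:
        return self._thunk()
```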

I'll be using this issue to track my progress.

Closes: #22 Closes: #25