rentruewang / koila

Prevent PyTorch's `CUDA error: out of memory` in just 1 line of code.
https://rentruewang.com/koila/
Apache License 2.0

KeyError: 0 #20

Open tengfeixue-victor opened 2 years ago

tengfeixue-victor commented 2 years ago

Thanks for your nice work! I wrapped my input and label with `(feat, label) = lazy(feat, label, batch=0)`. Then I met the following error when running it.

File "/home/victor/anaconda3/envs/py38_tab/lib/python3.8/site-packages/koila/lazy.py", line 504, in lazy_forward out = LazyTensor(LazyFunction(func, shape_func)(*args, *kwargs)) File "/home/victor/anaconda3/envs/py38tab/lib/python3.8/site-packages/koila/lazy.py", line 51, in __call_\ prepass = self.prepass_func(args, **kwargs) File "/home/victor/anaconda3/envs/py38_tab/lib/python3.8/site-packages/koila/prepasses.py", line 286, in tranpose batch = b.map(lambda x: {dim0: dim1, dim1: dim0}[x]) File "/home/victor/anaconda3/envs/py38_tab/lib/python3.8/site-packages/koila/interfaces.py", line 78, in map index = func(self.index) File "/home/victor/anaconda3/envs/py38_tab/lib/python3.8/site-packages/koila/prepasses.py", line 286, in batch = b.map(lambda x: {dim0: dim1, dim1: dim0}[x]) KeyError: 0

rentruewang commented 2 years ago

Hi, thanks for the report!

Did you perhaps use the transpose function? That particular function is called whenever a LazyTensor is transposed.

However, it's a little bit unclear to me what exactly went wrong as that error happens inside your model's forward pass.
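Reading the traceback, the `tranpose` prepass remaps the tracked batch dimension through `{dim0: dim1, dim1: dim0}`. If the batch dimension (index 0 in your case) is not one of the two dimensions being swapped, that dictionary has no key 0, which would explain the `KeyError: 0`. Here is the failing lookup isolated as a standalone sketch (not koila's internal code, just the expression from the traceback):

```python
# Standalone illustration of the lookup shown in the traceback above.
batch_index = 0                      # the dimension marked with batch=0
dim0, dim1 = 1, 2                    # e.g. x.transpose(1, 2) inside forward()

mapping = {dim0: dim1, dim1: dim0}   # {1: 2, 2: 1} -- no entry for 0
try:
    new_batch_index = mapping[batch_index]
except KeyError as err:
    print("KeyError:", err)          # KeyError: 0, as in the report
```

If that is indeed the cause, one possible fix is to leave the batch index unchanged when it is not one of the transposed dimensions, e.g. `mapping.get(x, x)`.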

tengfeixue-victor commented 2 years ago

Hi, thanks for your quick response! Yes, I use the transpose function. Is there any solution? Also, the squeeze function doesn't seem to work either.

rentruewang commented 2 years ago

Hi, I believe there may be some bugs not covered by testing. I'll give it a closer look as soon as I can (likely the start of next week as I have a tight schedule this week). Do you have a minimal reproducible example available?
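Something along these lines would help, even if it is rough. As a hypothetical sketch pieced together from your description (the model, shapes, and the `transpose(1, 2)` / `squeeze` calls are my guesses, not your actual code):

```python
import torch
from torch import nn
from koila import lazy


class TransposeModel(nn.Module):
    """Toy model whose forward pass transposes non-batch dimensions."""

    def forward(self, x):
        # Swap dims 1 and 2; dim 0 (the batch dim) is untouched,
        # which seems to be the case the transpose prepass mishandles.
        return x.transpose(1, 2).squeeze(-1)


model = TransposeModel()
feat = torch.randn(4, 3, 5, 1)
label = torch.randn(4, 5, 3)

(feat, label) = lazy(feat, label, batch=0)
out = model(feat)  # expected to hit the KeyError: 0 in the transpose prepass
loss = nn.functional.mse_loss(out, label)
```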