LoRA training (dev branch) does not work in Google Colab (or Paperspace), resulting in CUDA OOM.
I adapted the same scripts featured in the "official" notebook, but while that notebook works fine, my GUI adaptation results in a VRAM overflow.
There must be something inherently wrong with my code. I need to figure it out... somehow.
It may also be the root cause of the following issues: #122, #115, #112
Never mind, I accidentally used the wrong method for preparing the optimizer and LoRA layers (see the sketch below).
It was reckless of me to assume I wrote flawed code, ha-ha. One should know it's always perfect.
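
For anyone hitting the same OOM: here is a minimal sketch of the kind of mistake involved, assuming a plain PyTorch setup (the function name `build_optimizer` and its arguments are hypothetical, not this repo's actual API). If the optimizer is built over the full model's parameters instead of only the injected LoRA layers, its state (e.g. Adam moments) covers every base weight, which by itself can exhaust a Colab GPU's VRAM.

```python
# Hypothetical sketch: build the optimizer ONLY from the trainable LoRA
# parameters, after freezing the base model. Names here are illustrative.
import torch

def build_optimizer(base_model: torch.nn.Module,
                    lora_layers: torch.nn.Module,
                    lr: float = 1e-4) -> torch.optim.Optimizer:
    # Freeze every base weight so autograd allocates no grads for them.
    base_model.requires_grad_(False)
    lora_layers.requires_grad_(True)

    # Wrong (OOM-prone): torch.optim.AdamW(base_model.parameters(), lr=lr)
    # Right: optimizer state only covers the small LoRA weight set.
    trainable = [p for p in lora_layers.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=lr)
```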