So I'm running a GTX 1080 with 8 GB of VRAM. I can train textual inversions and hypernetworks no problem. Dreambooth eludes me, but that's not the point here.
I'm getting out-of-memory errors, but for the life of me I can't find any documentation on what this extension's requirements actually are. There are also very few settings for the training itself. While searching for solutions, I've seen people claim they can run this on as little as 4 GB of VRAM.
Is that true? Or is the VRAM requirement determined by the batch size or the number of images? After all, it doesn't seem to work based on steps the way every other training method does.
Can this be clarified?