jonesi13 opened this issue 1 year ago
Does it correct itself if you adjust the sliders first? The graph shows the reserved peak should be only about 1 GB with HR enabled.
It does scale up and down, but the numbers are always high.
Could you provide the stats.json that should be in the extension's folder? Those are the values the extension uses for its calculations, so it could help with debugging.
I don't believe the estimates are necessarily always high, but there is definitely something weird going on.
In the screenshots below, I only change the batch size. My VRAM estimate peaks at batch size 8, then actually starts to go down as I increase the batch size further, and becomes negative past a batch size of 15.
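A guess at what could produce this shape (purely my assumption, not taken from the extension's actual code): if the estimator fits a low-degree polynomial to a few measured (batch size, VRAM) points from stats.json and then extrapolates, a quadratic with a negative leading coefficient will peak and eventually go negative, exactly like the behavior above. A minimal sketch with made-up sample points:

```python
import numpy as np

# Hypothetical measured stats: (batch_size, VRAM in MB).
# These numbers are invented for illustration only.
measured_bs = np.array([1, 2, 4])
measured_mb = np.array([3000, 5500, 9500])

# Fit a quadratic; np.polyfit returns coefficients, highest degree first.
coeffs = np.polyfit(measured_bs, measured_mb, deg=2)

def estimate(batch_size):
    """Extrapolate VRAM usage from the fitted polynomial."""
    return np.polyval(coeffs, batch_size)

# Far outside the measured range, the curve peaks and then turns
# negative, matching the reported estimates.
for bs in (1, 8, 16, 32):
    print(bs, round(float(estimate(bs)), 1))
```

If something like this is going on, clamping the fit to the measured range (or forcing a monotonic model) would avoid the negative estimates.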
I found something similar when changing image resolution (hires fix off, batch size = 1). Keeping width fixed at 512 and changing only height, the VRAM estimate decreases as height goes from 64 to 240, by as much as 1 GB. Then the estimate goes up until I reach a height of 1680, where it starts to go down again.
So either I don't understand the function, or it's just confusing because the estimate applies more to LoRA generation for me. With LoRA my limit is at batch size 16; more than that and I get OOM. But with Text2Image I can easily set a batch size of 60 and I am at 20 GB VRAM. Yet the estimate says:
Estimated VRAM usage: 659175.81 MB / 24576 MB (2682.19%) (3960 MB system + 595650.74 MB used) <<< without Hires
Estimated VRAM usage: 620408.34 MB / 24576 MB (2524.45%) (3960 MB system + 560407.58 MB used) <<< with Hires
Estimated VRAM usage: 12488.11 MB / 24576 MB (50.81%) (3960 MB system + 7752.83 MB used) <<< Hires with Batchsize 1 (one)
Yeah, it's not really accurate.
An 8 GB estimate for a single image.