This is only for those who want to lower VRAM usage a bit (to be able to use a larger batch size) when using TensorRT with sd webui, at the cost of some accuracy.
As far as I can tell, the fp8 optimization (currently available in the sd webui dev branch, under Settings/Optimizations) only slightly reduces VRAM usage when used with TensorRT (from 10.9 GB to 9.6 GB for a certain SDXL model, compared with a drop from 9.7 GB to 6.8 GB without TensorRT), because the TensorRT side still stores data in fp16. VRAM usage would decrease further if TensorRT also had an option to store data in fp8.
LoRA can't be converted to TensorRT under fp8 due to a dtype cast issue. Here's a very temporary and dirty fix to get it working (in the dev branch).
The changes are in `model_helper.py` at line 178 and in `exporter.py` at lines 80 and 82. The idea is to add `.half()` to convert the fp8 tensors to fp16 so the calculation can be done with the other fp16 values. Also note that "Cache fp16 weight for LoRA" in Settings/Optimizations doesn't work with this fix, so you need to apply more weight to the fp8 LoRA to achieve the same effect as the LoRA in fp16.
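For illustration only, here is a minimal, self-contained sketch of the kind of change (hypothetical tensor names; not the extension's actual code at those lines): wherever an fp8-stored LoRA weight meets an fp16 weight, cast the fp8 side with `.half()` first.

```python
import torch

# Minimal sketch of the dtype problem and the .half() workaround.
# (Hypothetical tensors -- not the actual code in model_helper.py / exporter.py.)
base_weight = torch.randn(4, 4, dtype=torch.float16)    # model weight in fp16
lora_delta = torch.randn(4, 4).to(torch.float8_e4m3fn)  # LoRA weight stored in fp8

# base_weight + lora_delta                   # fails: fp8/fp16 type promotion is not supported
patched = base_weight + lora_delta.half()    # .half() casts fp8 -> fp16 before the math
```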
By the way, if you check out the sd webui dev branch, which uses cu121, you can change the tensorrt package to `9.0.1.post12.dev4` or the newer `9.2.0.post12.dev5` for CUDA 12. (Building the wheel for `9.1.0.post12.dev4` failed on my PC, so I don't suggest it.) Make sure to modify `install.py` to update the version number. (TensorRT still works even if you don't change it.)
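If it helps, here is a rough sketch of what the pinned version in the extension's `install.py` could look like after such a change. The real file's structure may differ, so treat this only as an assumption-laden example; TensorRT pre-release wheels are normally pulled from NVIDIA's pip index.

```python
# Hypothetical sketch -- the actual install.py of the TensorRT extension may be
# organized differently; only the pinned version string is the point here.
import launch  # sd webui's launch helpers (is_installed / run_pip)

TRT_VERSION = "9.2.0.post12.dev5"  # cu12 wheel; 9.0.1.post12.dev4 also works

if not launch.is_installed("tensorrt"):
    # Pre-release TensorRT wheels are hosted on NVIDIA's package index.
    launch.run_pip(
        f"install --pre --extra-index-url https://pypi.nvidia.com tensorrt=={TRT_VERSION}",
        "tensorrt",
    )
```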